Fast auto-focus scheme based on optical defocus fitting model
NASA Astrophysics Data System (ADS)
Wang, Yeru; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting; Cen, Min
2018-04-01
An optical defocus fitting model-based (ODFM) auto-focus scheme is proposed. Considering the basic optical defocus principle, the optical defocus fitting model is derived to approximate the potential-focus position. With this accurate modelling, the proposed auto-focus scheme moves the stepping motor toward the focal plane more accurately and rapidly. Two fitting positions are first determined for an arbitrary initial stepping motor position. Three images (the initial image and two fitting images) at these positions are then collected to estimate the potential-focus position based on the proposed ODFM method. Around the estimated potential-focus position, two reference images are recorded. The auto-focus procedure is then completed by processing these two reference images and the potential-focus image to confirm the in-focus position using a contrast-based method. Experimental results show that the proposed scheme can complete auto-focus within only 5 to 7 steps, with good performance even under low-light conditions.
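The abstract does not give the fitting model's closed form; a minimal sketch of the idea is to fit a parabola through three (motor position, sharpness) samples and take its vertex as the estimated in-focus position, with a variance-based sharpness metric standing in for the paper's contrast measure:

```python
import numpy as np

def sharpness(image):
    """Simple contrast metric (image variance) -- a stand-in for the
    paper's contrast measure, which the abstract does not specify."""
    return float(np.var(image))

def estimate_focus_position(positions, scores):
    """Fit a parabola through three (motor position, sharpness) samples
    and return its vertex as the estimated in-focus position."""
    a, b, c = np.polyfit(positions, scores, 2)
    if a >= 0:  # no maximum: fall back to the best sampled position
        return positions[int(np.argmax(scores))]
    return -b / (2.0 * a)
```

The motor would then be driven to the estimated position and the result confirmed with the contrast-based check on the two reference images, as the abstract describes.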
Continuous track paths reveal additive evidence integration in multistep decision making.
Buc Calderon, Cristian; Dewulf, Myrtille; Gevers, Wim; Verguts, Tom
2017-10-03
Multistep decision making pervades daily life, but its underlying mechanisms remain obscure. We distinguish four prominent models of multistep decision making, namely serial stage, hierarchical evidence integration, hierarchical leaky competing accumulation (HLCA), and probabilistic evidence integration (PEI). To empirically disentangle these models, we design a two-step reward-based decision paradigm and implement it in a reaching task experiment. In a first step, participants choose between two potential upcoming choices, each associated with two rewards. In a second step, participants choose between the two rewards selected in the first step. Strikingly, as predicted by the HLCA and PEI models, the first-step decision dynamics were initially biased toward the choice representing the highest sum/mean before being redirected toward the choice representing the maximal reward (i.e., initial dip). Only HLCA and PEI predicted this initial dip, suggesting that first-step decision dynamics depend on additive integration of competing second-step choices. Our data suggest that potential future outcomes are progressively unraveled during multistep decision making.
Vieira, J; Cunha, M C
2011-01-01
This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points to improve solution efficiency. The set of nonlinear constraints (termed complicating constraints) that makes the solution of the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in one single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful for cases where the computation time is a critical factor in obtaining an optimized solution in due time.
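A sketch of the two-step scheme with scipy (function and constraint names are illustrative, not the authors' formulation):

```python
from scipy.optimize import minimize

def two_step_solve(objective, x0, simple_cons, complicating_cons):
    """Step 1: solve the relaxed model with the complicating constraints
    dropped. Step 2: re-solve the complete model warm-started at the
    step-1 solution. Constraints use scipy's dict format."""
    step1 = minimize(objective, x0, constraints=simple_cons, method="SLSQP")
    step2 = minimize(objective, step1.x,
                     constraints=list(simple_cons) + list(complicating_cons),
                     method="SLSQP")
    return step2
```

The warm start from step 1 is what delivers the reported reduction in computation time: the full model begins near a feasible, near-optimal point instead of an arbitrary guess.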
A quantum dynamical study of the He++2He→He2++He reaction
NASA Astrophysics Data System (ADS)
Xie, Junkai; Poirier, Bill; Gellene, Gregory I.
2003-11-01
The temperature-dependent rate of the He++2He→He2++He three-body association reaction is studied using two complementary quantum dynamical models. Model I presumes a two-step, reverse Lindemann mechanism, where the intermediate energized complex, He2+*, is interpreted as the rotational resonance states of He2+. The energy and width of these resonances are determined via "exact" quantum calculation using highly accurate potential-energy curves. Model II uses an alternate quantum rate expression as the thermal average of the cumulative recombination probability, N(E). This microcanonical quantity is computed approximately, over the He2+ space only, with the third-body interaction modeled using a special type of absorbing potential. Because Model II implicitly incorporates both the two-step reverse Lindemann mechanism and a one-step, reverse collision-induced dissociation mechanism, the relative importance of the two formation mechanisms can be estimated by comparing the Model I and Model II results. For T<300 K, the reaction is found to be dominated by the two-step mechanism, and a formation rate in good agreement with the available experimental results is obtained with essentially no adjustable parameters in the theory. Interestingly, a nonmonotonic He2+ formation rate is observed, with a maximum identified near 25 K. This maximum is associated with just two reaction intermediate resonance states, the lowest-energy states that can contribute significantly to the formation kinetics.
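The "thermal average of the cumulative recombination probability" in Model II presumably takes the standard quantum rate form (our reading of the abstract, with Q_r the reactant partition function, not a formula quoted from the paper):

```latex
k(T) \;=\; \frac{1}{h\,Q_r(T)} \int_0^{\infty} N(E)\, e^{-E/k_{\mathrm{B}}T}\, \mathrm{d}E
```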
Klein tunneling in the α-T3 model
NASA Astrophysics Data System (ADS)
Illes, E.; Nicol, E. J.
2017-06-01
We investigate Klein tunneling for the α-T3 model, which interpolates between graphene and the dice lattice via the parameter α. We study transmission across two types of electrostatic interfaces: sharp potential steps and sharp potential barriers. We find both interfaces to be perfectly transparent at normal incidence for the full range of the parameter α. For other angles of incidence, we find that transmission is enhanced with increasing α. For the dice lattice, we find perfect, all-angle transmission across a potential step for incoming electrons with energy equal to half of the height of the potential step. This is analogous to the "super", all-angle transmission reported for the dice lattice for Klein tunneling across a potential barrier.
A permeation theory for single-file ion channels: one- and two-step models.
Nelson, Peter Hugo
2011-04-28
How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no quantitative comparison has yet been made. The A/D model makes a network of predictions for how the elementary steps and the channel occupancy vary with both concentration and voltage. In addition, the proposed theoretical framework suggests a new way of plotting the energetics of the simulated system using a one-dimensional permeation coordinate that uses electric potential energy as a metric for the net fractional progress through the permeation mechanism. This approach has the potential to provide a quantitative connection between atomistic simulations and permeation experiments for the first time.
NASA Astrophysics Data System (ADS)
Coutu, S.; Rota, C.; Rossi, L.; Barry, D. A.
2011-12-01
Facades are protected by paints that contain biocides as protection against degradation. These biocides are leached by rainfall (albeit at low concentrations). At the city scale, however, the surface area of building facades is significant, and leached biocides are a potential environmental risk to receiving waters. A city-scale biocide-leaching model was developed based on two main steps. In the first step, laboratory experiments on a single facade were used to calibrate and validate a 1D, two-region phenomenological model of biocide leaching. The same data set was analyzed independently by another research group who found empirically that biocide leachate breakthrough curves were well represented by a sum of two exponentials. Interestingly, the two-region model was found analytically to reproduce this functional form as a special case. The second step in the method is site-specific, and involves upscaling the validated single facade model to a particular city. In this step, (i) GIS-based estimates of facade heights and areas are deduced using the city's cadastral data, (ii) facade flow is estimated using local meteorological data (rainfall, wind direction) and (iii) paint application rates are modeled as a stochastic process based on manufacturers' recommendations. The methodology was applied to Lausanne, Switzerland, a city of about 200,000 inhabitants. Approximately 30% of the annually applied mass of biocides was estimated to be released to the environment.
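A sum of two exponentials is easy to fit directly; a minimal sketch with scipy (parameter names and the synthetic data are illustrative, not the study's values):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exponential(t, a1, k1, a2, k2):
    """Sum-of-two-exponentials form found empirically for biocide
    leachate breakthrough curves."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(0.0, 48.0, 25)                  # hours of leaching (synthetic)
c = two_exponential(t, 1.0, 0.30, 0.20, 0.02)   # synthetic concentrations
popt, _ = curve_fit(two_exponential, t, c, p0=[1.0, 0.1, 0.1, 0.01])
```

The interesting point in the abstract is that this empirical form falls out analytically from the two-region phenomenological model as a special case.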
Testing a stepped care model for binge-eating disorder: a two-step randomized controlled trial.
Tasca, Giorgio A; Koszycki, Diana; Brugnera, Agostino; Chyurlia, Livia; Hammond, Nicole; Francis, Kylie; Ritchie, Kerri; Ivanova, Iryna; Proulx, Genevieve; Wilson, Brian; Beaulac, Julie; Bissada, Hany; Beasley, Erin; Mcquaid, Nancy; Grenon, Renee; Fortin-Langelier, Benjamin; Compare, Angelo; Balfour, Louise
2018-05-24
A stepped care approach involves patients first receiving low-intensity treatment followed by higher intensity treatment. This two-step randomized controlled trial investigated the efficacy of a sequential stepped care approach for the psychological treatment of binge-eating disorder (BED). In the first step, all participants with BED (n = 135) received unguided self-help (USH) based on a cognitive-behavioral therapy model. In the second step, participants who remained in the trial were randomized either to 16 weeks of group psychodynamic-interpersonal psychotherapy (GPIP) (n = 39) or to a no-treatment control condition (n = 46). Outcomes were assessed for USH in step 1, and then for step 2 up to 6-months post-treatment using multilevel regression slope discontinuity models. In the first step, USH resulted in large and statistically significant reductions in the frequency of binge eating. Statistically significant moderate to large reductions in eating disorder cognitions were also noted. In the second step, there was no difference in change in frequency of binge eating between GPIP and the control condition. Compared with controls, GPIP resulted in significant and large improvement in attachment avoidance and interpersonal problems. The findings indicated that a second step of a stepped care approach did not significantly reduce binge-eating symptoms beyond the effects of USH alone. The study provided some evidence for the second step potentially to reduce factors known to maintain binge eating in the long run, such as attachment avoidance and interpersonal problems.
A Semi-Empirical Two Step Carbon Corrosion Reaction Model in PEM Fuel Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Alan; Colbow, Vesna; Harvey, David
2013-01-01
The cathode CL of a polymer electrolyte membrane fuel cell (PEMFC) was exposed to high potentials, 1.0 to 1.4 V versus a reversible hydrogen electrode (RHE), that are typically encountered during start up/shut down operation. While both platinum dissolution and carbon corrosion occurred, the carbon corrosion effects were isolated and modeled. The presented model separates the carbon corrosion process into two reaction steps; (1) oxidation of the carbon surface to carbon-oxygen groups, and (2) further corrosion of the oxidized surface to carbon dioxide/monoxide. To oxidize and corrode the cathode catalyst carbon support, the CL was subjected to an accelerated stress test that cycled the potential from 0.6 VRHE to an upper potential limit (UPL) ranging from 0.9 to 1.4 VRHE at varying dwell times. The reaction rate constants and specific capacitances of carbon and platinum were fitted by evaluating the double layer capacitance (Cdl) trends. Carbon surface oxidation increased the Cdl due to increased specific capacitance for carbon surfaces with carbon-oxygen groups, while the second corrosion reaction decreased the Cdl due to loss of the overall carbon surface area. The first oxidation step differed between carbon types, while both reaction rate constants were found to have a dependency on UPL, temperature, and gas relative humidity.
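The two-step scheme maps naturally onto a pair of coupled rate equations; a sketch assuming first-order kinetics in each step (the paper's actual rate laws and their UPL/temperature/humidity dependencies are not given in the abstract):

```python
from scipy.integrate import solve_ivp

def carbon_corrosion(t, y, k1, k2):
    """Two-step scheme, first order in each step for illustration:
    clean carbon --k1--> surface oxide --k2--> CO/CO2 (lost area).
    y = [clean carbon area, oxidized carbon area], as fractions."""
    clean, oxide = y
    return [-k1 * clean, k1 * clean - k2 * oxide]

sol = solve_ivp(carbon_corrosion, (0.0, 1e4), [1.0, 0.0],
                args=(1e-3, 1e-4), dense_output=True)
# Cdl could then be modeled as c_clean*clean + c_oxide*oxide, with the
# specific capacitances fitted to the measured double-layer capacitance.
```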
NASA Astrophysics Data System (ADS)
Kawamura, M.; Umeda, K.; Ohi, T.; Ishimaru, T.; Niizato, T.; Yasue, K.; Makino, H.
2007-12-01
We have developed a formal evaluation method to assess the potential impact of natural phenomena (earthquakes and faulting; volcanism; uplift, subsidence, denudation and sedimentation; climatic and sea-level changes) on a High Level Radioactive Waste (HLW) Disposal System. In 2000, we had developed perturbation scenarios in a generic and conservative sense and illustrated the potential impact on a HLW disposal system. As a result of the development of perturbation scenarios, two points were highlighted for consideration in subsequent work: improving the scenarios from the viewpoints of realism, transparency, traceability and consistency, and avoiding extreme conservatism. We have thus developed a new procedure for describing such perturbation scenarios based on further studies of the characteristics of these natural perturbation phenomena in Japan. The approach to describing the perturbation scenarios is developed in five steps. Step 1: Description of potential processes of phenomena and their impacts on the geological environment. Step 2: Characterization of potential changes of the geological environment in terms of T-H-M-C (Thermal - Hydrological - Mechanical - Chemical) processes; the focus is on specific T-H-M-C parameters that influence geological barrier performance, utilizing the input from Step 1. Step 3: Classification of potential influences, based on similarity of T-H-M-C perturbations; this leads to development of perturbation scenarios to serve as a basis for consequence analysis. Step 4: Establishing models and parameters for performance assessment. Step 5: Calculation and assessment. This study focuses on identifying key T-H-M-C processes associated with perturbations at Step 2. This framework has two advantages. The first is assuring maintenance of traceability during the scenario construction process, facilitating the production and structuring of suitable records. The second is providing effective elicitation and organization of information from a wide range of earth-science investigations within a performance assessment context. In this framework, scenario development proceeds in a stepwise manner, to ensure clear identification of the impact of processes associated with these phenomena on a HLW disposal system. Output is organized to create credible scenarios with the required transparency, consistency, traceability and adequate conservatism. In this presentation, the potential impact of natural phenomena from the viewpoint of performance assessment for HLW disposal will be discussed and modeled using the approach.
TRUST84. Sat-Unsat Flow in Deformable Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narasimhan, T.N.
1984-11-01
TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
Two-body potential model based on cosine series expansion for ionic materials
Oda, Takuji; Weber, William J.; Tanigawa, Hisashi
2015-09-23
We examine a method to construct a two-body potential model for ionic materials with a Fourier series basis. For this method, the coefficients of cosine basis functions are uniquely determined by solving simultaneous linear equations to minimize the sum of weighted mean square errors in energy, force and stress, where first-principles calculation results are used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors appropriately converge with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series and demonstrates that this potential virtually provides the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement over a broad range of energies and forces with first-principles calculations should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
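Because the model is linear in the cosine coefficients, the fit reduces to (weighted) linear least squares; a sketch for the energy-only part (the paper also includes force and stress equations; basis and names here are illustrative):

```python
import numpy as np

def fit_cosine_potential(r, energies, n_terms, r_cut, weights=None):
    """Least-squares fit of V(r) = sum_k c_k * cos(k*pi*r/r_cut) to
    reference energies from first-principles calculations."""
    k = np.arange(n_terms)
    A = np.cos(np.outer(r, k) * np.pi / r_cut)   # design matrix
    b = np.asarray(energies, dtype=float)
    if weights is not None:                       # weighted MSE
        A = A * np.asarray(weights)[:, None]
        b = b * np.asarray(weights)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

Truncating the series at `n_terms` is exactly the truncation whose convergence the abstract discusses.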
Quadratic adaptive algorithm for solving cardiac action potential models.
Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing
2016-10-01
An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed new method chooses very fine time steps in the peak region, but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set to be greater than 0.1 ms. In contrast, our method can adjust the time step size automatically, and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential.
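A sketch of the step-selection idea: pick the largest dt for which the quadratic prediction of the membrane-potential change stays within a tolerance (the el and tsr refinements described above are omitted; names and defaults are illustrative):

```python
import numpy as np

def quadratic_time_step(v1, v2, dv_max, dt_min=1e-3, dt_max=1.0):
    """Largest dt with |v1*dt + 0.5*v2*dt^2| <= dv_max, where v1 and v2
    are the first and second time derivatives of the membrane potential.
    The first positive root of the quadratic is the boundary of the
    admissible interval, since the predicted change starts at zero."""
    roots = []
    for target in (dv_max, -dv_max):
        r = np.roots([0.5 * v2, v1, -target])     # 0.5*v2*dt^2 + v1*dt = target
        roots += [x.real for x in r if abs(x.imag) < 1e-12 and x.real > 0]
    dt = min(roots) if roots else dt_max
    return float(np.clip(dt, dt_min, dt_max))
```

This yields fine steps where the potential changes rapidly (the upstroke and peak) and coarse steps in the smooth plateau and diastolic regions, matching the behavior the abstract reports.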
Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari
2017-09-01
Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages has limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations to the ignorable missingness assumption and the implications relative to health outcomes.
NASA Astrophysics Data System (ADS)
Liu, L.; Du, L.; Liao, Y.
2017-12-01
Based on the ensemble hindcast dataset of CSM1.1m from the National Climate Center (NCC), China Meteorological Administration (CMA), Bayesian merging models and a two-step statistical model are developed and employed to predict monthly grid/station precipitation in the Huaihe River basin, China, during summer at lead times of 1 to 3 months. The hindcast datasets span the period 1991 to 2014. The skill of the two models is evaluated using the area under the ROC curve (AUC) in a leave-one-out cross-validation framework, and is compared to the skill of CSM1.1m. CSM1.1m has the highest skill for summer precipitation from April and the lowest from May, and has the highest skill for precipitation in June but the lowest for precipitation in July. Compared with the raw outputs of the climate model, some schemes of the two approaches have higher skill for predictions from March and May, but almost all schemes have lower skill for predictions from April. Compared to the two-step approach, one sampling scheme of the Bayesian merging approach has higher skill for predictions from March, but lower skill from May. The results suggest that there is potential to apply the two statistical models for monthly summer precipitation forecasts from March and from May over the Huaihe River basin, but the CSM1.1m forecast is preferable from April. Finally, the summer runoff during 1991 to 2014 is simulated with a hydrological model using the climate hindcasts of CSM1.1m and the two statistical models.
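Leave-one-out AUC is straightforward to compute for any of the candidate models; a generic sketch (the fit/predict callables, and framing monthly precipitation as a binary above/below-normal event, are our assumptions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def loo_cv_auc(fit, predict_prob, X, y):
    """Leave-one-out cross-validated AUC. fit(X, y) returns a trained
    model; predict_prob(model, x_row) returns the event probability for
    one held-out sample; y holds binary event labels (e.g. above-normal
    monthly precipitation)."""
    probs = np.empty(len(y), dtype=float)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = fit(X[mask], y[mask])
        probs[i] = predict_prob(model, X[i:i + 1])
    return roc_auc_score(y, probs)
```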
The current matrix elements from HAL QCD method
NASA Astrophysics Data System (ADS)
Watanabe, Kai; Ishii, Noriyoshi
2018-03-01
The HAL QCD method constructs a potential (the HAL QCD potential) that reproduces the NN scattering phase shift faithfully to QCD. The HAL QCD potential is obtained from QCD by eliminating the degrees of freedom of quarks and gluons and leaving only two particular hadrons. Therefore, in the effective quantum mechanics of two nucleons defined by the HAL QCD potential, the conserved current consists not only of the nucleon current but also of an extra current originating from the potential (the two-body current). Though the form of the two-body current is closely related to the potential, it is not straightforward to extract the former from the latter. In this work, we derive the current matrix element formula in the quantum mechanics defined by the HAL QCD potential. As a first step, we focus on the non-relativistic case. To give an explicit example, we consider a second-quantized non-relativistic two-channel coupling model which we refer to as the original model. From the original model, the HAL QCD potential for the open channel is constructed by eliminating the closed channel in the elastic two-particle scattering region. The current matrix element formula is derived by demanding that the effective quantum mechanics defined by the HAL QCD potential respond to the external field in the same way as the original two-channel coupling model.
Li, Jining; Kosugi, Tomoya; Riya, Shohei; Hashimoto, Yohey; Hou, Hong; Terada, Akihiko; Hosomi, Masaaki
2018-01-01
Leaching of hazardous trace elements from excavated urban soils during construction of cities has received considerable attention in recent years in Japan. A new concept, the pollution potential leaching index (PPLI), was applied to assess the risk of arsenic (As) leaching from excavated soils. Sequential leaching tests (SLT) with two liquid-to-solid (L/S) ratios (10 and 20 L kg-1) were conducted to determine the PPLI values, which represent the critical cumulative L/S ratios at which the average As concentrations in the cumulative leachates are reduced to critical values (10 or 5 µg L-1). Two models (a logarithmic function model and an empirical two-site first-order leaching model) were compared to estimate the PPLI values. The fractionations of As before and after SLT were extracted according to a five-step sequential extraction procedure. Ten alkaline excavated soils were obtained from different construction projects in Japan. Although their total As contents were low (from 6.75 to 79.4 mg kg-1), the As leaching was not negligible. Different L/S ratios at each step of the SLT had little influence on the cumulative As release or PPLI values. Experimentally determined PPLI values were in agreement with those from model estimations. A five-step SLT with an L/S of 10 L kg-1 at each step, combined with logarithmic function fitting, was suggested for easy estimation of PPLI. Results of the sequential extraction procedure showed that large portions of the more labile As fractions (non-specifically and specifically sorbed fractions) were removed during long-term leaching, and so were small, but non-negligible, portions of strongly bound As fractions.
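Under the logarithmic model, cumulative release is M(L/S) = a·ln(L/S) + b, and the PPLI is the cumulative L/S at which the average concentration M(L/S)/(L/S) falls to the critical value; a sketch (parameter values are invented for illustration; a and b would come from fitting the SLT data, e.g. with scipy.optimize.curve_fit):

```python
import numpy as np
from scipy.optimize import brentq

def log_model(ls, a, b):
    """Logarithmic fit to cumulative As release (ug/kg) vs cumulative L/S."""
    return a * np.log(ls) + b

def ppli(a, b, c_crit, ls_max=1e4):
    """Critical cumulative L/S where the average leachate concentration,
    cumulative release / cumulative L/S, drops to c_crit (e.g. 10 ug/L).
    Assumes the average concentration starts above c_crit and decreases."""
    f = lambda ls: log_model(ls, a, b) / ls - c_crit
    return brentq(f, 1.0, ls_max)

print(ppli(a=120.0, b=40.0, c_crit=10.0))   # illustrative parameter values
```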
Multistep Model of Cervical Cancer: Participation of miRNAs and Coding Genes
López, Angelica Judith Granados; López, Jesús Adrián
2014-01-01
Aberrant miRNA expression is well recognized as an important step in the development of cancer. Close to 70 microRNAs (miRNAs) have been implicated in cervical cancer up to now; nevertheless, it is unknown whether aberrant miRNA expression causes the onset of cervical cancer. One of the best ways to address this issue is through a multistep model of carcinogenesis. In the progression of cervical cancer there are three well-established steps to reach cancer that we used in the model proposed here. The first step of the model comprises the gene changes that occur in normal cells to be transformed into immortal cells (CIN 1), the second comprises immortal cell changes to tumorigenic cells (CIN 2), the third step includes cell changes to increase tumorigenic capacity (CIN 3), and the final step covers tumorigenic changes to carcinogenic cells. Altered miRNAs and their target genes are located in each one of the four steps of the multistep model of carcinogenesis. miRNA expression has shown discrepancies across different studies; therefore, in this model we include only miRNAs with concordant results in at least two studies. The present model provides useful insight for the study of potential prognostic, diagnostic, and therapeutic miRNAs. PMID:25192291
Space station crew safety: Human factors interaction model
NASA Technical Reports Server (NTRS)
Cohen, M. M.; Junge, M. K.
1985-01-01
A model of the various human factors issues and interactions that might affect crew safety is developed. The first step systematically addressed the central question: How is this space station different from all other spacecraft? A wide range of possible issues was identified and researched. Five major topics of human factors issues that interact with crew safety emerged: Protocols, Critical Habitability, Work Related Issues, Crew Incapacitation and Personal Choice. Second, an interaction model was developed to show some degree of cause and effect between objective environmental or operational conditions and the creation of potential safety hazards. The intermediary steps between these two extremes of causality were the effects on human performance and the results of degraded performance. The model contains three milestones: stressor, human performance (degraded) and safety hazard threshold. Between these milestones are two countermeasure intervention points. The first opportunity for intervention is the countermeasure against stress. If this countermeasure fails, performance degrades. The second opportunity for intervention is the countermeasure against error. If this second countermeasure fails, the threshold of a potential safety hazard may be crossed.
Multi-modal two-step floating catchment area analysis of primary health care accessibility.
Langford, Mitchel; Higgs, Gary; Fry, Richard
2016-03-01
Two-step floating catchment area (2SFCA) techniques are popular for measuring potential geographical accessibility to health care services. This paper proposes methodological enhancements to increase the sophistication of the 2SFCA methodology by incorporating both public and private transport modes using dedicated network datasets. The proposed model yields separate accessibility scores for each modal group at each demand point to better reflect the differential accessibility levels experienced by each cohort. An empirical study of primary health care facilities in South Wales, UK, is used to illustrate the approach. Outcomes suggest the bus-riding cohort of each census tract experience much lower accessibility levels than those estimated by an undifferentiated (car-only) model. Car drivers' accessibility may also be misrepresented in an undifferentiated model because they potentially profit from the lower demand placed upon service provision points by bus riders. The ability to specify independent catchment sizes for each cohort in the multi-modal model allows aspects of preparedness to travel to be investigated.
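A minimal single-mode 2SFCA sketch (binary catchments; array names are illustrative); the paper's multi-modal variant would repeat this per cohort with mode-specific travel times, demands and catchment sizes:

```python
import numpy as np

def two_sfca(supply, demand, travel_time, t_max):
    """Basic 2SFCA. Step 1: for each facility, compute the supply-to-demand
    ratio over the demand points within its catchment. Step 2: for each
    demand point, sum the ratios of all facilities it can reach.
    supply: (n_fac,); demand: (n_dem,); travel_time: (n_fac, n_dem)."""
    within = travel_time <= t_max
    demand_in_catchment = (within * demand).sum(axis=1)
    ratios = supply / np.where(demand_in_catchment > 0,
                               demand_in_catchment, np.inf)
    return (within * ratios[:, None]).sum(axis=0)   # accessibility per demand point
```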
Quantum chemical modeling of enzymatic reactions: the case of 4-oxalocrotonate tautomerase.
Sevastik, Robin; Himo, Fahmi
2007-12-01
The reaction mechanism of 4-oxalocrotonate tautomerase (4-OT) is studied using the density functional theory method B3LYP. This enzyme catalyzes the isomerisation of unconjugated alpha-keto acids to their conjugated isomers. Two different quantum chemical models of the active site are devised and the potential energy curves for the reaction are computed. The calculations support the proposed reaction mechanism in which Pro-1 acts as a base to shuttle a proton from the C3 to the C5 position of the substrate. The first step (proton transfer from C3 to proline) is shown to be the rate-limiting step. The energy of the charge-separated intermediate (protonated proline-deprotonated substrate) is calculated to be quite low, in accordance with measured pKa values. The results of the two models are used to evaluate the methodology employed in modeling enzyme active sites using quantum chemical cluster models.
Phase-field crystal modeling of heteroepitaxy and exotic modes of crystal nucleation
NASA Astrophysics Data System (ADS)
Podmaniczky, Frigyes; Tóth, Gyula I.; Tegze, György; Pusztai, Tamás; Gránásy, László
2017-01-01
We review recent advances made in modeling heteroepitaxy, two-step nucleation, and nucleation at the growth front within the framework of a simple dynamical density functional theory, the Phase-Field Crystal (PFC) model. The crystalline substrate is represented by spatially confined periodic potentials. We investigate the misfit dependence of the critical thickness in the Stranski-Krastanov growth mode in isothermal studies. Apparently, the simulation results for stress release via the misfit dislocations fit better to the People-Bean model than to the one by Matthews and Blakeslee. Next, we investigate structural aspects of two-step crystal nucleation at high undercoolings, where an amorphous precursor forms in the first stage. Finally, we present results for the formation of new grains at the solid-liquid interface at high supersaturations/supercoolings, a phenomenon termed Growth Front Nucleation (GFN). Results obtained with diffusive dynamics (applicable to colloids) and with a hydrodynamic extension of the PFC theory (HPFC, developed for simple liquids) will be compared. The HPFC simulations indicate two possible mechanisms for GFN.
A Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LiDAR Point Clouds
NASA Astrophysics Data System (ADS)
He, H.; Khoshelham, K.; Fraser, C.
2017-09-01
Nowadays, lidar is widely used in cultural heritage documentation, urban modeling, and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of the object, local features recording the fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped as one category in the first-step classification, owing to their mutual similarity compared with trees and vehicles. A finer classification of lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.
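A sketch of the coarse-then-fine structure (the classifier choice and label names are ours; the abstract does not name the paper's learners; X holds bag-of-features histograms):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_two_step(X, coarse_y, fine_y, pole_like="pole-like"):
    """Step 1: classify into coarse classes (tree, vehicle, pole-like),
    where 'pole-like' pools lamp posts, street lights and traffic signs.
    Step 2: a finer classifier trained only on pole-like objects."""
    coarse = RandomForestClassifier(n_estimators=200).fit(X, coarse_y)
    mask = coarse_y == pole_like
    fine = RandomForestClassifier(n_estimators=200).fit(X[mask], fine_y[mask])

    def predict(Xq):
        labels = coarse.predict(Xq).astype(object)
        pole = labels == pole_like
        if pole.any():
            labels[pole] = fine.predict(Xq[pole])
        return labels
    return predict
```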
Morrissey, Karyn; Kinderman, Peter; Pontin, Eleanor; Tai, Sara; Schwannauer, Mathias
2016-08-01
In June 2011 the BBC Lab UK carried out a web-based survey on the causes of mental distress. The 'Stress Test' was launched on 'All in the Mind', a BBC Radio 4 programme, and the test's URL was publicised on radio and TV broadcasts and made available via BBC web pages and social media. Given the large amount of data created (over 32,800 participants, with corresponding diagnosis, demographic and socioeconomic characteristics), the dataset is potentially an important source of data for population-based research on depression and anxiety. However, as respondents self-selected to participate in the online survey, the survey may comprise a non-random sample: it may be that only individuals who listen to BBC Radio 4 and/or use its website participated. In this instance, using the Stress Test data for wider population-based research may create sample selection bias. Focusing on the depression component of the Stress Test, this paper presents an easy-to-use method, the Two Step Probit Selection Model, to detect and statistically correct selection bias in the Stress Test. Using a Two Step Probit Selection Model, this paper did not find a statistically significant selection on unobserved factors for participants of the Stress Test. That is, survey participants who accessed and completed an online survey are not systematically different from non-participants on the variables of substantive interest.
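The Two Step Probit Selection Model is in the spirit of Heckman's two-step correction; a sketch with statsmodels (variable roles are illustrative; y is observed only for survey participants):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def two_step_probit_selection(Z, selected, X, y):
    """Heckman-style two-step correction. Step 1: probit of participation
    (selected: 0/1) on instruments Z. Step 2: outcome regression on X plus
    the inverse Mills ratio; a significant Mills-ratio coefficient signals
    selection on unobservables."""
    probit = sm.Probit(selected, sm.add_constant(Z)).fit(disp=0)
    xb = probit.fittedvalues                  # linear index Z'gamma
    mills = norm.pdf(xb) / norm.cdf(xb)       # inverse Mills ratio
    sel = selected == 1
    Xc = sm.add_constant(np.column_stack([X[sel], mills[sel]]))
    return sm.OLS(y, Xc).fit()
```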
NASA Astrophysics Data System (ADS)
Mao, Y.; Crow, W. T.; Nijssen, B.
2017-12-01
Soil moisture (SM) plays an important role in runoff generation, both by partitioning infiltration and surface runoff during rainfall events and by controlling the rate of subsurface flow during inter-storm periods. Therefore, more accurate SM state estimation in hydrologic models is potentially beneficial for streamflow prediction. Various previous studies have explored the potential of assimilating SM data into hydrologic models for streamflow improvement. These studies have drawn inconsistent conclusions, ranging from significantly improved runoff via SM data assimilation (DA) to limited or degraded runoff. These studies commonly treat the whole assimilation procedure as a black box without separating the contribution of each step in the procedure, making it difficult to attribute the underlying causes of runoff improvement (or the lack thereof). In this study, we decompose the overall DA process into three steps by answering the following questions (3-step framework): 1) How much can assimilation of surface SM measurements improve the surface SM state in a hydrologic model? 2) How much does surface SM improvement propagate to deeper layers? 3) How much does (surface and deeper-layer) SM improvement propagate into runoff improvement? A synthetic twin experiment is carried out in the Arkansas-Red River basin (about 600,000 km2) where a synthetic "truth" run, an open-loop run (without DA) and a DA run (where synthetic surface SM measurements are assimilated) are generated. All model runs are performed at 1/8 degree resolution and over a 10-year period using the Variable Infiltration Capacity (VIC) hydrologic model at a 3-hourly time step. For the DA run, the ensemble Kalman filter (EnKF) method is applied. The updated surface and deeper-layer SM states with DA are compared to the open-loop SM to quantitatively evaluate the first two steps in the framework. To quantify the third step, a set of perfect-state runs are generated where the "true" SM states are directly inserted into the model to assess the maximum possible runoff improvement that can be achieved by improving SM states alone. Our results show that the 3-step framework is able to effectively identify the potential as well as the bottlenecks of runoff improvement, and to point out the cases where runoff improvement via assimilation of surface SM is prone to failure.
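For step 1, the EnKF analysis is a standard update; a textbook sketch, not the study's exact configuration (shapes and the linear observation operator are our assumptions):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H):
    """Stochastic EnKF analysis step for soil-moisture states.
    ensemble: (n_ens, n_state); obs: (n_obs,); H: (n_obs, n_state)
    linear operator mapping the state to observed surface SM."""
    n_ens = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)          # ensemble anomalies
    HA = A @ H.T                                  # anomalies in obs space
    P_hh = HA.T @ HA / (n_ens - 1) + obs_err_var * np.eye(H.shape[0])
    P_xh = A.T @ HA / (n_ens - 1)
    K = P_xh @ np.linalg.inv(P_hh)                # Kalman gain
    perturbed = obs + np.random.normal(0.0, np.sqrt(obs_err_var),
                                       size=(n_ens, H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T
```

Steps 2 and 3 then measure how much of this surface update propagates to deeper layers and, via the perfect-state runs, into runoff.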
A unified classification model for modeling of seismic liquefaction potential of soil based on CPT
Samui, Pijush; Hariharan, R.
2014-01-01
The evaluation of liquefaction potential of soil due to an earthquake is an important step in geosciences. This article examines the capability of Minimax Probability Machine (MPM) for the prediction of seismic liquefaction potential of soil based on the Cone Penetration Test (CPT) data. The dataset has been taken from Chi–Chi earthquake. MPM is developed based on the use of hyperplanes. It has been adopted as a classification tool. This article uses two models (MODEL I and MODEL II). MODEL I employs Cone Resistance (qc) and Cyclic Stress Ratio (CSR) as input variables. qc and Peak Ground Acceleration (PGA) have been taken as inputs for MODEL II. The developed MPM gives 100% accuracy. The results show that the developed MPM can predict liquefaction potential of soil based on qc and PGA. PMID:26199749
Interaction of tetraethoxysilane with OH-terminated SiO2 (0 0 1) surface: A first principles study
NASA Astrophysics Data System (ADS)
Deng, Xiaodi; Song, Yixu; Li, Jinchun; Pu, Yikang
2014-06-01
First-principles calculations have been performed to investigate the surface reaction mechanism of tetraethoxysilane (TEOS) with a fully hydroxylated SiO2(0 0 1) substrate. In the semiconductor industry, this is the key step in understanding and controlling SiO2 film growth in chemical vapor deposition (CVD) and atomic layer deposition (ALD) processes. We proposed a model which breaks the surface dissociative chemisorption into two steps, and we calculated the activation barriers and thermochemical energies for each step. Our calculation result for step one shows that the first half reaction is thermodynamically favorable. For the second half reaction, we systematically studied the two potential reaction pathways. The comparison indicates that the pathway which is more energetically favorable will lead to the formation of crystalline SiO2 films, while the other will lead to the formation of disordered SiO2 films.
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2016-12-09
Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of the potential drugs. In this study, we used a dataset, which consists of four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and Synthetic Minority Oversampling Technique are used to solve the problem of imbalanced datasets. ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping that is produced from the first step. In the third phase, Bagging classifier is used to classify an unknown drug into toxic or non-toxic.
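A rough sketch of the ITS-plus-Bagging pipeline (our loose reading of the abstract: the oversampling schedule and the nearest-neighbour cleaning rule here are stand-ins for the paper's actual steps):

```python
import numpy as np
from collections import Counter
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def iterative_sampling(X, y, n_iter=5, seed=0):
    """Iteratively oversample the minority class to shift the class prior,
    then clean overlapping points: drop non-minority points whose nearest
    neighbour belongs to a different class."""
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        counts = Counter(y)
        minority = min(counts, key=counts.get)
        deficit = max(counts.values()) - counts[minority]
        idx = rng.choice(np.flatnonzero(y == minority),
                         size=max(1, deficit // n_iter))
        X, y = np.vstack([X, X[idx]]), np.append(y, y[idx])
        nn = NearestNeighbors(n_neighbors=2).fit(X)
        neighbour = nn.kneighbors(X, return_distance=False)[:, 1]
        overlap = (y != y[neighbour]) & (y != minority)
        X, y = X[~overlap], y[~overlap]
    return X, y

def fit_toxicity_classifier(X, y):
    Xb, yb = iterative_sampling(X, y)
    return BaggingClassifier(DecisionTreeClassifier(), n_estimators=50).fit(Xb, yb)
```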
Automatic stage identification of Drosophila egg chamber based on DAPI images
Jia, Dongyu; Xu, Qiuping; Xie, Qian; Mio, Washington; Deng, Wu-Min
2016-01-01
The Drosophila egg chamber, whose development is divided into 14 stages, is a well-established model for developmental biology. However, visual stage determination can be a tedious, subjective and time-consuming task prone to errors. Our study presents an objective, reliable and repeatable automated method for quantifying cell features and classifying egg chamber stages based on DAPI images. The proposed approach is composed of two steps: 1) a feature extraction step and 2) a statistical modeling step. The egg chamber features used are egg chamber size, oocyte size, egg chamber ratio and distribution of follicle cells. Methods for determining the onset of the polytene stage and centripetal migration are also discussed. The statistical model uses linear and ordinal regression to explore the stage-feature relationships and classify egg chamber stages. Combined with machine learning, our method has great potential to enable discovery of hidden developmental mechanisms. PMID:26732176
Study on launch scheme of space-net capturing system.
Gao, Qingyu; Zhang, Qingbin; Feng, Zhiwei; Tang, Qiangang
2017-01-01
With the continuous progress in active debris-removal technology, scientists are increasingly concerned about the concept of space-net capturing system. The space-net capturing system is a long-range-launch flexible capture system, which has great potential to capture non-cooperative targets such as inactive satellites and upper stages. In this work, the launch scheme is studied by experiment and simulation, including two-step ejection and multi-point-traction analyses. The numerical model of the tether/net is based on finite element method and is verified by full-scale ground experiment. The results of the ground experiment and numerical simulation show that the two-step ejection and six-point traction scheme of the space-net system is superior to the traditional one-step ejection and four-point traction launch scheme.
Modifications to WRF's dynamical core to improve the treatment of moisture for large-eddy simulations
Xiao, Heng; Endo, Satoshi; Wong, May; ...
2015-10-29
Yamaguchi and Feingold (2012) note that the cloud fields in their large-eddy simulations (LESs) of marine stratocumulus using the Weather Research and Forecasting (WRF) model exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic sub-stepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm = θ(1 + 1.61 qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic sub-steps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic sub-steps) are eliminated in both of the example stratocumulus cases. In conclusion, this modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
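The substitution itself is a one-liner; a sketch of the conversion used by the modified core (qv in kg/kg):

```python
def moist_potential_temperature(theta, qv):
    """Moist potential temperature from the abstract's formula:
    theta_m = theta * (1 + 1.61 * qv)."""
    return theta * (1.0 + 1.61 * qv)
```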
Thermal modeling of step-out targets at the Soda Lake geothermal field, Churchill County, Nevada
NASA Astrophysics Data System (ADS)
Dingwall, Ryan Kenneth
Temperature data at the Soda Lake geothermal field in the southeastern Carson Sink, Nevada, highlight an intense thermal anomaly. The geothermal field produces roughly 11 MWe from two power producing facilities which are rated to 23 MWe. The low output is attributed to the inability to locate and produce sufficient volumes of fluid at adequate temperature. Additionally, the current producing area has experienced declining production temperatures over its 40 year history. Two step-out targets adjacent to the main field have been identified that have the potential to increase production and extend the life of the field. Though shallow temperatures in the two subsidiary areas are significantly less than those found within the main anomaly, measurements in deeper wells (>1,000 m) show that temperatures viable for utilization are present. High-pass filtering of the available complete Bouguer gravity data indicates that geothermal flow is present within the shallow sediments of the two subsidiary areas. Significant faulting is observed in the seismic data in both of the subsidiary areas. These structures are highlighted in the seismic similarity attribute calculated as part of this study. One possible conceptual model for the geothermal system(s) at the step-out targets indicated upflow along these faults from depth. In order to test this hypothesis, three-dimensional computer models were constructed in order to observe the temperatures that would result from geothermal flow along the observed fault planes. Results indicate that the observed faults are viable hosts for the geothermal system(s) in the step-out areas. Subsequently, these faults are proposed as targets for future exploration focus and step-out drilling.
Automating the evaluation of flood damages: methodology and potential gains
NASA Astrophysics Data System (ADS)
Eleutério, Julian; Martinez, Edgar Daniel
2010-05-01
The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data, and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability. In general, this step of the evaluation demands more time and investment than the others. The second step of the evaluation consists of combining spatial data on hazard with spatial data on vulnerability. A Geographic Information System (GIS) is a fundamental tool in the realization of this step, since GIS software allows the simultaneous analysis of spatial and matrix data. The third step of the evaluation consists of calculating potential damages by means of damage functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be realized several times when comparing different management scenarios. In addition, uncertainty analyses and sensitivity tests are made during the second and third steps of the evaluation. The feasibility of these steps can be relevant in the choice of the extent of the evaluation: low feasibility could lead to choosing not to evaluate uncertainty, or to limiting the number of scenario comparisons. Several computer models have been developed over time in order to evaluate flood risk, and GIS software is largely used to realise flood risk analyses. The software is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" realising the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which could be used at any time in the future to support territorial decision making; and the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time. The use of GIS software to evaluate flood risk requires personnel with a double professional specialisation: the professional should be proficient in GIS software and in flood damage analysis (which is already a multidisciplinary field). Great effort is necessary in order to correctly evaluate flood damages, and the updating and improvement of the evaluation over time become a difficult task. The automation of this process should bring great advances in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) to show the entire process of automating the second and third steps of flood damage evaluations; and (2) to analyse the potential gains in terms of the time and expertise required. A programming language is used within GIS software in order to automate the combination of hazard and vulnerability data and the calculation of potential damages. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains in flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the need for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily realized.
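As an illustration of the combination and calculation steps being automated, a sketch of a depth-damage computation on co-registered hazard and vulnerability grids (the array layout and curve values are invented for illustration):

```python
import numpy as np

def potential_damage(depth, asset_value, damage_curve):
    """Combine a hazard grid (water depth per cell) with vulnerability
    (asset value per cell) through a depth-damage function given as a
    (depth, damage fraction) table sorted by depth."""
    fraction = np.interp(depth, damage_curve[:, 0], damage_curve[:, 1])
    return fraction * asset_value

# Illustrative depth-damage curve: damage fraction vs depth in metres.
curve = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.45], [2.0, 0.8], [4.0, 1.0]])
# total_loss = potential_damage(depth_grid, value_grid, curve).sum()
```

Re-running such a script per management scenario is what makes repeated scenario comparison and sensitivity testing cheap once the evaluation is automated.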
Ebara, Takeshi; Azuma, Ryohei; Shoji, Naoto; Matsukawa, Tsuyoshi; Yamada, Yasuyuki; Akiyama, Tomohiro; Kurihara, Takahiro; Yamada, Shota
2017-11-25
Objective measurements using built-in smartphone sensors that can measure physical activity/inactivity in daily working life have the potential to provide a new approach to assessing workers' health effects. The aim of this study was to elucidate the characteristics and reliability of built-in step counting sensors on smartphones for development of an easy-to-use objective measurement tool that can be applied in ergonomics or epidemiological research. To evaluate the reliability of step counting sensors embedded in seven major smartphone models, the 6-minute walk test was conducted and the following analyses of sensor precision and accuracy were performed: 1) relationship between actual step count and step count detected by sensors, 2) reliability between smartphones of the same model, and 3) false detection rates when sitting during office work, while riding the subway, and while driving. On five of the seven models, the intraclass correlation coefficient (ICC(3,1)) showed high reliability, with a range of 0.956-0.993. The other two models, however, had ranges of 0.443-0.504, and the relative error ratios of the sensor-detected step count to the actual step count were ±48.7%-49.4%. The level of agreement between phones of the same model was ICC(3,1): 0.992-0.998. The false detection rates differed between the sitting conditions. These results suggest the need for appropriate regulation of step counts measured by sensors, through means such as correction or calibration with a predictive model formula, in order to obtain the highly reliable measurement results that are sought in scientific investigation.
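ICC(3,1) is the two-way mixed-effects, single-measure consistency coefficient; a sketch of the standard ANOVA-based computation (not the authors' code):

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1) for an (n_subjects, k_raters) matrix, e.g. step counts
    across repeated trials of the same phone model. Computed from the
    two-way ANOVA mean squares: (BMS - EMS) / (BMS + (k-1)*EMS)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    bms = ss_rows / (n - 1)                  # between-subjects mean square
    ems = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (bms - ems) / (bms + (k - 1) * ems)
```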
Akam, Thomas; Costa, Rui; Dayan, Peter
2015-12-01
The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.
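The signature analysis for this task compares stay probabilities after rewarded versus unrewarded trials, split by common versus rare transitions. The sketch below simulates a purely model-free TD(1) learner on a deliberately simplified version of the task (fixed rather than drifting reward probabilities, softmax first-step choice); all parameter values are illustrative. Such an agent shows a main effect of reward regardless of transition type, which is the pattern the analyses discussed here try to distinguish from model-based behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)
P_COMMON = 0.7
REWARD_P = {0: 0.8, 1: 0.2}       # fixed reward probability per second-step state
ALPHA, BETA, N_TRIALS = 0.3, 5.0, 10000

q = np.zeros(2)                   # model-free values of the two first-step actions
prev = None                       # (action, transition_was_common, rewarded)
stay = {(c, r): [] for c in (True, False) for r in (True, False)}

for _ in range(N_TRIALS):
    p = np.exp(BETA * q) / np.exp(BETA * q).sum()
    a = rng.choice(2, p=p)
    common = rng.random() < P_COMMON
    s2 = a if common else 1 - a          # common transitions map action -> state
    r = rng.random() < REWARD_P[s2]
    if prev is not None:
        pa, pc, pr = prev
        stay[(pc, pr)].append(a == pa)   # did the agent repeat its choice?
    prev = (a, common, r)
    q[a] += ALPHA * (float(r) - q[a])    # TD(1): reward directly credits step one

for (c, r), v in sorted(stay.items()):
    print(f"common={c!s:5} rewarded={r!s:5} P(stay)={np.mean(v):.2f}")
```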
Combining medically assisted treatment and Twelve-Step programming: a perspective and review.
Galanter, Marc
2018-01-01
People with severe substance use disorders require long-term rehabilitative care after the initial treatment. There is, however, a deficit in the availability of such care. This may be due both to inadequate medical coverage and to insufficient use of community-based Twelve-Step programs in many treatment facilities. In order to address this deficit, rehabilitative care for severe substance use disorders could be promoted through collaboration between practitioners of medically assisted treatment, employing medications, and Twelve-Step-oriented practitioners. The aims are to describe the limitations and benefits of applying biomedical approaches and Twelve-Step resources in the rehabilitation of persons with severe substance use disorders, and to assess how the two approaches can be employed together to improve clinical outcomes. Empirical literature focusing on clinical and manpower issues is reviewed with regard to (a) limitations in available treatment options in ambulatory and residential addiction treatment facilities for persons with severe substance use disorders, (b) problems of long-term rehabilitation particular to opioid-dependent persons, associated with the limitations of pharmacologic approaches, (c) the relative effectiveness of biomedical and Twelve-Step approaches in the clinical context, and (d) the potential for enhanced use of these approaches, singly and in combination, to address perceived deficits. The biomedical and Twelve-Step-oriented approaches are based on differing theoretical and empirically grounded models. Research-based opportunities are reviewed for improving addiction rehabilitation resources through enhanced collaboration between practitioners of these two potentially complementary practice models. This can involve medications for both acute and chronic treatment for substances for which such medications are available, and Twelve-Step-based support for abstinence and long-term rehabilitation. Clinical and Scientific Significance: Criteria for evidence-based combined treatment should be developed, and research on combined treatment can then be undertaken to improve clinical outcomes.
Arai, Noriyoshi; Yasuoka, Kenji; Koishi, Takahiro; Ebisuzaki, Toshikazu; Zeng, Xiao Cheng
2013-06-12
The "asymmetric Brownian ratchet model", a variation of Feynman's ratchet and pawl system, is invoked to understand the kinesin walking behavior along a microtubule. The model system, consisting of a motor and a rail, can exhibit two distinct binding states, namely, the random Brownian state and the asymmetric potential state. When the system is transformed back and forth between the two states, the motor can be driven to "walk" in one direction. Previously, we suggested a fundamental mechanism, that is, bubble formation in a nanosized channel surrounded by hydrophobic atoms, to explain the transition between the two states. In this study, we propose a more realistic and viable switching method in our computer simulation of molecular motor walking. Specifically, we propose a thermosensitive polymer model with which the transition between the two states can be controlled by temperature pulses. Based on this new motor system, the stepping size and stepping time of the motor can be recorded. Remarkably, the "walking" behavior observed in the newly proposed model resembles that of the realistic motor protein. The bubble formation based motor not only can be highly efficient but also offers new insights into the physical mechanism of realistic biomolecule motors.
Steady-State Density Functional Theory for Finite Bias Conductances.
Stefanucci, G; Kurth, S
2015-12-09
In the framework of density functional theory, a formalism to describe electronic transport in the steady state is proposed which uses the density on the junction and the steady current as basic variables. We prove that, in a finite window around zero bias, there is a one-to-one map between the basic variables and both the local potential on and the bias across the junction. The resulting Kohn-Sham system features two exchange-correlation (xc) potentials: a local xc potential and an xc contribution to the bias. For weakly coupled junctions the xc potentials exhibit steps in the density-current plane which are shown to be crucial to describe the Coulomb blockade diamonds. At small currents these steps emerge as the equilibrium xc discontinuity bifurcates. The formalism is applied to a model benzene junction, finding perfect agreement with the orthodox theory of Coulomb blockade.
Force sum rules for stepped surfaces of jellium
NASA Astrophysics Data System (ADS)
Farjam, Mani
2007-03-01
The Budd-Vannimenus theorem for the jellium surface is generalized to stepped surfaces of jellium. Our sum rules show that the average value of the electrostatic potential over the stepped jellium surface equals the value of the potential at the corresponding flat jellium surface. Several sum rules are tested against numerical results obtained within the Thomas-Fermi model of stepped surfaces.
Gill, T; Barua, N U; Woolley, M; Bienemann, A S; Johnson, D E; O'Sullivan, S; Murray, G; Fennelly, C; Lewis, O; Irving, C; Wyatt, M J; Moore, P; Gill, S S
2013-09-30
The optimisation of convection-enhanced drug delivery (CED) to the brain is fundamentally reliant on minimising drug reflux. The aim of this study was to evaluate the performance of a novel reflux-resistant CED catheter incorporating a recessed step and to compare its performance to previously described stepped catheters. The in vitro performance of the recessed-step catheter was compared to a conventional "one-step" catheter with a single transition in outer diameter (OD) at the catheter tip, and a "two-step" design comprising two distal transitions in OD. The volumes of distribution and reflux were compared by performing infusions of Trypan blue into agarose gels. The in vivo performance of the recessed-step catheter was then analysed in a large animal model by performing infusions of 0.2% gadolinium-DTPA in Large White/Landrace pigs. The recessed-step catheter demonstrated significantly higher volumes of distribution than the one-step and two-step catheters (p=0.0001, one-way ANOVA). No reflux was detected until more than 100 μl had been delivered via the recessed-step catheter, whilst reflux was detected after infusion of only 25 μl via the two non-recessed catheters. The recessed-step design also showed superior reflux resistance to a conventional one-step catheter in vivo. Reflux-free infusions were achieved in the thalamus, putamen and white matter at a maximum infusion rate of 5 μl/min using the recessed-step design. The novel recessed-step catheter described in this study shows significant potential for the achievement of predictable high volume, high flow rate infusions whilst minimising the risk of reflux. Copyright © 2013 Elsevier B.V. All rights reserved.
How Different Kinds of Communication and the Mass Media Affect Tourism.
1984-12-01
Contents excerpt: Criticism of the Two-Step Flow Model; The Multi-Step Flow Model or Theory; One-Step Flow Model. Text excerpt: Researchers have identified deficiencies in the two-step flow model. McNelly, for instance, sees mass ... evidence of the relative importance of communication on the diffusion flow. Rogers has criticized the theory on the grounds that neither its ...
Kwiecień, Renata A; Molinié, Roland; Paneth, Piotr; Silvestre, Virginie; Lebreton, Jacques; Robins, Richard J
2011-06-01
(15)N heavy isotope effects are especially useful when detail is sought pertaining to the reaction mechanism for the cleavage of a C-N bond. Their potential to assist in describing the mechanism of N-demethylation of tertiary amines by the action of cytochrome P450 monooxygenase has been investigated. As a working model for the first step, oxidation of the N-methyl group to N-methoxyl, tropine and a cytochrome P450 monooxygenase reaction centre composed of a truncated heme with sulfhydryl as the axial ligand were used. It is apparent that this first step of the reaction proceeds via a hydrogen atom transfer mechanism. Transition states for this step are described for both the high-spin ((4)TS(H)) and low-spin ((2)TS(H)) pathways in both gas and solvation states. An overall normal secondary (15)N KIE was calculated for the reaction path modeled in the low-spin state, and an inverse KIE for the path modeled in the high-spin state. This partial reaction has been identified as the probable rate-limiting step. The model for the second step, fission of the C-N bond, consisted of N-methoxylnortropine and two molecules of water. The transition state described for this step, TS(CN), gives a strongly inverse overall theoretical (15)N KIE. Copyright © 2011 Elsevier Inc. All rights reserved.
Angelopoulou, A; Efthimiadou, E K; Boukos, N; Kordas, G
2014-05-01
In this work, hybrid microspheres were prepared in a two-step process combining emulsifier-free emulsion polymerization and the sol-gel coating method. In the first step, polystyrene (St) and poly(methyl methacrylate) (PMMA) microspheres were prepared as sacrificial templates, and in the second step a silanol shell was fabricated. Functionalization of the hybrid microspheres' surface with silane analogs (APTES, TEOS) resulted in enhanced effects. Hollow microspheres were obtained either in an additional template-dissolution step or during the coating process. The microspheres' surface interactions and size distribution were optimized by treatment in simulated body fluids, allowing in vitro prediction of bioactivity. The bioassay test indicated that the induced hydroxyapatite resembled naturally occurring bone apatite in structure. The drug doxorubicin (DOX) was used as a model entity for the evaluation of drug loading and release. The drug release study was performed in two different pH conditions: acidic (pH=4.5), close to the cancer cell environment, and slightly basic (pH=7.4), resembling the orthopedic environment. The results of the present study indicate promising hybrid microspheres for potential application as drug delivery vehicles, with dual orthopedic functionalities in bone defects, bone inflammation, bone cancer and bone repair. Copyright © 2014 Elsevier B.V. All rights reserved.
Spin-density functional theory treatment of He+-He collisions
NASA Astrophysics Data System (ADS)
Baxter, Matthew; Kirchner, Tom; Engel, Eberhard
2016-09-01
The He+-He collision system presents an interesting challenge to theory. On one hand, a full treatment of the three-electron dynamics constitutes a massive computational problem that has not been attempted yet; on the other hand, simplified independent-particle-model based descriptions may only provide partial information on either the transitions of the initial target electrons or on the transitions of the projectile electron, depending on the choice of atomic model potentials. We address the He+-He system within the spin-density functional theory framework on the exchange-only level. The Krieger-Li-Iafrate (KLI) approximation is used to calculate the exchange potentials for the spin-up and spin-down electrons, which ensures the correct asymptotic behavior of the effective (Kohn-Sham) potential consisting of exchange, Hartree and nuclear Coulomb potentials. The orbitals are propagated with the two-center basis generator method. In each time step, simplified versions of them are fed into the KLI equations to calculate the Kohn-Sham potential, which, in turn, is used to generate the orbitals in the next time step. First results for the transitions of all electrons and the resulting charge-changing total cross sections will be presented at the conference. Work supported by NSERC, Canada.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cermelli, Paolo; Jabbour, Michel E.
A thermodynamically consistent continuum theory for single-species, step-flow epitaxy that extends the classical Burton-Cabrera-Frank (BCF) framework is derived from basic considerations. In particular, an expression for the step chemical potential is obtained that contains two energetic contributions, one from the adjacent terraces in the form of the jump in the adatom grand canonical potential, and the other from the monolayer of crystallized adatoms that underlies the upper terrace in the form of the nominal bulk chemical potential, thus generalizing the classical Gibbs-Thomson relation to the dynamic, dissipative setting of step-flow growth. The linear stability analysis of the resulting quasistatic free-boundary problem for an infinite train of equidistant rectilinear steps yields explicit (i.e., analytical) criteria for the onset of step bunching in terms of the basic physical and geometric parameters of the theory. It is found that, in contrast with the predictions of the classical BCF model, both in the absence as well as in the presence of desorption, a growth regime exists for which step bunching occurs, except possibly in the dilute limit where the train is always stable to step bunching. In the present framework, the onset of one-dimensional instabilities is directly attributed to the energetic influence on the migrating steps of the adjacent terraces. Hence the theory provides a "minimalist" alternative to existing theories of step bunching and should be relevant to, e.g., molecular beam epitaxy of GaAs, where the equilibrium adatom density is shown by Tersoff, Johnson, and Orr [Phys. Rev. Lett. 78, 282 (1997)] to be extremely high.
Simón, Luis
2018-03-28
Qualitative reaction models or predictive guides are a very useful outcome of theoretical investigations of organocatalytic reaction mechanisms, allowing the degree and sense of the enantioselectivity to be forecast for reactions involving novel substrates. However, application of these models can be unexpectedly challenging in reactions affected by a large number of conformations and potential control of the enantioselectivity by different reaction steps. The QM/MM study of the Friedel-Crafts reaction between indole and the N-tosylimine of benzaldehyde catalysed by different chiral phosphoric acids (CPAs) reveals that the reaction consists of two CPA-assisted steps: the addition of the two reagents to yield a Wheland intermediate, and its re-aromatization. The relevance of the second step depends on the catalyst: it changes the sense of the expected stereoselectivity for a BINOL-derived CPA but is irrelevant in the reaction catalysed by a VAPOL-derived imidodiphosphoric acid catalyst. Although the relative energies of the TSs can be rationalized by considering the steric interactions with the catalyst, the possibility of additional H-bonds, or the relative stability of the conformation of the reagents, predicting the enantioselectivity is not possible using qualitative guides.
Han, Yaohui; Mou, Lan; Xu, Gengchi; Yang, Yiqiang; Ge, Zhenlin
2015-03-01
To construct a three-dimensional finite element model comparing one-step and two-step methods for torque control of the anterior teeth during space closure. DICOM image data including the maxilla and upper teeth were obtained through cone-beam CT. A three-dimensional model was set up and the maxilla, upper teeth and periodontium were separated using Mimics software. The models were instantiated using Pro/Engineer software, and Abaqus finite element analysis software was used to simulate the sliding mechanics by loading a 1.47 N force on traction hooks of different heights (2, 4, 6, 8, 10, 12 and 14 mm, respectively) in order to compare the initial displacement between six maxillary anterior teeth (one-step method) and four maxillary anterior teeth (two-step method). When moving the anterior teeth bodily, the initial displacements of the central incisors in the two-step and one-step methods were 29.26 × 10⁻⁶ mm and 15.75 × 10⁻⁶ mm, respectively. The initial displacements of the lateral incisors in the two-step and one-step methods were 46.76 × 10⁻⁶ mm and 23.18 × 10⁻⁶ mm, respectively. Under the same amount of light force, the initial displacement of the anterior teeth in the two-step method was double that in the one-step method. The root and crown of the canine could not obtain the same amount of displacement in the one-step method. The two-step method could produce more initial displacement than the one-step method; therefore, the two-step method makes it easier to achieve torque control of the anterior teeth during space closure.
Tank Tests of Models of Flying Boat Hulls Having Longitudinal Steps
NASA Technical Reports Server (NTRS)
Allison, John M; Ward, Kenneth E
1936-01-01
Four models with longitudinal steps on the forebody were developed by modification of a model of a conventional hull and were tested in the National Advisory Committee for Aeronautics (NACA) tank. Models with longitudinal steps were found to have smaller resistance at high speed and greater resistance at low speed than the parent model, which had the same afterbody but a conventional V-section forebody. The models with a single longitudinal step had better performance at hump speed and as low a high-speed resistance, except at very light loads. Spray strips at angles from 0 degrees to 45 degrees to the horizontal were fitted at the longitudinal steps and at the chine on one of the models having two longitudinal steps. The resistance and the height of the spray were less with each of the spray strips than without; the most favorable angle was found to lie between 15 degrees and 30 degrees.
A comparison of simple global kinetic models for coal devolatilization with the CPD model
Richards, Andrew P.; Fletcher, Thomas H.
2016-08-01
Simulations of coal combustors and gasifiers generally cannot incorporate the complexities of advanced pyrolysis models, and hence there is interest in evaluating simpler models over ranges of temperature and heating rate that are applicable to the furnace of interest. In this paper, six different simple model forms are compared to predictions made by the Chemical Percolation Devolatilization (CPD) model. The model forms included three modified one-step models, a simple two-step model, and two new modified two-step models. These simple model forms were compared over a wide range of heating rates (5 × 10³ to 10⁶ K/s) at final temperatures up to 1600 K. Comparisons were made of total volatiles yield as a function of temperature, as well as the ultimate volatiles yield. Advantages and disadvantages for each simple model form are discussed. In conclusion, a modified two-step model with distributed activation energies seems to give the best agreement with CPD model predictions (with the fewest tunable parameters).
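As an illustration of the simple-model end of this comparison, the sketch below integrates a Kobayashi-style two-step (two competing reactions) devolatilization model along a constant heating-rate path using SciPy; the kinetic parameters are invented for illustration, not fitted values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314
# Hypothetical Kobayashi-type parameters: two competing first-order reactions,
# the second (higher activation energy) dominating at high temperature.
A1, E1, Y1 = 2.0e5, 1.0e5, 0.4    # 1/s, J/mol, volatile yield fraction
A2, E2, Y2 = 1.3e13, 1.7e5, 0.8
BETA, T0 = 1.0e4, 300.0           # heating rate (K/s), initial temperature (K)

def rhs(t, y):
    c, v = y                      # unreacted coal fraction, volatiles released
    T = T0 + BETA * t
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    dc = -(k1 + k2) * c
    dv = (Y1 * k1 + Y2 * k2) * c
    return [dc, dv]

t_end = (1600.0 - T0) / BETA      # stop once the final temperature is reached
sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0], method="LSODA", rtol=1e-8)
print("total volatile yield:", sol.y[1, -1])
```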
Kinesin Steps Do Not Alternate in Size
Fehr, Adrian N.; Asbury, Charles L.; Block, Steven M.
2008-01-01
Kinesin is a two-headed motor protein that transports cargo inside cells by moving stepwise on microtubules. Its exact trajectory along the microtubule is unknown: alternative pathway models predict either uniform 8-nm steps or alternating 7- and 9-nm steps. By analyzing single-molecule stepping traces from “limping” kinesin molecules, we were able to distinguish alternate fast- and slow-phase steps and thereby to calculate the step sizes associated with the motions of each of the two heads. We also compiled step distances from nonlimping kinesin molecules and compared these distributions against models predicting uniform or alternating step sizes. In both cases, we find that kinesin takes uniform 8-nm steps, a result that strongly constrains the allowed models.
Chen, Chi-Kan
2017-07-26
The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes have been proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of an RNN using experimental time series having limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using an RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. When the networks derived by RE_RMLP-RNN with different numbers of latent nodes in step one are combined by a weighted majority voting rule to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct the GRN.
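The two-step structure (edge ranking, then network construction) can be sketched without the full RNN/RMLP-plus-PSO machinery. Below, a ridge-regularized linear map serves as a hypothetical surrogate for the trained recurrent model, and candidate edges are ranked by fitted weight magnitude; the data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy expression time series: 5 genes x 30 time points (hypothetical data).
G, T = 5, 30
X = rng.standard_normal((G, T))

# Step one (edge rank assignment), sketched with a linear surrogate for the
# RNN: regress x(t+1) on x(t) with ridge regularization and rank candidate
# edges by the magnitude of the fitted weights.
A, B = X[:, :-1], X[:, 1:]
lam = 1.0
W = B @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(G))   # G x G weight matrix
ranks = np.dstack(np.unravel_index(np.argsort(-np.abs(W), axis=None), W.shape))[0]

# Step two (network construction): keep the top-ranked edges and accept the
# network whose dynamic model best reproduces the series; here we simply list
# the top edges as (regulator -> target).
for tgt, src in ranks[:5]:
    print(f"gene {src} -> gene {tgt}  |w| = {abs(W[tgt, src]):.2f}")
```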
NASA Technical Reports Server (NTRS)
Schindler, K.; Birn, J.; Hesse, M.
2012-01-01
Localized plasma structures, such as thin current sheets, generally are associated with localized magnetic and electric fields. In space plasmas localized electric fields not only play an important role for particle dynamics and acceleration but may also have significant consequences on larger scales, e.g., through magnetic reconnection. Also, it has been suggested that localized electric fields generated in the magnetosphere are directly connected with quasi-steady auroral arcs. In this context, we present a two-dimensional model based on Vlasov theory that provides the electric potential for a large class of given magnetic field profiles. The model uses an expansion for small deviation from gyrotropy and besides quasineutrality it assumes that electrons and ions have the same number of particles with their generalized gyrocenter on any given magnetic field line. Specializing to one dimension, a detailed discussion concentrates on the electric potential shapes (such as "U" or "S" shapes) associated with magnetic dips, bumps, and steps. Then, it is investigated how the model responds to quasi-steady evolution of the plasma. Finally, the model proves useful in the interpretation of the electric potentials taken from two existing particle simulations.
Applications of step-selection functions in ecology and conservation.
Thurfjell, Henrik; Ciuti, Simone; Boyce, Mark S
2014-01-01
Recent progress in positioning technology facilitates the collection of massive amounts of sequential spatial data on animals. This has led to new opportunities and challenges when investigating animal movement behaviour and habitat selection. Tools like Step Selection Functions (SSFs) are relatively new, powerful models for studying resource selection by animals moving through the landscape. SSFs compare environmental attributes of observed steps (the linear segment between two consecutive observations of position) with alternative random steps taken from the same starting point. SSFs have been used to study habitat selection, human-wildlife interactions, movement corridors, and dispersal behaviours in animals. SSFs also have the potential to depict resource selection at multiple spatial and temporal scales. There are several aspects of SSFs on which consensus has not yet been reached, such as how to analyse the data, when to consider habitat covariates along linear paths between observations rather than at their endpoints, how many random steps should be considered to measure availability, and how to account for individual variation. In this review we aim to address all these issues, as well as to highlight weak features of this modelling approach that should be developed by further research. Finally, we suggest that SSFs could be integrated with state-space models to classify behavioural states when estimating SSFs.
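The core data-preparation step of an SSF, pairing each observed step with matched random steps, is easy to sketch. In the Python fragment below the gamma step-length and von Mises turn-angle distributions, and their parameters, are illustrative choices rather than fitted values; model fitting itself is typically done afterwards with conditional logistic regression.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_steps(x, y, heading, n_alt=10):
    """Generate alternative (available) steps from empirical-style
    step-length and turn-angle distributions (gamma and von Mises here)."""
    lengths = rng.gamma(shape=2.0, scale=50.0, size=n_alt)     # metres
    turns = rng.vonmises(mu=0.0, kappa=1.0, size=n_alt)        # radians
    angles = heading + turns
    return x + lengths * np.cos(angles), y + lengths * np.sin(angles)

# One observed step plus its matched random steps forms a stratum; habitat
# covariates are then extracted at the end points (or along the paths) of
# used vs. available steps and compared with conditional logistic regression.
xs, ys = random_steps(500.0, 750.0, heading=0.3)
for x_alt, y_alt in zip(xs, ys):
    print(f"candidate end point: ({x_alt:7.1f}, {y_alt:7.1f})")
```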
Secretory immunoglobulin purification from whey by chromatographic techniques.
Matlschweiger, Alexander; Engelmaier, Hannah; Himmler, Gottfried; Hahn, Rainer
2017-08-15
Secretory immunoglobulins (SIg) are a major fraction of the mucosal immune system and represent potential drug candidates. So far, platform technologies for their purification do not exist. SIg from animal whey was used as a model to develop a simple, efficient and potentially generic chromatographic purification process. Several chromatographic stationary phases were tested. A combination of two anion-exchange steps resulted in the highest purity. The key step was the use of a small-pore anion exchanger operated in flow-through mode. Diffusion of SIg into the resin particles was significantly hindered, while the main impurities, IgG and serum albumin, were bound. In this step, initial purity was increased from 66% to 89% with a step yield of 88%. In a second anion-exchange step using giga-porous material, SIg was captured and purified by step or linear gradient elution to obtain fractions with purities >95%. For step-gradient elution, the step yield of highly pure SIg was 54%. Elution of SIgA and SIgM with a linear gradient resulted in step yields of 56% and 35%, respectively. Overall yields across both anion-exchange steps were 43% for the combination of flow-through and step-elution modes. Combining flow-through and linear-gradient elution modes resulted in yields of 44% for SIgA and 39% for SIgM. The proposed process allows the purification of biologically active SIg from animal whey at preparative scale. For future applications, the process can easily be adapted for purification of recombinant secretory immunoglobulin species. Copyright © 2017 Elsevier B.V. All rights reserved.
Jończyk, Jakub; Malawska, Barbara; Bajda, Marek
2017-01-01
The crucial role of G-protein coupled receptors and the significant achievements associated with a better understanding of the spatial structure of known receptors in this family encouraged us to undertake a study on the histamine H3 receptor, whose crystal structure is still unresolved. The latest literature data and availability of different software enabled us to build homology models of higher accuracy than previously published ones. The new models are expected to be closer to crystal structures; and therefore, they are much more helpful in the design of potential ligands. In this article, we describe the generation of homology models with the use of diverse tools and a hybrid assessment. Our study incorporates a hybrid assessment connecting knowledge-based scoring algorithms with a two-step ligand-based docking procedure. Knowledge-based scoring employs probability theory for global energy minimum determination based on information about native amino acid conformation from a dataset of experimentally determined protein structures. For a two-step docking procedure two programs were applied: GOLD was used in the first step and Glide in the second. Hybrid approaches offer advantages by combining various theoretical methods in one modeling algorithm. The biggest advantage of hybrid methods is their intrinsic ability to self-update and self-refine when additional structural data are acquired. Moreover, the diversity of computational methods and structural data used in hybrid approaches for structure prediction limit inaccuracies resulting from theoretical approximations or fuzziness of experimental data. The results of docking to the new H3 receptor model allowed us to analyze ligand-receptor interactions for reference compounds.
Recreation conflict potential and management in the northern/central Black Forest Nature Park
C. Mann; J. D. Absher
2008-01-01
This study explores conflict in recreational use of the Black Forest Nature Park (BFNP) by six different nature sports groups as a function of infrastructure, forest management and other users. A multi-step, methodological triangulation conflict model from US recreation management was applied and tested in the Park. Results from two groups, hikers and mountain bikers,...
Two-step evolution of endosymbiosis between hydra and algae.
Ishikawa, Masakazu; Shimizu, Hiroshi; Nozawa, Masafumi; Ikeo, Kazuho; Gojobori, Takashi
2016-10-01
In the Hydra vulgaris group, only 2 of the 25 strains in the collection of the National Institute of Genetics in Japan currently show endosymbiosis with green algae. However, whether the other, non-symbiotic strains also have the potential to harbor algae remains unknown. The endosymbiotic potential of non-symbiotic strains that can harbor algae may have been acquired before or during divergence of the strains. With the aim of understanding the evolutionary process of endosymbiosis in the H. vulgaris group, we examined the endosymbiotic potential of non-symbiotic strains by artificially introducing endosymbiotic algae. We found that 12 of the 23 non-symbiotic strains were able to harbor the algae through to the grand-offspring generation via asexual reproduction by budding. Moreover, a phylogenetic analysis of mitochondrial genome sequences showed that all the strains with endosymbiotic potential grouped into a single cluster (cluster γ). This cluster contained two strains (J7 and J10) that currently harbor algae; however, these strains were not the closest relatives. These results suggest that the evolution of endosymbiosis occurred in two steps: first, endosymbiotic potential was gained once in the ancestor of the cluster γ lineage; second, strains J7 and J10 obtained algae independently after the divergence of the strains. By demonstrating the evolution of endosymbiotic potential in non-symbiotic H. vulgaris group strains, we have clearly distinguished two evolutionary steps. This step-by-step evolutionary process provides significant insight into the evolution of endosymbiosis in cnidarians. Copyright © 2016 Elsevier Inc. All rights reserved.
Van Holsbeke, C; Ameye, L; Testa, A C; Mascilini, F; Lindqvist, P; Fischerova, D; Frühauf, F; Fransis, S; de Jonge, E; Timmerman, D; Epstein, E
2014-05-01
To develop and validate strategies, using new ultrasound-based mathematical models, for the prediction of high-risk endometrial cancer and compare them with strategies using previously developed models or the use of preoperative grading only. Women with endometrial cancer were prospectively examined using two-dimensional (2D) and three-dimensional (3D) gray-scale and color Doppler ultrasound imaging. More than 25 ultrasound, demographic and histological variables were analyzed. Two logistic regression models were developed: one 'objective' model using mainly objective variables; and one 'subjective' model including subjective variables (i.e. subjective impression of myometrial and cervical invasion, preoperative grade and demographic variables). The following strategies were validated: a one-step strategy using only preoperative grading, and two-step strategies using preoperative grading as the first step and one of the new models, subjective assessment or previously developed models as the second step. One hundred and twenty-five patients were included in the development set and 211 in the validation set. The 'objective' model retained preoperative grade and minimal tumor-free myometrium as variables. The 'subjective' model retained preoperative grade and subjective assessment of myometrial invasion. On external validation, the performance of the new models was similar to that on the development set. Sensitivity for the two-step strategy with the 'objective' model was 78% (95% CI, 69-84%) at a cut-off of 0.50, 82% (95% CI, 74-88%) for the strategy with the 'subjective' model and 83% (95% CI, 75-88%) for that with subjective assessment. Specificity was 68% (95% CI, 58-77%), 72% (95% CI, 62-80%) and 71% (95% CI, 61-79%), respectively. The two-step strategies detected up to twice as many high-risk cases as preoperative grading only. The new models had a significantly higher sensitivity than previously developed models, at the same specificity. Two-step strategies with 'new' ultrasound-based models predict high-risk endometrial cancers with good accuracy and do this better than previously developed models. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
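The logic of such a two-step strategy is compact enough to sketch. In the fragment below, the grade-3 rule and the logistic coefficients are illustrative placeholders only, not the published models; it is meant to show the control flow of grading first and applying a model second.

```python
import math

def two_step_high_risk(grade, min_tumor_free_myometrium_mm, cutoff=0.50):
    """Two-step triage: step one classifies by preoperative grade alone;
    step two applies a logistic model to the remaining cases.
    Coefficients are made-up placeholders, not the published fit."""
    if grade == 3:
        return True                      # step one: grade 3 -> high risk
    logit = 1.2 - 0.25 * min_tumor_free_myometrium_mm + 0.8 * (grade == 2)
    return 1.0 / (1.0 + math.exp(-logit)) >= cutoff

print(two_step_high_risk(grade=1, min_tumor_free_myometrium_mm=12.0))
print(two_step_high_risk(grade=3, min_tumor_free_myometrium_mm=2.0))
```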
A hybrid-perturbation-Galerkin technique which combines multiple expansions
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1989-01-01
A two-step hybrid perturbation-Galerkin method for the solution of a variety of differential-equation-type problems is found to give better results when multiple perturbation expansions are employed. The method assumes that there is a parameter in the problem formulation and that a perturbation method can be used to construct one or more expansions in this parameter. In step one, regular and/or singular perturbation methods are used to determine the perturbation coefficient functions. The results of step one are in the form of one or more expansions, each expressed as a sum of perturbation coefficient functions multiplied by a priori known gauge functions. In step two, the classical Bubnov-Galerkin method uses the perturbation coefficient functions computed in step one to determine a set of amplitudes which replace and improve upon the gauge functions. The hybrid method has the potential of overcoming some of the drawbacks of the perturbation and Galerkin methods as applied separately, while combining some of their better features. The proposed method is applied, with two perturbation expansions in each case, to a variety of model ordinary differential equation problems including: a family of linear two-point boundary-value problems, a nonlinear two-point boundary-value problem, a quantum mechanical eigenvalue problem, and a nonlinear free oscillation problem. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the applicability of the hybrid method to broader problem areas is discussed.
The quantum dynamics of electronically nonadiabatic chemical reactions
NASA Technical Reports Server (NTRS)
Truhlar, Donald G.
1993-01-01
Considerable progress was achieved on the quantum mechanical treatment of electronically nonadiabatic collisions involving energy transfer and chemical reaction in the collision of an electronically excited atom with a molecule. In the first step, a new diabatic representation for the coupled potential energy surfaces was created. A two-state diabatic representation was developed which was designed to realistically reproduce the two lowest adiabatic states of the valence bond model and also to have the following three desirable features: (1) it is more economical to evaluate; (2) it is more portable; and (3) all spline fits are replaced by analytic functions. The new representation consists of a set of two coupled diabatic potential energy surfaces plus a coupling surface. It is suitable for dynamics calculations on both the electronic quenching and reaction processes in collisions of Na(3p ²P) with H2. The new two-state representation was obtained by a three-step process from a modified eight-state diatomics-in-molecules (DIM) representation of Blais. The second step required the development of new dynamical methods. A formalism was developed for treating reactions with very general basis functions including electronically excited states. Our formalism is based on the generalized Newton, scattered wave, and outgoing wave variational principles that were used previously for reactive collisions on a single potential energy surface, and it incorporates three new features: (1) the basis functions include electronic degrees of freedom, as required to treat reactions involving electronic excitation and two or more coupled potential energy surfaces; (2) the primitive electronic basis is assumed to be diabatic, and it is not assumed that it diagonalizes the electronic Hamiltonian even asymptotically; and (3) contracted basis functions for vibrational-rotational-orbital degrees of freedom are included in a very general way, similar to previous prescriptions for locally adiabatic functions in various quantum scattering algorithms.
Moller, Peter; Ichikawa, Takatoshi
2015-12-23
In this study, we propose a method to calculate the two-dimensional (2D) fission-fragment yield Y(Z,N) versus both proton and neutron number, with inclusion of odd-even staggering effects in both variables. The approach is to use Brownian shape-motion on a macroscopic-microscopic potential-energy surface which, for a particular compound system, is calculated versus four shape variables, namely elongation (quadrupole moment Q2), neck d, left nascent-fragment spheroidal deformation ϵ_f1 and right nascent-fragment deformation ϵ_f2, and two asymmetry variables, namely the proton and neutron numbers in each of the two fragments. The extension of previous models 1) introduces a method to calculate this generalized potential-energy function and 2) allows the correlated transfer of nucleon pairs in one step, in addition to sequential transfer. In the previous version the potential energy was calculated as a function of Z and N of the compound system and its shape, including the asymmetry of the shape. We outline here how to generalize the model from the "compound-system" model to a model where the emerging fragment proton and neutron numbers also enter, over and above the compound-system composition.
Hasegawa, Chihiro; Duffull, Stephen B
2018-02-01
Pharmacokinetic-pharmacodynamic systems are often expressed with nonlinear ordinary differential equations (ODEs). While there are numerous methods to solve such ODEs, these methods generally rely on time-stepping solutions (e.g. Runge-Kutta) which need to be matched to the characteristics of the problem at hand. The primary aim of this study was to explore the performance of an inductive approximation which iteratively converts nonlinear ODEs to linear time-varying systems that can then be solved algebraically or numerically. The inductive approximation is applied to three examples: a simple nonlinear pharmacokinetic model with Michaelis-Menten elimination (E1), an integrated glucose-insulin model and an HIV viral load model with recursive feedback systems (E2 and E3, respectively). The secondary aim of this study was to explore the potential advantages of analytically solving linearized ODEs with two examples, again E3 with stiff differential equations and a turnover model of luteinizing hormone with a surge function (E4). The inductive linearization coupled with a matrix exponential solution provided accurate predictions for all examples, with solution times comparable to the matched time-stepping solutions for the nonlinear ODEs. The time-stepping solutions, however, did not perform well for E4, particularly when the surge was approximated by a square wave. In circumstances where either a linear ODE is particularly desirable or the uncertainty in matching the integrator to the ODE system poses a potential risk, the inductive approximation method coupled with an analytical integration method would be an appropriate alternative.
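The inductive approximation can be demonstrated on an example like E1, Michaelis-Menten elimination. In the toy sketch below the nonlinearity is frozen at the previous iterate, giving a linear time-varying ODE that is stepped with a piecewise-exponential (scalar matrix-exponential) update; grid size and iteration count are arbitrary choices, and the full method in the paper handles systems of ODEs via matrix exponentials.

```python
import numpy as np

VMAX, KM, C0 = 10.0, 5.0, 20.0       # illustrative parameter values
t = np.linspace(0.0, 5.0, 501)
dt = t[1] - t[0]

c = np.full_like(t, C0)              # iterate 0: constant concentration profile
for _ in range(8):                   # inductive iterations
    k = VMAX / (KM + c)              # time-varying linear rate from last iterate
    # piecewise-constant-coefficient analytic step: C(t+dt) = C(t) exp(-k dt)
    c_new = np.empty_like(c)
    c_new[0] = C0
    for i in range(len(t) - 1):
        c_new[i + 1] = c_new[i] * np.exp(-0.5 * (k[i] + k[i + 1]) * dt)
    c = c_new

print(c[::100])   # converges toward the Michaelis-Menten solution on this grid
```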
Model medication management process in Australian nursing homes using business process modeling.
Qian, Siyu; Yu, Ping
2013-01-01
One of the reasons for end-user avoidance or rejection of health information systems is poor alignment of the system with healthcare workflow, likely caused by system designers' lack of thorough understanding of the healthcare process. Therefore, understanding the healthcare workflow is the essential first step in the design of optimal technologies that will enable care staff to complete the intended tasks faster and better. The frequent use of multiple or "high-risk" medicines by older people in nursing homes has the potential to increase the medication error rate. To facilitate the design of information systems with the most potential to improve patient safety, this study aims to understand the medication management process in nursing homes using a business process modeling method. The paper presents the study design and preliminary findings from interviewing two registered nurses, who were team leaders in two nursing homes. Although there were subtle differences in medication management between the two homes, the major medication management activities were similar. Further field observation will be conducted. Based on the data collected from observations, an as-is process model for medication management will be developed.
Suner, A; Karakülah, G; Dicle, O; Sökmen, S; Çelikoğlu, C C
2015-01-01
The selection of appropriate rectal cancer treatment is a complex multi-criteria decision making process, in which clinical decision support systems might be used to assist and enrich physicians' decision making. The objective of the study was to develop a web-based clinical decision support tool for physicians in the selection of potentially beneficial treatment options for patients with rectal cancer. The updated decision model contained 8 and 10 criteria in the first and second steps, respectively. The decision support model, developed in our previous study by combining the Analytic Hierarchy Process (AHP) method, which determines the priorities of the criteria, with a decision tree formed using these priorities, was updated and applied to data from 388 patients collected retrospectively. A web-based decision support tool named corRECTreatment was then developed. The compatibility of the treatment recommendations of the expert opinion and the decision support tool was examined for consistency. Two surgeons were asked to recommend a treatment and an overall survival value for each of 20 cases that we selected from the patient data set, covering the most common and the rarest treatment options, and turned into scenarios. In the AHP analyses of the criteria, it was found that the matrices generated for both decision steps were consistent (consistency ratio < 0.1). Depending on the decisions of the experts, the consistency value for the most frequent cases was found to be 80% for the first decision step and 100% for the second decision step. Similarly, for rare cases consistency was 50% for the first decision step and 80% for the second decision step. The decision model and corRECTreatment, developed by applying them to real patient data, are expected to provide potential users with decision support in rectal cancer treatment processes and facilitate them in making projections about treatment options.
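The AHP step of such a model reduces to a principal-eigenvector computation on a pairwise comparison matrix, plus Saaty's consistency check (the consistency ratio < 0.1 criterion quoted above). The sketch below uses an invented 3x3 comparison matrix, not criteria from the study.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix (Saaty 1-9 scale).
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                              # priority vector of the criteria

n = A.shape[0]
ci = (eigvals[i].real - n) / (n - 1)      # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}          # Saaty's random indices
cr = ci / RI[n]                           # consistency ratio (< 0.1 acceptable)
print("priorities:", np.round(w, 3), " CR:", round(cr, 3))
```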
Kay-Lambkin, Frances J; Baker, Amanda L; McKetin, Rebecca; Lee, Nicole
2010-09-01
Stepped care has been recommended in the alcohol and other drug field and adopted in a number of service settings, but few research projects have examined this approach. This article describes a pilot trial of stepped-care methods in the treatment of comorbid methamphetamine use and depression. An adaptive treatment strategy was developed based on recommendations for stepped care among methamphetamine users, incorporating cognitive behaviour therapy/motivational intervention for methamphetamine use and depression. The adaptive treatment strategy was compared with a fixed treatment, comprising an extended integrated cognitive behaviour therapy/motivational intervention treatment. Eighteen participants across two study sites were involved in the trial; all were current users of methamphetamines (at least once weekly) exhibiting at least moderate symptoms of depression (score of 17 or greater on the Beck Depression Inventory II). Treatment delivered via the adaptive (stepped-care) model was associated with improvement in depression and methamphetamine use; however, it was not associated with more efficient delivery of psychological treatment to this population relative to the comparison treatment. This pilot trial attests to the potential for adaptive treatment strategies to increase the evidence base for stepped-care approaches within the alcohol and other drug field. However, for stepped-care treatment of the kind used in this trial to be delivered efficiently, specific training in the delivery and philosophy of the model is required.
Modeling Woven Polymer Matrix Composites with MAC/GMC
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M. (Technical Monitor)
2000-01-01
NASA's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) is used to predict the elastic properties of plain weave polymer matrix composites (PMCs). The traditional one-step three-dimensional homogenization procedure that has been used in conjunction with MAC/GMC for modeling woven composites in the past is inaccurate due to the lack of shear coupling inherent to the model. However, by performing a two-step homogenization procedure, in which the woven composite repeating unit cell is homogenized independently in the through-thickness direction prior to homogenization in the plane of the weave, MAC/GMC can now accurately model woven PMCs. This two-step procedure is outlined and implemented, and predictions are compared with results from the traditional one-step approach and other models and experiments from the literature. Full coupling of this two-step technique with MAC/GMC will result in a widely applicable, efficient, and accurate tool for the design and analysis of woven composite materials and structures.
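The idea of the two-step procedure can be conveyed with a deliberately simplified scalar analogue: homogenize through the thickness first, then in the plane of the weave. The sketch below uses Reuss (series) and Voigt (parallel) averages of made-up subcell moduli; MAC/GMC itself of course operates on full stiffness tensors via the generalized method of cells, not scalar mixing rules.

```python
import numpy as np

def voigt(E, v):   # iso-strain (parallel) average
    return np.sum(v * E)

def reuss(E, v):   # iso-stress (series) average
    return 1.0 / np.sum(v / E)

# Hypothetical subcell moduli (GPa) for one column of a plain-weave unit
# cell: warp tow, fill tow, pure matrix, with through-thickness fractions.
E_sub = np.array([140.0, 10.0, 3.5])
vf_tt = np.array([0.4, 0.4, 0.2])

# Step one: homogenize each column through the thickness (series-like).
E_column = reuss(E_sub, vf_tt)

# Step two: homogenize the resulting columns in the plane of the weave
# (parallel-like); here two column types with equal area fractions.
E_columns = np.array([E_column, 8.0])
E_eff = voigt(E_columns, np.array([0.5, 0.5]))
print("column modulus:", E_column, " effective modulus:", E_eff)
```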
Crystal step edges can trap electrons on the surfaces of n-type organic semiconductors.
He, Tao; Wu, Yanfei; D'Avino, Gabriele; Schmidt, Elliot; Stolte, Matthias; Cornil, Jérôme; Beljonne, David; Ruden, P Paul; Würthner, Frank; Frisbie, C Daniel
2018-05-30
Understanding relationships between microstructure and electrical transport is an important goal for the materials science of organic semiconductors. Combining high-resolution surface potential mapping by scanning Kelvin probe microscopy (SKPM) with systematic field-effect transport measurements, we show that step edges can trap electrons on the surfaces of single-crystal organic semiconductors. n-type organic semiconductor crystals exhibiting positive step-edge surface potentials display threshold voltages that increase and carrier mobilities that decrease with increasing step density, characteristic of trapping, whereas crystals without positive step-edge surface potentials do not show strongly step-density-dependent transport. A device model and microelectrostatics calculations suggest that trapping can be intrinsic to step edges for crystals of molecules with polar substituents. The results provide a unique example of a specific microstructure-charge trapping relationship and highlight the utility of surface potential imaging in combination with transport measurements as a productive strategy for uncovering microscopic structure-property relationships in organic semiconductors.
Numerical modeling of surface wave development under the action of wind
NASA Astrophysics Data System (ADS)
Chalikov, Dmitry
2018-06-01
The numerical modeling of two-dimensional surface wave development under the action of wind is performed. The model is based on three-dimensional equations of potential motion with a free surface, written in a surface-following nonorthogonal curvilinear coordinate system in which depth is counted from the moving surface. A three-dimensional Poisson equation for the velocity potential is solved iteratively. A Fourier transform method, a second-order-accurate approximation of vertical derivatives on a stretched vertical grid, and fourth-order Runge-Kutta time stepping are used. Both the energy input to waves and the dissipation of wave energy are calculated on the basis of previously developed and validated algorithms. A single-processor version of the model for PCs allows simulation of the evolution of a wave field with thousands of degrees of freedom over thousands of wave periods. The long-time evolution of a two-dimensional wave structure is illustrated by the wave-surface spectra and the input and output of energy.
Population viability and connectivity of the Louisiana black bear (Ursus americanus luteolus)
Laufenberg, Jared S.; Clark, Joseph D.
2014-01-01
From April 2010 to April 2012, global positioning system (GPS) radio collars were placed on 8 female and 23 male bears ranging from 1 to 11 years of age to develop a step-selection function model to predict routes and rates of interchange. For both males and females, the probability of a step being selected increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. Of 4,000 correlated random walks, the least potential interchange was between TRB and TRC and between UARB and LARB, but the relative potential for natural interchange between UARB and TRC was high. The step-selection model predicted that dispersals between the LARB and UARB populations were infrequent but possible for males and nearly nonexistent for females. No evidence of natural female dispersal between subpopulations has been documented thus far, which is also consistent with model predictions.
Terminal-Area Aircraft Intent Inference Approach Based on Online Trajectory Clustering.
Yang, Yang; Zhang, Jun; Cai, Kai-quan
2015-01-01
Terminal-area aircraft intent inference (T-AII) is a prerequisite for detecting and avoiding potential aircraft conflicts in terminal airspace. T-AII challenges state-of-the-art AII approaches because of the uncertainties of the air traffic situation, in particular undefined flight routes and frequent maneuvers. In this paper, a novel T-AII approach is introduced to address these limitations by solving the problem in two steps: intent modeling and intent inference. In the modeling step, an online trajectory clustering procedure is designed to recognize the routes actually available in real time, in place of the missing planned routes. In the inference step, we then present a probabilistic T-AII approach based on multiple flight attributes to improve inference performance in maneuvering scenarios. The proposed approach is validated with real radar trajectories and flight attribute data from 34 days collected in the Chengdu terminal area in China. Preliminary results show the efficacy of the presented approach.
MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
2016-08-03
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
Single-particle stochastic heat engine.
Rana, Shubhashis; Pal, P S; Saha, Arnab; Jayannavar, A M
2014-10-01
We have performed an extensive analysis of a single-particle stochastic heat engine constructed by manipulating a Brownian particle in a time-dependent harmonic potential. The cycle consists of two isothermal steps at different temperatures and two adiabatic steps similar to that of a Carnot engine. The engine shows qualitative differences in inertial and overdamped regimes. All the thermodynamic quantities, including efficiency, exhibit strong fluctuations in a time periodic steady state. The fluctuations of stochastic efficiency dominate over the mean values even in the quasistatic regime. Interestingly, our system acts as an engine provided the temperature difference between the two reservoirs is greater than a finite critical value which in turn depends on the cycle time and other system parameters. This is supported by our analytical results carried out in the quasistatic regime. Our system works more reliably as an engine for large cycle times. By studying various model systems, we observe that the operational characteristics are model dependent. Our results clearly rule out any universal relation between efficiency at maximum power and temperature of the baths. We have also verified fluctuation relations for heat engines in time periodic steady state.
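A minimal sketch of such an engine, assuming an overdamped Brownian particle in a harmonic trap whose stiffness is ramped during the two isothermal branches and jumped instantaneously in the adiabatic-like branches. All parameter values are illustrative; work on the particle is accumulated as dW = (dU/dk) dk, so a negative mean work per cycle indicates engine operation.

    import numpy as np

    rng = np.random.default_rng(1)
    dt, gamma = 1e-3, 1.0
    Th, Tc = 2.0, 0.5                                  # bath temperatures (k_B = 1)

    def isothermal(x, k0, k1, T, n):
        """Overdamped Langevin steps while stiffness ramps k0 -> k1."""
        work, dk = 0.0, (k1 - k0) / n
        for i in range(n):
            k = k0 + dk * i
            work += 0.5 * dk * x * x                   # dW = (dU/dk) dk, U = k x^2 / 2
            x += -(k * x / gamma) * dt + np.sqrt(2 * T * dt / gamma) * rng.normal()
        return x, work

    x, W, cycles = 0.0, 0.0, 200
    for _ in range(cycles):
        x, w1 = isothermal(x, 1.0, 0.5, Th, 1000)      # hot isothermal expansion
        w2 = 0.5 * (0.25 - 0.5) * x * x                # instantaneous jump 0.5 -> 0.25
        x, w3 = isothermal(x, 0.25, 0.5, Tc, 1000)     # cold isothermal compression
        w4 = 0.5 * (1.0 - 0.5) * x * x                 # instantaneous jump 0.5 -> 1.0
        W += w1 + w2 + w3 + w4
    print("mean work per cycle:", W / cycles)          # negative => engine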
Rödder, Dennis; Nekum, Sven; Cord, Anna F; Engler, Jan O
2016-07-01
Climate change and anthropogenic habitat fragmentation are considered major threats for global biodiversity. As a direct consequence, connectivity is increasingly disrupted in many species, which might have serious consequences that could ultimately lead to the extinction of populations. Although a large number of reserves and conservation sites are designated and protected by law, potential habitats acting as inter-population connectivity corridors are, however, mostly ignored in the common practice of environmental planning. In most cases, this is mainly caused by a lack of quantitative measures of functional connectivity available for the planning process. In this study, we highlight the use of fine-scale potential connectivity models (PCMs) derived from multispectral satellite data for the quantification of spatially explicit habitat corridors for matrix-sensitive species of conservation concern. This framework couples a species distribution model with a connectivity model in a two-step framework, where suitability maps from step 1 are transformed into maps of landscape resistance in step 2 filtered by fragmentation thresholds. We illustrate the approach using the sand lizard (Lacerta agilis L.) in the metropolitan area of Cologne, Germany, as a case study. Our model proved to be well suited to identify connected as well as completely isolated populations within the study area. Furthermore, due to its fine resolution, the PCM was also able to detect small linear structures known to be important for sand lizards' inter-population connectivity such as railroad embankments. We discuss the applicability and possible implementation of PCMs to overcome shortcomings in the common practice of environmental impact assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Heng; Endo, Satoshi; Wong, May
Yamaguchi and Feingold (2012) note that the cloud fields in their Weather Research and Forecasting (WRF) large-eddy simulations (LESs) of marine stratocumulus exhibit a strong sensitivity to time stepping choices. In this study, we reproduce and analyze this sensitivity issue using two stratocumulus cases, one marine and one continental. Results show that (1) the sensitivity is associated with spurious motions near the moisture jump between the boundary layer and the free atmosphere, and (2) these spurious motions appear to arise from neglecting small variations in water vapor mixing ratio (qv) in the pressure gradient calculation in the acoustic substepping portion of the integration procedure. We show that this issue is remedied in the WRF dynamical core by replacing the prognostic equation for the potential temperature θ with one for the moist potential temperature θm = θ(1 + 1.61 qv), which allows consistent treatment of moisture in the calculation of pressure during the acoustic substeps. With this modification, the spurious motions and the sensitivity to the time stepping settings (i.e., the dynamic time step length and number of acoustic substeps) are eliminated in both of the example stratocumulus cases. This modification improves the applicability of WRF for LES applications, and possibly other models using similar dynamical core formulations, and also permits the use of longer time steps than in the original code.
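The modified prognostic variable is a one-line computation; the sketch below only illustrates the definition θm = θ(1 + 1.61 qv) used in place of θ, with made-up input values.

    def moist_potential_temperature(theta, qv):
        """theta_m = theta * (1 + 1.61 * qv): the prognostic variable that keeps
        moisture consistent in the acoustic-substep pressure calculation."""
        return theta * (1.0 + 1.61 * qv)

    print(moist_potential_temperature(300.0, 0.01))  # ~304.8 K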
GIS-based modeling of debris flow processes in an Alpine catchment, Antholz valley, Italy
NASA Astrophysics Data System (ADS)
Sandmeier, Christine; Damm, Bodo; Terhorst, Birgit
2010-05-01
Debris flows are frequent natural hazards in mountain regions that can seriously threaten human lives and economic assets. In the European Alps, the occurrence of debris flows may even increase as a consequence of climate change, including permafrost degradation, glacier retreat and changing precipitation patterns. A detailed understanding of process parameters and the spatial distribution of debris flows is therefore necessary for risk assessment and appropriate protection measures. In this context, numerical models have been developed and applied successfully for the simulation and prediction of debris-flow hazards and related process areas. In our study, a GIS-based model is applied in an Alpine catchment to address the following questions: Where are potential initiation areas of debris flows? How much material can be mobilized? What is the influence of topography and precipitation? The study area is located in the Antholz valley in the eastern Alps of Northern Italy. The investigated catchment of the Klammbach creek comprises 6.5 km² and is divided into two sub-catchments. Geologically, it is dominated by metamorphic rock, and altitudes range between 1310 and 3270 m. In summer 2005, a debris flow of more than 100,000 m³ occurred, originating from a steep, sparsely vegetated debris cone in the western part of the catchment. According to a regional study, the lower permafrost boundary in this area has risen by 250 m. During a field survey, geomorphological mapping was performed, several channel cross-sections were measured and sediment samples were taken. Using the mapping results and aerial images, a geomorphological map was created. In further steps, results from the field work, the geomorphological map and existing digital data sets, including a digital elevation model with 2.5 m resolution, are used to derive input data for the modeling of debris flow processes. The model framework 'r.debrisflow' based on GRASS GIS is applied (Mergili, 2008*), as it is capable of simulating the potential spatial patterns of debris flow deposition as well as their initiation and movement. Furthermore, it is freely available, open-source software and can thus be improved and extended. 'r.debrisflow' couples a hydraulic, a slope stability, a sediment transport and a debris flow runout model, which are combined differently in six simulation modes. As a first modeling step, model parameters are calibrated using the runout-only mode with known parameters of the 2005 debris flow. Finally, the full mode will be used to evaluate the debris-flow potential of the whole catchment. First results from the geomorphological mapping reveal numerous surface forms, such as levees, debris flow lobes and scars, that indicate past and recent debris flow activity in the area. In both sub-catchments there are large areas of unconsolidated, sparsely vegetated or unvegetated sediments, surrounded by high rock walls, which conduct precipitation rapidly into the debris. The two sub-catchments, however, have different topographic characteristics, which can be analyzed with the model in more detail. In a next step, the potential starting areas of future debris flows will be identified and the potential amount of mobilized material estimated by the model. *Mergili, M. (2008): Integrated modelling of debris flows with Open Source GIS. Ph.D. thesis. University of Innsbruck. http://www.uibk.ac.at/geographie/personal/mergili/dissertation.pdf
Soibam, Benjamin; Goldfeder, Rachel L.; Manson-Bishop, Claire; Gamblin, Rachel; Pletcher, Scott D.; Shah, Shishir; Gunaratne, Gemunu H.; Roman, Gregg W.
2012-01-01
In open field arenas, Drosophila adults exhibit a preference for arena boundaries over internal walls and open regions. Herein, we investigate the nature of this preference using phenomenological modeling of locomotion to determine whether local arena features and constraints on movement alone are sufficient to drive positional preferences within open field arenas of different shapes and with different internal features. Our model has two components: directional persistence and local wall force. In regions far away from walls, the trajectory is entirely characterized by a directional persistence probability for each movement, defined by the step size and the turn angle. In close proximity to walls, motion is computed from the directional persistence probability together with a local attractive force that depends on the distance between the fly and points on the walls. The directional persistence probability was obtained experimentally from trajectories of wild type Drosophila in a circular open field arena, and the wall force was computed to minimize the difference between the radial distributions from the model and Drosophila in the same circular arena. The two-component model for fly movement was challenged by comparing the positional preferences from the two-component model to wild type Drosophila in a variety of open field arenas. In most arenas there was a strong concordance between the two-component model and Drosophila. In more complex arenas, the model exhibits similar trends, but some significant differences were found. These differences suggest that there are emergent features within these complex arenas that have significance for the fly, such as potential shelter. Hence, the two-component model is an important step in defining how Drosophila interact with their environment. PMID:23071591
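A toy version of the two-component model, with a Gaussian turn-angle distribution standing in for the measured persistence probability and a hypothetical outward pull near the wall; it reproduces the qualitative boundary preference but none of the fitted quantities.

    import numpy as np

    rng = np.random.default_rng(2)
    R = 1.0                  # arena radius
    beta = 5.0               # hypothetical wall-attraction strength
    step = 0.02              # step size (arena radii)

    pos, heading, near_wall, n_steps = np.zeros(2), 0.0, 0, 20000
    for _ in range(n_steps):
        # Directional persistence: in the full model the turn angle and step
        # size are drawn from the experimentally measured distribution.
        heading += rng.normal(0.0, 0.3)
        move = step * np.array([np.cos(heading), np.sin(heading)])
        # Local wall force: attraction toward the boundary once the fly is close.
        r = np.linalg.norm(pos)
        if r > 0.8 * R:
            move += beta * step * (r - 0.8 * R) * (pos / r)
        pos = pos + move
        r = np.linalg.norm(pos)
        if r > R:                        # keep the fly inside the arena
            pos *= (2 * R - r) / r
        if np.linalg.norm(pos) > 0.8 * R:
            near_wall += 1

    print("fraction of time near the boundary:", near_wall / n_steps)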
Cilurzo, Felisa; Cristiano, Maria Chiara; Di Marzio, Luisa; Cosco, Donato; Carafa, Maria; Ventura, Cinzia Anna; Fresta, Massimo; Paolino, Donatella
2015-01-01
The ability of some surfactants to self-assemble in a water/oil biphasic environment, forming supramolecular structures that lead to w/o/w multiple emulsions, was investigated. The w/o/w multiple emulsions obtained by self-assembly (one-step preparation method) were compared with those prepared following the traditional two-step procedure. Methyl nicotinate was used as a hydrophilic model drug. The formation of the multiple emulsion structure was evidenced by optical microscopy, which showed a mean size of the inner oil droplets of 6 μm and 10 μm for one-step and two-step multiple emulsions, respectively. The in vitro biopharmaceutical features of the various w/o/w multiple emulsion formulations were evaluated by means of viscosimetry studies, drug release experiments and in vitro percutaneous permeation experiments through human stratum corneum and viable epidermis membranes. The self-assembled multiple emulsions allowed a more gradual percutaneous permeation (a zero-order permeation rate) than the two-step ones. The in vivo topical carrier properties of the two different multiple emulsions were evaluated on healthy human volunteers using reflectance spectrophotometry, a non-invasive in vivo method. These multiple emulsion systems were also compared with conventional emulsion formulations. Our findings demonstrated that the multiple emulsions obtained by self-assembly were able to provide a more sustained drug delivery into the skin, and hence a longer therapeutic action, than two-step multiple emulsions and conventional emulsion formulations. Finally, our findings showed that the supramolecular micro-assembly of multiple emulsions was able to influence not only the biopharmaceutical characteristics but also the potential in vivo therapeutic response.
Lenhart, Rachel L.; Smith, Colin R.; Vignos, Michael F.; Kaiser, Jarred; Heiderscheit, Bryan C.; Thelen, Darryl G.
2015-01-01
Interventions used to treat patellofemoral pain in runners are often designed to alter patellofemoral mechanics. This study used a computational model to investigate the influence of two interventions, step rate manipulation and quadriceps strengthening, on patellofemoral contact pressures during running. Running mechanics were analyzed using a lower extremity musculoskeletal model that included a knee with six degree-of-freedom tibiofemoral and patellofemoral joints. An elastic foundation model was used to compute articular contact pressures. The lower extremity model was scaled to anthropometric dimensions of 22 healthy adults, who ran on an instrumented treadmill at 90%, 100% and 110% of their preferred step rate. Numerical optimization was then used to predict the muscle forces, secondary tibiofemoral kinematics and all patellofemoral kinematics that would generate the measured hip, knee and ankle joint accelerations. Mean and peak patella contact pressures reached 5.0 and 9.7 MPa during the midstance phase of running. Increasing step rate by 10% significantly reduced mean contact pressures by 10.4% and contact area by 7.4%, but had small effects on lateral patella translation and tilt. Enhancing vastus medialis strength did not substantially affect pressure magnitudes or lateral patella translation, but did shift contact pressure medially toward the patellar median ridge. Thus, the model suggests that step rate tends to primarily modulate the magnitude of contact pressure and contact area, while vastus medialis strengthening has the potential to alter mediolateral pressure locations. These results are relevant to consider in the design of interventions used to prevent or treat patellofemoral pain in runners. PMID:26070646
Noise Enhances Action Potential Generation in Mouse Sensory Neurons via Stochastic Resonance.
Onorato, Irene; D'Alessandro, Giuseppina; Di Castro, Maria Amalia; Renzi, Massimiliano; Dobrowolny, Gabriella; Musarò, Antonio; Salvetti, Marco; Limatola, Cristina; Crisanti, Andrea; Grassi, Francesca
2016-01-01
Noise can enhance perception of tactile and proprioceptive stimuli by stochastic resonance processes. However, the mechanisms underlying this general phenomenon remain to be characterized. Here we studied how externally applied noise influences action potential firing in mouse primary sensory neurons of dorsal root ganglia, modelling a basic process in sensory perception. Since noisy mechanical stimuli may cause stochastic fluctuations in receptor potential, we examined the effects of sub-threshold depolarizing current steps with superimposed random fluctuations. We performed whole cell patch clamp recordings in cultured neurons of mouse dorsal root ganglia. Noise was added either before and during the step, or during the depolarizing step only, to focus onto the specific effects of external noise on action potential generation. In both cases, step + noise stimuli triggered significantly more action potentials than steps alone. The normalized power norm had a clear peak at intermediate noise levels, demonstrating that the phenomenon is driven by stochastic resonance. Spikes evoked in step + noise trials occur earlier and show faster rise time as compared to the occasional ones elicited by steps alone. These data suggest that external noise enhances, via stochastic resonance, the recruitment of transient voltage-gated Na channels, responsible for action potential firing in response to rapid step-wise depolarizing currents.
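The effect can be illustrated with a generic leaky integrate-and-fire neuron (not the recorded dorsal root ganglion cells): a sub-threshold current step alone never fires, while superimposed white noise recruits spikes. Units and parameter values are dimensionless and illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    dt, tau = 1e-4, 0.02                   # time step, membrane time constant (s)
    v_th, v_reset = 1.0, 0.0
    I_step = 0.9                           # sub-threshold step: steady state 0.9 * v_th

    def spikes_per_trial(noise_sd, n_trials=50, t_stop=0.3):
        total = 0
        for _ in range(n_trials):
            v = 0.0
            for _ in range(int(t_stop / dt)):
                xi = noise_sd * rng.normal() / np.sqrt(dt)   # white-noise current
                v += dt * (-v + I_step + xi) / tau
                if v >= v_th:                                # spike and reset
                    total += 1
                    v = v_reset
        return total / n_trials

    for sd in (0.0, 0.01, 0.03, 0.1):
        print(f"noise sd {sd}: {spikes_per_trial(sd):.2f} spikes/trial")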
Carol Clausen
2004-01-01
In this study, three possible improvements to a remediation process for chromated-copper-arsenate (CCA) treated wood were evaluated. The process involves two steps: oxalic acid extraction of wood fiber followed by bacterial culture with Bacillus licheniformis CC01. The three potential improvements to the oxalic acid extraction step were (1) reusing oxalic acid for...
Gemmell, Philip; Burrage, Kevin; Rodriguez, Blanca; Quinn, T. Alexander
2014-01-01
Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown, and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K+, inward rectifying K+, L-type Ca2+, and Na+/K+ pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential duration within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interaction of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation. PMID:24587229
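The sweep-and-filter construction of a population of models can be sketched as follows; the apd90 function is an explicitly labeled placeholder for a real action potential simulation, and the acceptance range is hypothetical.

    import itertools
    import numpy as np

    scales = [0.5, 1.0, 1.5]          # multiplicative scaling per conductance
    channels = ["Ito", "IKr", "IKs", "IK1", "ICaL", "INaK"]

    def apd90(k):
        """Placeholder surrogate for a rabbit ventricular AP simulation that
        returns APD90 in ms; a real study runs the Shannon or Mahajan model."""
        k = np.asarray(k)
        repol = 0.2 * k[0] + 0.3 * k[1] + 0.3 * k[2] + 0.2 * k[3]
        return 200.0 * k[4] / (repol * k[5])

    lo, hi = 140.0, 260.0             # hypothetical experimental APD90 range (ms)
    population = [k for k in itertools.product(scales, repeat=len(channels))
                  if lo <= apd90(k) <= hi]
    print(f"{len(population)} of {len(scales) ** len(channels)} sets accepted")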
Pulsed electrodeposition of two-dimensional Ag nanostructures on Au(111).
Borissov, D; Tsekov, R; Freyland, W
2006-08-17
One-step pulsed potential electrodeposition of Ag on Au(111) in the underpotential deposition (UPD) region has been studied in 0.5 mM Ag2SO4 + 0.1 M H2SO4 aqueous electrolyte at various pulse durations from 0.2 to 500 ms. Evolution of the deposited Ag nanostructures was followed by in situ scanning tunneling microscopy (STM) and by measurement of the respective current transients. At short pulse durations a relatively high number density (4 × 10¹¹ cm⁻²) of two-dimensional Ag clusters with a narrow size and distance distribution is observed. They exhibit a remarkably high stability, characterized by a dissolution potential which lies about 200 mV more anodic than the typical potential of Ag-(1 × 1) monolayer dissolution. To elucidate the underlying nucleation and growth mechanism, two models have been considered: two-dimensional lattice incorporation and a newly developed coupled diffusion-adsorption model. The first yields a qualitative description of the current transients, whereas the second is in nearly quantitative agreement with the experimental data. In this model the transformation of a Ag-(3 × 3) into a Ag-(1 × 1) structure, indicated in the cyclic voltammogram (peaks at 520 vs 20 mV), is taken into account.
NASA Astrophysics Data System (ADS)
Sumi, Tomonari; Okumoto, Atsushi; Goto, Hitoshi; Sekino, Hideo
2017-10-01
A two-step subdiffusion behavior of lateral movement of transmembrane proteins in plasma membranes has been observed by using single-molecule experiments. A nested double-compartment model where large compartments are divided into several smaller ones has been proposed in order to explain this observation. These compartments are considered to be delimited by membrane-skeleton "fences" and membrane-protein "pickets" bound to the fences. We perform numerical simulations of a master equation using a simple two-dimensional lattice model to investigate the heterogeneous diffusion dynamics behavior of transmembrane proteins within plasma membranes. We show that the experimentally observed two-step subdiffusion process can be described using fence and picket models combined with decreased local diffusivity of transmembrane proteins in the vicinity of the pickets. This allows us to explain the two-step subdiffusion behavior without explicitly introducing nested double compartments.
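A minimal lattice sketch of the fence effect, assuming a single compartment size and a hypothetical fence-crossing probability; the nested double compartments and the reduced diffusivity near pickets described in the abstract would add further hop-probability rules.

    import numpy as np

    rng = np.random.default_rng(4)
    comp = 16                 # compartment width in lattice sites (fence spacing)
    p_fence = 0.05            # hypothetical probability of crossing a fence
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

    def mean_square_displacement(n_steps=2000, n_walkers=200):
        out = np.zeros(n_steps)
        for _ in range(n_walkers):
            x0 = np.array([comp // 2, comp // 2])      # start mid-compartment
            x = x0.copy()
            for t in range(n_steps):
                x_new = x + moves[rng.integers(0, 4)]
                crosses = np.any(x // comp != x_new // comp)
                if (not crosses) or rng.random() < p_fence:
                    x = x_new                          # hop accepted
                out[t] += np.sum((x - x0) ** 2)
        return out / n_walkers

    msd = mean_square_displacement()
    # Sub-linear MSD growth on intermediate time scales (log-log slope < 1)
    # reflects transient confinement by the fences.
    for t in (10, 100, 1000):
        print(t, round(float(msd[t - 1]), 1))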
NASA Astrophysics Data System (ADS)
Aylor, Donald E.; Boehm, Matthew T.; Shields, Elson J.
2006-07-01
The extensive adoption of genetically modified crops has led to a need to understand better the dispersal of pollen in the atmosphere because of the potential for unwanted movement of genetic traits via pollen flow in the environment. The aerial dispersal of maize pollen was studied by comparing the results of a Lagrangian stochastic (LS) model with pollen concentration measurements made over cornfields using a combination of tower-based rotorod samplers and airborne radio-controlled remote-piloted vehicles (RPVs) outfitted with remotely operated pollen samplers. The comparison between model and measurements was conducted in two steps. In the first step, the LS model was used in combination with the rotorod samplers to estimate the pollen release rate Q for each sampling period. In the second step, a modeled value for the concentration Cmodel, corresponding to each RPV measured value Cmeasure, was calculated by simulating the RPV flight path through the LS model pollen plume corresponding to the atmospheric conditions, field geometry, wind direction, and source strength. The geometric mean and geometric standard deviation of the ratio Cmodel/Cmeasure over all of the sampling periods, except those determined to be upwind of the field, were 1.42 and 4.53, respectively, and the lognormal distribution corresponding to these values was found to fit closely the PDF of Cmodel/Cmeasure. Model output was sensitive to the turbulence parameters, with a factor-of-100 difference in the average value of Cmodel over the range of values encountered during the experiment. In comparison with this large potential variability, it is concluded that the average factor of 1.4 between Cmodel and Cmeasure found here indicates that the LS model is capable of accurately predicting, on average, concentrations over a range of atmospheric conditions.
RFID in the blood supply chain--increasing productivity, quality and patient safety.
Briggs, Lynne; Davis, Rodeina; Gutierrez, Alfonso; Kopetsky, Matthew; Young, Kassandra; Veeramani, Raj
2009-01-01
As part of an overall design of a new, standardized RFID-enabled blood transfusion medicine supply chain, an assessment was conducted for two hospitals: the University of Iowa Hospital and Clinics (UIHC) and Mississippi Baptist Health System (MBHS). The main objectives of the study were to assess RFID technological and economic feasibility, along with possible impacts to productivity, quality and patient safety. A step-by-step process analysis focused on the factors contributing to process "pain points" (errors, inefficiency, product losses). A process re-engineering exercise produced blueprints of RFID-enabled processes to alleviate or eliminate those pain points. In addition, an innovative model quantifying the potential reduction in adverse patient effects as a result of RFID implementation was created, allowing improvement initiatives to focus on process areas with the greatest potential impact to patient safety. The study concluded that it is feasible to implement RFID-enabled processes, with tangible improvements to productivity and safety expected. Based on a comprehensive cost/benefit model, it is estimated that a large hospital (UIHC) would recover the investment from implementation within two to three years, while smaller hospitals may need longer to realize ROI. More importantly, the study estimated that RFID technology could reduce morbidity and mortality effects substantially among patients receiving transfusions.
Zhu, Hao; Ye, Lin; Richard, Ann; Golbraikh, Alexander; Wright, Fred A.; Rusyn, Ivan; Tropsha, Alexander
2009-01-01
Background: Accurate prediction of in vivo toxicity from in vitro testing is a challenging problem. Large public–private consortia have been formed with the goal of improving chemical safety assessment by the means of high-throughput screening. Objective: A wealth of available biological data requires new computational approaches to link chemical structure, in vitro data, and potential adverse health effects. Methods and results: A database containing experimental cytotoxicity values for in vitro half-maximal inhibitory concentration (IC50) and in vivo rodent median lethal dose (LD50) for more than 300 chemicals was compiled by Zentralstelle zur Erfassung und Bewertung von Ersatz- und Ergaenzungsmethoden zum Tierversuch (ZEBET; National Center for Documentation and Evaluation of Alternative Methods to Animal Experiments). The application of conventional quantitative structure–activity relationship (QSAR) modeling approaches to predict mouse or rat acute LD50 values from chemical descriptors of ZEBET compounds yielded no statistically significant models. The analysis of these data showed no significant correlation between IC50 and LD50. However, a linear IC50 versus LD50 correlation could be established for a fraction of compounds. To capitalize on this observation, we developed a novel two-step modeling approach as follows. First, all chemicals are partitioned into two groups based on the relationship between IC50 and LD50 values: One group comprises compounds with linear IC50 versus LD50 relationships, and another group comprises the remaining compounds. Second, we built conventional binary classification QSAR models to predict the group affiliation based on chemical descriptors only. Third, we developed k-nearest neighbor continuous QSAR models for each subclass to predict LD50 values from chemical descriptors. All models were extensively validated using special protocols. Conclusions: The novelty of this modeling approach is that it uses the relationships between in vivo and in vitro data only to inform the initial construction of the hierarchical two-step QSAR models. Models resulting from this approach employ chemical descriptors only for external prediction of acute rodent toxicity. PMID:19672406
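The two-step logic, partition then per-group regression, can be sketched with scikit-learn on synthetic stand-ins for the descriptors and toxicity data (the real study used curated chemical descriptors and the ZEBET measurements):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsRegressor

    # Toy stand-ins: random "descriptors", a hidden group label, and an LD50
    # value whose dependence on the descriptors differs between the groups.
    rng = np.random.default_rng(5)
    X = rng.normal(size=(300, 10))
    group = (X[:, 0] > 0).astype(int)                 # 1 = linear IC50-LD50 subset
    ld50 = np.where(group == 1, 2.0 * X[:, 1], X[:, 2] ** 2) + rng.normal(0, 0.1, 300)

    # Step 1: classify group membership from descriptors only.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:200], group[:200])

    # Step 2: separate kNN regressors for LD50 within each group.
    knn = {g: KNeighborsRegressor(n_neighbors=5).fit(
               X[:200][group[:200] == g], ld50[:200][group[:200] == g])
           for g in (0, 1)}

    g_hat = clf.predict(X[200:])
    pred = np.array([knn[g].predict(x.reshape(1, -1))[0]
                     for g, x in zip(g_hat, X[200:])])
    print("RMSE:", np.sqrt(np.mean((pred - ld50[200:]) ** 2)))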
Balasubramanian, Saravana K; Coger, Robin N
2005-01-01
Bioartificial liver devices (BALs) have proven to be an effective bridge to transplantation for cases of acute liver failure. Enabling the long-term storage of these devices using a method such as cryopreservation would ensure their easy off-the-shelf availability. To date, cryopreservation of liver cells has been attempted for both single cells and sandwich cultures. This study presents the potential of using computational modeling to help develop a cryopreservation protocol for storing the three-dimensional BAL Hepatassist. The focus is upon determining the thermal and concentration profiles as the BAL is cooled from 37 °C to −100 °C, which is completed in two steps: a cryoprotectant loading step and a phase change step. The results indicate that, for the loading step, mass transfer controls the duration of the protocol, whereas for the phase change step, when mass transfer is assumed negligible, the latent heat released during freezing is the controlling factor. The cryoprotocol that is ultimately proposed considers time, cooling rate, and the temperature gradients that the cellular space is exposed to during cooling. To our knowledge, this study is the first reported effort toward designing an effective protocol for the cryopreservation of a three-dimensional BAL device.
Bubble suspension rheology and implications for conduit flow
NASA Astrophysics Data System (ADS)
Llewellin, E. W.; Manga, M.
2005-05-01
Bubbles are ubiquitous in magma during eruption and influence the rheology of the suspension. Despite this, bubble-suspension rheology is routinely ignored in conduit-flow and eruption models, potentially impairing accuracy and resulting in the loss of important phenomenological richness. The omission is due, in part, to a historical confusion in the literature concerning the effect of bubbles on the rheology of a liquid. This confusion has now been largely resolved and recently published studies have identified two viscous regimes: in regime 1, the viscosity of the two-phase (magma-gas) suspension increases as gas volume fraction ϕ increases; in regime 2, the viscosity of the suspension decreases as ϕ increases. The viscous regime for a deforming bubble suspension can be determined by calculating two dimensionless numbers, the capillary number Ca and the dynamic capillary number Cd. We provide a didactic explanation of how to include the effect of bubble-suspension rheology in continuum, conduit-flow models. Bubble-suspension rheology is reviewed and a practical rheological model is presented, followed by an algorithmic, step-by-step guide to including the rheological model in conduit-flow models. Preliminary results from conduit-flow models which have implemented the model presented are discussed and it is concluded that the effect of bubbles on magma rheology may be important in nature and results in a decrease of at least 800 m in calculated fragmentation-depth and an increase of between 40% and 250% in calculated eruption-rate compared with the assumption of Newtonian rheology.
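The regime selection reduces to computing the capillary number. The sketch below uses dilute-limit coefficients (Taylor's 1 + ϕ increase at low Ca and a 1 − (5/3)ϕ decrease at high Ca) as illustrative stand-ins for the full rheological model presented in the paper; the material properties are rough magma-like values, not measurements.

    def relative_viscosity(phi, Ca):
        """Crude dilute-limit regime switch for a bubble suspension: bubbles
        stiffen the flow at low Ca (regime 1) and lubricate it at high Ca
        (regime 2). Coefficients are illustrative only."""
        if Ca < 1.0:
            return 1.0 + phi              # regime 1: viscosity rises with phi
        return 1.0 - (5.0 / 3.0) * phi    # regime 2: viscosity falls with phi

    mu_liquid = 1e5        # melt viscosity (Pa s), illustrative
    Gamma = 0.3            # surface tension (N/m)
    a = 1e-4               # bubble radius (m)
    for shear_rate in (1e-5, 1e-1):
        Ca = mu_liquid * shear_rate * a / Gamma   # capillary number
        print(f"shear {shear_rate:g}/s -> Ca = {Ca:.2g},",
              f"mu_r = {relative_viscosity(0.2, Ca):.2f}")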
Impurity effects in crystal growth from solutions: Steady states, transients and step bunch motion
NASA Astrophysics Data System (ADS)
Ranganathan, Madhav; Weeks, John D.
2014-05-01
We analyze a recently formulated model in which adsorbed impurities impede the motion of steps in crystals grown from solutions, while moving steps can remove or deactivate adjacent impurities. In this model, the chemical potential change of an atom on incorporation/desorption to/from a step is calculated for different step configurations and used in the dynamical simulation of step motion. The crucial difference between solution growth and vapor growth is related to the dependence of the driving force for growth of the main component on the size of the terrace in front of the step. This model has features resembling experiments in solution growth, which yields a dead zone with essentially no growth at low supersaturation and the motion of large coherent step bunches at larger supersaturation. The transient behavior shows a regime wherein steps bunch together and move coherently as the bunch size increases. The behavior at large line tension is reminiscent of the kink-poisoning mechanism of impurities observed in calcite growth. Our model unifies different impurity models and gives a picture of nonequilibrium dynamics that includes both steady states and time dependent behavior and shows similarities with models of disordered systems and the pinning/depinning transition.
NASA Astrophysics Data System (ADS)
Nguyen, L. T.; Modrak, R. T.; Saenger, E. H.; Tromp, J.
2017-12-01
Reverse-time migration (RTM) can reconstruct reflectors and scatterers by cross-correlating the source wavefield and the receiver wavefield, given a known velocity model of the background. In nondestructive testing, however, the engineered structure under inspection is often composed of layers of various materials, and the background material has been degraded non-uniformly by environmental or operational effects. On the other hand, ultrasonic waveform tomography based on the principles of full-waveform inversion (FWI) has succeeded in detecting anomalous features in engineered structures. However, building a wave velocity model that fully captures small, high-contrast defects is difficult, because it requires computationally expensive high-frequency numerical wave simulations and an accurate understanding of the large-scale background variations of the engineered structure. To reduce computational cost and improve detection of small defects, a useful approach is to divide the waveform tomography procedure into two steps: first, a low-frequency model-building step aimed at recovering background structure using FWI, and second, a high-frequency imaging step targeting defects using RTM. Through synthetic test cases, we show that the two-step procedure appears more promising in most cases than a single-step inversion. In particular, we find that the new workflow succeeds in the challenging scenario where the defect lies along a preexisting layer interface in a composite bridge deck, and in related experiments involving noisy data or inaccurate source parameters. The results reveal the potential of the new wavefield imaging method and encourage further developments in data processing, increasing computational power, and optimizing the imaging workflow itself, so that the procedure can be applied efficiently to geometrically complex 3D solids and waveguides. Lastly, owing to the scale invariance of the elastic wave equation, this imaging procedure can be transferred to applications at regional scales as well.
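The RTM step can be illustrated in one dimension: forward-model data in a "true" model, subtract the direct wave simulated in the smooth background, back-propagate the residual, and apply the zero-lag cross-correlation imaging condition. Everything below (grid, velocities, wavelet) is illustrative, with simple reflecting boundaries, so mild artifacts are expected.

    import numpy as np

    # 1-D acoustic second-order finite differences: u_tt = c^2 u_xx.
    nx, nt, dx, dt = 400, 1200, 1.0, 2e-4
    src, rec = 40, 40

    def ricker(t, f0=25.0, t0=0.04):
        a = (np.pi * f0 * (t - t0)) ** 2
        return (1.0 - 2.0 * a) * np.exp(-a)

    def propagate(c, source_at=None, inject=None):
        """Returns (wavefield history, trace at rec); 'inject' back-propagates a trace."""
        u0, u1 = np.zeros(nx), np.zeros(nx)
        r2 = (c * dt / dx) ** 2
        hist, trace = np.empty((nt, nx)), np.empty(nt)
        for it in range(nt):
            u2 = np.zeros(nx)                          # Dirichlet boundaries
            u2[1:-1] = (2 * u1[1:-1] - u0[1:-1]
                        + r2[1:-1] * (u1[2:] - 2 * u1[1:-1] + u1[:-2]))
            if source_at is not None:
                u2[source_at] += ricker(it * dt)
            if inject is not None:
                u2[rec] += inject[it]
            u0, u1 = u1, u2
            hist[it], trace[it] = u1, u1[rec]
        return hist, trace

    c_true = np.full(nx, 3000.0); c_true[260:] = 3600.0  # interface at x = 260
    c_bg = np.full(nx, 3000.0)                           # smooth background (FWI output)

    _, d_true = propagate(c_true, source_at=src)         # "recorded" data
    _, d_bg = propagate(c_bg, source_at=src)             # direct wave to subtract
    scattered = d_true - d_bg

    S, _ = propagate(c_bg, source_at=src)                # source wavefield
    R, _ = propagate(c_bg, inject=scattered[::-1])       # time-reversed receiver field
    image = np.sum(S * R[::-1], axis=0)                  # zero-lag cross-correlation
    print("imaged reflector near x =", int(np.argmax(np.abs(image[100:]))) + 100)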
An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion
Chang, Yu-Bing; Xia, James J.; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo; Wong, Stephen T. C.
2017-01-01
In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to reestablishing dental occlusion automatically and efficiently. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum-distance mapping with collision constraints. With linearization of the rotation matrix, the alignment is modeled as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that, using our method, the dental models can be successfully articulated with a small degree of deviation from the occlusion achieved with the gold-standard method. PMID:20529735
Efficient variable time-stepping scheme for intense field-atom interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerjan, C.; Kosloff, R.
1993-03-01
The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy, in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense-field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
NASA Astrophysics Data System (ADS)
Fokin, Vladimir B.; Povarnitsyn, Mikhail E.; Levashov, Pavel R.
2017-02-01
We elaborated two numerical methods, two-temperature hydrodynamics and hybrid two-temperature molecular dynamics, which take into account the basic mechanisms of a metal target's response to ultrashort laser irradiation. The model used for the description of the electronic subsystem is identical in both approaches, while the ionic part is defined by an equation of state in hydrodynamics and by an interatomic potential in molecular dynamics. Since the phase diagram of the equation of state and the corresponding potential match reasonably well, the dynamics of laser ablation obtained by both methods are quite similar. This correspondence can be considered a first step towards the development of a self-consistent combined model. Two important processes are highlighted in simulations of double-pulse ablation: (1) the decrease in crater depth as a result of recoil flux formation in the nascent plume when the delay between the pulses increases; and (2) reheating of the plume by the second pulse, which gives rise to a two- to three-fold growth of the electron temperature as the delay varies from 0 to 200 ps.
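The electronic-subsystem description common to both methods is essentially a two-temperature model. A zero-dimensional sketch with order-of-magnitude, gold-like constants (illustrative values, not the paper's parameters): the laser source heats the electrons, and electron-phonon coupling relaxes the electron temperature Te toward the lattice temperature Ti.

    import numpy as np

    # Ce(Te) dTe/dt = S(t) - G (Te - Ti);  Ci dTi/dt = G (Te - Ti)
    gamma_e = 70.0       # electron heat capacity coefficient, Ce = gamma_e * Te
    Ci = 2.5e6           # lattice heat capacity (J m^-3 K^-1)
    G = 3.0e16           # electron-phonon coupling (W m^-3 K^-1)
    S0, tau_p, t0 = 5e21, 100e-15, 300e-15   # power density, pulse width, delay

    def source(t):
        return S0 * np.exp(-(((t - t0) / tau_p) ** 2))

    dt, Te, Ti = 1e-16, 300.0, 300.0
    for n in range(20000):                   # 2 ps of evolution
        t = n * dt
        Te += dt * (source(t) - G * (Te - Ti)) / (gamma_e * Te)
        Ti += dt * G * (Te - Ti) / Ci
    print(f"after 2 ps: Te ~ {Te:.0f} K, Ti ~ {Ti:.0f} K")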
Learning-Based Just-Noticeable-Quantization-Distortion Modeling for Perceptual Video Coding.
Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk
2018-07-01
Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds of distortions involving energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. The first, called LR-JNQD, is based on linear regression and determines the JNQD model parameters from extracted handcrafted features. The second is based on a convolutional neural network (CNN) and is called CNN-JNQD. To the best of our knowledge, ours is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation compared with the input without preprocessing.
Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu
2015-05-01
Data from the Fermi Large Area Telescope suggest that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons—in particular, the flux probability density function (PDF) of the photon counts below the point-source detection threshold—can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
Quantum Transmission Conditions for Diffusive Transport in Graphene with Steep Potentials
NASA Astrophysics Data System (ADS)
Barletti, Luigi; Negulescu, Claudia
2018-05-01
We present a formal derivation of a drift-diffusion model for stationary electron transport in graphene, in presence of sharp potential profiles, such as barriers and steps. Assuming the electric potential to have steep variations within a strip of vanishing width on a macroscopic scale, such strip is viewed as a quantum interface that couples the classical regions at its left and right sides. In the two classical regions, where the potential is assumed to be smooth, electron and hole transport is described in terms of semiclassical kinetic equations. The diffusive limit of the kinetic model is derived by means of a Hilbert expansion and a boundary layer analysis, and consists of drift-diffusion equations in the classical regions, coupled by quantum diffusive transmission conditions through the interface. The boundary layer analysis leads to the discussion of a four-fold Milne (half-space, half-range) transport problem.
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with large numbers of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the number of the sensitive parameters. PMID:26161544
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant-coefficient model problems, the acceleration factor increases from 1 to 2 as the one-step procedure convergence rate decreases from +infinity to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDEs) with variable coefficients.
Partition-based discrete-time quantum walks
NASA Astrophysics Data System (ADS)
Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo
2018-04-01
We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the two-tessellable staggered model are unitarily equivalent. Selecting one specific model among those families is then a matter of taste, not generality.
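A small numerical sketch of the two-step coined model mentioned above: a Hadamard-coined walk on an N-cycle, with the two-step evolution taken as the square of the one-step unitary (the cycle size and initial state are illustrative choices):

```python
import numpy as np

# Coined walk on an N-cycle; the "two-step coined model" evolves by U @ U.
N = 8
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard coin
C = np.kron(H, np.eye(N))                             # coin operator on coin space

S = np.zeros((2 * N, 2 * N))                          # conditional shift
for x in range(N):
    S[(x + 1) % N, x] = 1                             # coin |0>: step right
    S[N + (x - 1) % N, N + x] = 1                     # coin |1>: step left

U = S @ C                                             # one step of the coined walk
U2 = U @ U                                            # two-step coined model
assert np.allclose(U2 @ U2.conj().T, np.eye(2 * N))   # still unitary

psi = np.zeros(2 * N); psi[0] = 1.0                   # walker at node 0, coin |0>
prob = np.abs(U2 @ psi) ** 2
print(prob[:N] + prob[N:])                            # position distribution
```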
Tri-Texts: A Potential Next Step for Paired Texts
ERIC Educational Resources Information Center
Ciecierski, Lisa M.; Bintz, William P.
2018-01-01
This article presents the concept of tri-texts as a potential next step from paired texts following a collaborative inquiry with fifth-grade students. Paired texts are two texts intertextually connected, whereas tri-texts are three texts connected this way. The authors begin the article with a short literature review highlighting some of the…
Akam, Thomas; Costa, Rui; Dayan, Peter
2015-01-01
The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues. PMID:26657806
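A hedged simulation sketch of the basic task structure with a purely model-free TD learner (transition probability, reward probabilities, learning rate and choice temperature are all illustrative); the stay-probability pattern it prints, reward raising the tendency to repeat a first-stage choice regardless of transition type, is the kind of signature the authors probe:

```python
import numpy as np

# Minimal two-step task with a purely model-free TD(1) learner; all numbers
# (p_common, p_reward, alpha, beta) are illustrative assumptions.
rng = np.random.default_rng(0)
alpha, beta, n_trials, p_common = 0.3, 5.0, 5000, 0.7
p_reward = np.array([[0.8, 0.2], [0.3, 0.6]])        # stage-2 state x action
Q1, Q2 = np.zeros(2), np.zeros((2, 2))
stay = {(t, r): [] for t in ("common", "rare") for r in (True, False)}
prev = None

for _ in range(n_trials):
    p = np.exp(beta * Q1); p /= p.sum()              # softmax first-stage choice
    a1 = rng.choice(2, p=p)
    common = rng.random() < p_common
    s2 = a1 if common else 1 - a1                    # transition structure
    a2 = int(np.argmax(Q2[s2])) if rng.random() < 0.9 else int(rng.choice(2))
    r = float(rng.random() < p_reward[s2, a2])
    Q2[s2, a2] += alpha * (r - Q2[s2, a2])
    Q1[a1] += alpha * (r - Q1[a1])                   # TD(1): outcome credited directly
    if prev is not None:
        stay[("common" if prev[1] else "rare", prev[2])].append(float(a1 == prev[0]))
    prev = (a1, common, r > 0)

for key in stay:                                     # model-free signature: reward
    print(key, round(float(np.mean(stay[key])), 3))  # raises staying for both types
```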
Two-Step Formal Advertisement: An Examination.
1976-10-01
The purpose of this report is to examine the potential application of the Two-Step Formal Advertisement method of procurement. Emphasis is placed on... Two-step formal advertising is a method of procurement designed to take advantage of negotiation flexibility and at the same time obtain the benefits of... formal advertising. It is used where the specifications are not sufficiently definite or may be too restrictive to permit full and free competition.
NASA Astrophysics Data System (ADS)
Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.
2014-03-01
Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.
Gedeon, Patrick C; Thomas, James R; Madura, Jeffry D
2015-01-01
Molecular dynamics simulation provides a powerful and accurate method to model protein conformational change, yet timescale limitations often prevent direct assessment of the kinetic properties of interest. A large number of molecular dynamics steps are necessary for rare events to occur, which allow a system to overcome energy barriers and conformationally transition from one potential energy minimum to another. For many proteins, the energy landscape is further complicated by a multitude of potential energy wells, each separated by high free-energy barriers and each potentially representative of a functionally important protein conformation. To overcome these obstacles, accelerated molecular dynamics utilizes a robust bias potential function to simulate the transition between different potential energy minima. This straightforward approach samples conformational space more efficiently than classical molecular dynamics simulation, does not require advance knowledge of the potential energy landscape, and converges to the proper canonical distribution. Here, we review the theory behind accelerated molecular dynamics and discuss the approach in the context of modeling protein conformational change. As a practical example, we provide a detailed, step-by-step explanation of how to perform an accelerated molecular dynamics simulation using a model neurotransmitter transporter embedded in a lipid cell membrane. Changes in protein conformation of relevance to the substrate transport cycle are then examined using principal component analysis.
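A toy sketch of the kind of bias potential involved, assuming the widely used form of Hamelberg et al.; the double-well potential, threshold E and smoothing parameter alpha below are illustrative:

```python
import numpy as np

# Accelerated-MD bias (Hamelberg et al. form): below a threshold E the
# potential is raised by dV = (E - V)^2 / (alpha + E - V), shallowing the
# wells and so lowering effective barriers; regions with V >= E are untouched.
def boost(V, E, alpha):
    Vb = V.copy()
    mask = V < E
    Vb[mask] += (E - V[mask]) ** 2 / (alpha + E - V[mask])
    return Vb

x = np.linspace(-2.0, 2.0, 401)
V = (x ** 2 - 1.0) ** 2                  # toy double well, barrier height 1 at x = 0
Vb = boost(V, E=0.8, alpha=0.2)
i_top, i_well = np.argmin(np.abs(x)), np.argmin(np.abs(x - 1.0))
print(V[i_top] - V[i_well], Vb[i_top] - Vb[i_well])   # effective barrier shrinks
```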
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface and skull models. Our proposed registration algorithm can achieve a good alignment between partial and whole maxillofacial models in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
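A hedged sketch of a comparable coarse-to-fine pipeline using the Open3D library; as assumptions, the paper's 3D-SIFT and SAC-IA stages are approximated here by Open3D's RANSAC-based FPFH matching, and the file names are placeholders. Open3D's RANSAC matcher plays the same role as SAC-IA: both draw small feature-matched correspondence sets and keep the best-scoring rigid transform.

```python
import open3d as o3d

# Coarse FPFH-feature alignment followed by ICP refinement (Open3D API).
def preprocess(pcd, voxel):
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

source = o3d.io.read_point_cloud("face_partial.ply")   # placeholder paths
target = o3d.io.read_point_cloud("skull_whole.ply")
voxel = 2.0
src, src_f = preprocess(source, voxel)
tgt, tgt_f = preprocess(target, voxel)

coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, tgt, src_f, tgt_f, True, 1.5 * voxel,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

fine = o3d.pipelines.registration.registration_icp(     # ICP refinement
    src, tgt, 0.5 * voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(fine.fitness, fine.transformation)
```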
Effects of age and step length on joint kinetics during stepping task.
Bieryla, Kathleen A; Buffinton, Christine
2015-07-16
Following a balance perturbation, a stepping response is commonly used to regain support, and the distance of the recovery step can vary. To date, no other studies have examined joint kinetics in young and old adults during increasing step distances, when participants are required to bring their rear foot forward. Therefore, the purpose of this study was to examine age-related differences in joint kinetics with increasing step distance. Twenty young and 20 old adults completed the study. Participants completed a step starting from double support, at an initial distance equal to the individual's average step length. The distance was increased by 10% body height until an unsuccessful attempt. A one-way, repeated measures ANOVA was used to determine the effects of age on joint kinetics during the maximum step distance. A two-way, repeated measures, mixed model ANOVA was used to determine the effects of age, step distance, and their interaction on joint kinetics during the first three step distances for all participants. Young adults completed a significantly longer step than old adults. During the maximum step, in general, kinetic measures were greater in the young than in the old. As step distance increased, all but one kinetic measure increased for both young and old adults. This study has shown the ability to discriminate between young and old adults, and could potentially be used in the future to distinguish between fallers and non-fallers.
A Heckman selection model for the safety analysis of signalized intersections
Wong, S. C.; Zhu, Feng; Pei, Xin; Huang, Helai; Liu, Youjun
2017-01-01
Purpose: The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. Methods: This study explores a Heckman selection model of crash rate and severity at different levels, using a two-step procedure. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury and killed or serious injury (KSI) crashes, respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on the traffic flow, geometric road design, road environment, traffic control and any crashes that occurred during two years. Results: The results of the proposed two-step Heckman selection model illustrate the necessity of estimating different crash rates for different crash severity levels. Conclusions: A comparison with existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance of signalized intersections. PMID:28732050
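A minimal sketch of the classic two-step Heckman estimator on synthetic data; all covariates, coefficients and the selection rule below are invented for illustration (the paper's covariates are traffic and road-design variables):

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Two-step Heckman correction: probit selection, then OLS with the
# inverse Mills ratio (IMR) as an extra regressor on the selected sample.
rng = np.random.default_rng(1)
n = 555
X = sm.add_constant(rng.normal(size=(n, 2)))          # outcome covariates
Z = sm.add_constant(rng.normal(size=(n, 3)))          # selection covariates
u, e = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], n).T
select = (Z @ np.array([0.2, 0.7, -0.4, 0.3]) + u) > 0   # sample-selection rule
y = X @ np.array([1.0, 0.5, -0.8]) + e                # latent outcome (e.g. severity)

# Step 1: probit selection equation -> inverse Mills ratio.
probit = sm.Probit(select.astype(float), Z).fit(disp=0)
zb = Z @ probit.params
imr = norm.pdf(zb) / norm.cdf(zb)

# Step 2: outcome regression on the selected sample, augmented with the IMR.
ols = sm.OLS(y[select], np.column_stack([X[select], imr[select]])).fit()
print(ols.params)   # last coefficient ~ rho * sigma (the selection effect)
```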
Pectin gelation with chlorhexidine: Physico-chemical studies in dilute solutions.
Lascol, Manon; Bourgeois, Sandrine; Guillière, Florence; Hangouët, Marie; Raffin, Guy; Marote, Pedro; Lantéri, Pierre; Bordes, Claire
2016-10-05
Low-methoxyl pectin is known to gel with divalent cations (e.g. Ca²⁺, Zn²⁺). In this study, a new route to pectin gelation in the presence of an active pharmaceutical ingredient, chlorhexidine (CX), was highlighted. Chlorhexidine interactions with pectin were therefore investigated and compared with the well-known pectin/Ca²⁺ binding model. Gelation mechanisms were studied by several physico-chemical methods, such as zeta potential, viscosity and size measurements, and the binding isotherm was determined by proton nuclear magnetic resonance spectroscopy (¹H NMR). For both cations, the binding process exhibited the same first two steps: a stoichiometric monocomplexation of the polymer followed by a dimerization step. However, stronger interactions were observed between pectin and chlorhexidine. Moreover, the dimerization step occurred under stoichiometric conditions with chlorhexidine, whereas non-stoichiometric conditions were involved with calcium ions. In the case of chlorhexidine, an additional intermolecular binding occurred in a third step.
Dissociative Ionization of Benzene by Electron Impact
NASA Technical Reports Server (NTRS)
Huo, Winifred; Dateo, Christopher; Kwak, Dochan (Technical Monitor)
2002-01-01
We report a theoretical study of the dissociative ionization (DI) of benzene from the low-lying ionization channels. Our approach makes use of the fact that electron motion is much faster than nuclear motion, and DI is treated as a two-step process. The first step is electron-impact ionization, resulting in an ion with the same nuclear geometry as the neutral molecule. In the second step the nuclei relax from the initial geometry and undergo unimolecular dissociation. For the ionization process we use the improved binary-encounter dipole (iBED) model. For the unimolecular dissociation step, we study the steepest-descent reaction path to the minimum of the ion potential energy surface. The path is used to analyze the probability of unimolecular dissociation and to determine the product distributions. Our analysis of the dissociation products and the thresholds for their production is compared with the dissociative photoionization measurements of Feng et al. The partial oscillator strengths from Feng et al. are then used in the iBED cross section calculations.
X-1 to X-Wings: Developing a Parametric Cost Model
NASA Technical Reports Server (NTRS)
Sterk, Steve; McAtee, Aaron
2015-01-01
In today's cost-constrained environment, NASA needs an X-Plane database and parametric cost model that can quickly provide rough order-of-magnitude predictions of cost from initial concept to first flight of potential X-Plane aircraft. This paper describes the steps taken in developing such a model and reports the results. The challenges encountered in the collection of historical data and recommendations for future database management are discussed. A step-by-step discussion of the development of Cost Estimating Relationships (CERs) is then covered.
The Costs and Potential Benefits of Alternative Scholarly Publishing Models
ERIC Educational Resources Information Center
Houghton, John W.
2011-01-01
Introduction: This paper reports on a study undertaken for the UK Joint Information Systems Committee (JISC), which explored the economic implications of alternative scholarly publishing models. Rather than simply summarising the study's findings, this paper focuses on the approach and presents a step-by-step account of the research process,…
NASA Astrophysics Data System (ADS)
McKean, John R.; Johnson, Donn; Taylor, R. Garth
2010-09-01
Choice of the appropriate model of economic behavior is important for the measurement of nonmarket demand and benefits. Several travel cost demand model specifications are currently in use. Uncertainty exists over the efficacy of these approaches, and more theoretical and empirical study is warranted. Thus travel cost models with differing assumptions about labor markets and consumer behavior were applied to estimate the demand for steelhead trout sportfishing on an unimpounded reach of the Snake River near Lewiston, Idaho. We introduce a modified two-step decision model that incorporates endogenous time value using a latent index variable approach. The focus is on the importance of distinguishing between short-run and long-run consumer decision variables in a consistent manner. A modified Barnett two-step decision model was found superior to other models tested.
NASA Astrophysics Data System (ADS)
Ilie, Ioana M.; den Otter, Wouter K.; Briels, Wim J.
2016-02-01
Particles in simulations are traditionally endowed with fixed interactions. While this is appropriate for particles representing atoms or molecules, objects with significant internal dynamics—like sequences of amino acids or even an entire protein—are poorly modelled by invariable particles. We develop a highly coarse-grained polymorph patchy particle with the ultimate aim of simulating proteins as chains of particles at the secondary structure level. Conformational changes, e.g., a transition between disordered and β-sheet states, are accommodated by internal coordinates that determine the shape and interaction characteristics of the particles. The internal coordinates, as well as the particle positions and orientations, are propagated by Brownian Dynamics in response to their local environment. As an example of the potential offered by polymorph particles, we model the amyloidogenic intrinsically disordered protein α-synuclein, involved in Parkinson's disease, as a single particle with two internal states. The simulations yield oligomers of particles in the disordered state and fibrils of particles in the "misfolded" cross-β-sheet state. The aggregation dynamics is complex, with fibrils formed by direct nucleation-and-growth, by two-step nucleation through the conversion of an oligomer, and by auto-catalysis of this conversion.
A model-based exploration of the role of pattern generating circuits during locomotor adaptation.
Marjaninejad, Ali; Finley, James M
2016-08-01
In this study, we used a model-based approach to explore the potential contributions of central pattern generating circuits (CPGs) during adaptation to external perturbations during locomotion. We constructed a neuromechanical model of locomotion using a reduced-phase CPG controller and an inverted-pendulum mechanical model. Two different forms of locomotor adaptation were examined in this study: split-belt treadmill adaptation and adaptation to a unilateral, elastic force field. For each simulation, we first examined the effects of phase resetting and varying the model's initial conditions on the resulting adaptation. After evaluating the effect of phase resetting on the adaptation of step length symmetry, we examined the extent to which the results from these simple models could explain previous experimental observations. We found that adaptation of step length symmetry during split-belt treadmill walking could be reproduced using our model, but the model failed to replicate patterns of adaptation observed in response to force field perturbations. Given that spinal animal models can adapt to both of these types of perturbations, our findings suggest that there may be distinct features of pattern generating circuits that mediate each form of adaptation.
Linking pedestrian flow characteristics with stepping locomotion
NASA Astrophysics Data System (ADS)
Wang, Jiayue; Boltes, Maik; Seyfried, Armin; Zhang, Jun; Ziemer, Verena; Weng, Wenguo
2018-06-01
While the properties of human traffic flow are described by speed, density and flow, the locomotion of pedestrians is based on steps. To relate characteristics of the human locomotor system to properties of human traffic flow, this paper aims to connect gait characteristics like step length, step frequency, swaying amplitude and synchronization with speed and density, and thus to build a foundation for advanced pedestrian models. To this end, an observational and experimental study of the single-file movement of pedestrians at different densities is conducted. Methods to measure step length, step frequency, swaying amplitude and step synchronization are proposed by means of trajectories of the head. Mathematical models for the relations between step length or frequency and speed are evaluated. The question of how step length and step duration are influenced by factors like body height and density is investigated. It is shown that the effect of body height on step length and step duration changes with density. Furthermore, two different types of step in-phase synchronization between two successive pedestrians are observed, and the influence of step synchronization on step length is examined.
Hierarchical Regularity in Multi-Basin Dynamics on Protein Landscapes
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Kostov, Konstatin S.; Komatsuzaki, Tamiki
2004-04-01
We analyze time series of potential energy fluctuations and principal components at several temperatures for two kinds of off-lattice 46-bead models that have two distinctive energy landscapes. The less-frustrated "funnel" energy landscape brings about stronger nonstationary behavior of the potential energy fluctuations at the folding temperature than the other, rather frustrated energy landscape at the collapse temperature. By combining principal component analysis with an embedding nonlinear time-series analysis, it is shown that the fast fluctuations with small amplitudes of 70-80% of the principal components cause the time series to become almost "random" in only 100 simulation steps. However, the stochastic feature of the principal components tends to be suppressed through a wide range of degrees of freedom at the transition temperature.
ERIC Educational Resources Information Center
Dell, Gary S.; Martin, Nadine; Schwartz, Myrna F.
2007-01-01
Lexical access in language production, and particularly pathologies of lexical access, are often investigated by examining errors in picture naming and word repetition. In this article, we test a computational approach to lexical access, the two-step interactive model, by examining whether the model can quantitatively predict the repetition-error…
NASA Astrophysics Data System (ADS)
Sapilewski, Glen Alan
The Satellite Test of the Equivalence Principle (STEP) is a modern version of Galileo's experiment of dropping two objects from the leaning tower of Pisa. The Equivalence Principle states that all objects fall with the same acceleration, independent of their composition. The primary scientific objective of STEP is to measure a possible violation of the Equivalence Principle one million times better than the best ground based tests. This extraordinary sensitivity is made possible by using cryogenic differential accelerometers in the space environment. Critical to the STEP experiment is a sound fundamental understanding of the behavior of the superconducting magnetic linear bearings used in the accelerometers. We have developed a theoretical bearing model and a precision measuring system with which to validate the model. The accelerometers contain two concentric hollow cylindrical test masses, of different materials, each levitated and constrained to axial motion by a superconducting magnetic bearing. Ensuring that the bearings satisfy the stringent mission specifications requires developing new testing apparatus and methods. The bearing is tested using an actively-controlled table which tips it relative to gravity. This balances the magnetic forces from the bearing against a component of gravity. The magnetic force profile of the bearing can be mapped by measuring the tilt necessary to position the test mass at various locations. An operational bearing has been built and is being used to verify the theoretical levitation models. The experimental results obtained from the bearing test apparatus were inconsistent with the previous models used for STEP bearings. This led to the development of a new bearing model that includes the influence of surface current variations in the bearing wires and the effect of the superconducting transformer. The new model, which has been experimentally verified, significantly improves the prediction of levitation current, accurately estimates the relationship between tilting and translational modes, and predicts the dependence of radial mode frequencies on the bearing current. In addition, we developed a new model for the forces produced by trapped magnetic fluxons, a potential source of imperfections in the bearing. This model estimates the forces between magnetic fluxons trapped in separate superconducting objects.
NASA Technical Reports Server (NTRS)
Batterson, J. G.
1986-01-01
The successful parametric modeling of the aerodynamics of an airplane operating at high angles of attack or sideslip is performed in two phases. First the aerodynamic model structure must be determined, and second the associated aerodynamic parameters (stability and control derivatives) must be estimated for that model. The purpose of this paper is to document two versions of a stepwise regression computer program which were developed for the determination of airplane aerodynamic model structure, and to provide two examples of their use on computer-generated data. References are provided for the application of the programs to real flight data. The two computer programs that are the subject of this report, STEP and STEPSPL, are written in FORTRAN IV (ANSI 1966) compatible with a CDC FTN4 compiler. Both programs are adaptations of a standard forward stepwise regression algorithm. The purpose of the adaptation is to facilitate the selection of an adequate mathematical model of the aerodynamic force and moment coefficients of an airplane from flight test data. The major difference between STEP and STEPSPL is in the basis for the model. The basis for the model in STEP is the standard polynomial Taylor series expansion of the aerodynamic function about some steady-state trim condition. Program STEPSPL utilizes a set of spline basis functions.
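A compact Python sketch of the forward stepwise selection loop that such programs implement; the F-to-enter test is simplified here to a relative drop in the residual sum of squares, and the candidate-variable names are invented:

```python
import numpy as np

# Forward stepwise regression: greedily add the regressor that most reduces
# the residual sum of squares, stopping when the gain becomes negligible.
def forward_stepwise(X, y, names, min_gain=1e-3):
    chosen, remaining = [], list(range(X.shape[1]))
    best_sse = np.sum((y - y.mean()) ** 2)
    while remaining:
        gains = []
        for j in remaining:
            A = np.column_stack([np.ones(len(y)), X[:, chosen + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            gains.append((best_sse - np.sum((y - A @ beta) ** 2), j))
        gain, j = max(gains)
        if gain < min_gain * best_sse:
            break
        chosen.append(j); remaining.remove(j); best_sse -= gain
    return [names[j] for j in chosen]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                 # invented candidate regressors
y = 2 * X[:, 0] - 3 * X[:, 2] + 0.1 * rng.normal(size=200)
print(forward_stepwise(X, y, ["alpha", "beta", "p", "q", "r", "de"]))
```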
Optimizing DNA nanotechnology through coarse-grained modeling: a two-footed DNA walker.
Ouldridge, Thomas E; Hoare, Rollo L; Louis, Ard A; Doye, Jonathan P K; Bath, Jonathan; Turberfield, Andrew J
2013-03-26
DNA has enormous potential as a programmable material for creating artificial nanoscale structures and devices. For more complex systems, however, rational design and optimization can become difficult. We have recently proposed a coarse-grained model of DNA that captures the basic thermodynamic, structural, and mechanical changes associated with the fundamental process in much of DNA nanotechnology, the formation of duplexes from single strands. In this article, we demonstrate that the model can provide powerful insight into the operation of complex nanotechnological systems through a detailed investigation of a two-footed DNA walker that is designed to step along a reusable track, thereby offering the possibility of optimizing the design of such systems. We find that applying moderate tension to the track can have a large influence on the operation of the walker, providing a bias for stepping forward and helping the walker to recover from undesirable overstepped states. Further, we show that the process by which spent fuel detaches from the walker can have a significant impact on the rebinding of the walker to the track, strongly influencing walker efficiency and speed. Finally, using the results of the simulations, we propose a number of modifications to the walker to improve its operation.
Principal Dynamic Mode Analysis of the Hodgkin–Huxley Equations
Eikenberry, Steffen E.; Marmarelis, Vasilis Z.
2015-01-01
We develop an autoregressive model framework based on the concept of Principal Dynamic Modes (PDMs) for the process of action potential (AP) generation in the excitable neuronal membrane described by the Hodgkin–Huxley (H–H) equations. The model's exogenous input is injected current, and whenever the membrane potential output exceeds a specified threshold, it is fed back as a second input. The PDMs are estimated from the previously developed Nonlinear Autoregressive Volterra (NARV) model, and represent an efficient functional basis for Volterra kernel expansion. The PDM-based model admits a modular representation, consisting of the forward and feedback PDM bases as linear filterbanks for the exogenous and autoregressive inputs, respectively, whose outputs are then fed to a static nonlinearity composed of polynomials operating on the PDM outputs and cross-terms of pair-products of PDM outputs. A two-step procedure for model reduction is performed: first, influential subsets of the forward and feedback PDM bases are identified and selected as the reduced PDM bases. Second, the terms of the static nonlinearity are pruned. The first step reduces model complexity from a total of 65 coefficients to 27, while the second further reduces the model coefficients to only eight. It is demonstrated that the performance cost of model reduction in terms of out-of-sample prediction accuracy is minimal. Unlike the full model, the eight coefficient pruned model can be easily visualized to reveal the essential system components, and thus the data-derived PDM model can yield insight into the underlying system structure and function. PMID:25630480
Dell, Gary S.; Martin, Nadine; Schwartz, Myrna F.
2010-01-01
Lexical access in language production, and particularly pathologies of lexical access, are often investigated by examining errors in picture naming and word repetition. In this article, we test a computational approach to lexical access, the two-step interactive model, by examining whether the model can quantitatively predict the repetition-error patterns of 65 aphasic subjects from their naming errors. The model's characterizations of the subjects' naming errors were taken from the companion paper to this one (Schwartz, Dell, N. Martin, Gahl & Sobel, 2006), and their repetition was predicted from the model on the assumption that naming involves two error-prone steps, word and phonological retrieval, whereas repetition only creates errors in the second of these steps. A version of the model in which lexical-semantic and lexical-phonological connections could be independently lesioned was generally successful in predicting repetition for the aphasics. An analysis of the few cases in which model predictions were inaccurate revealed the role of input phonology in the repetition task. PMID:21085621
Correlation between Gas Bubble Formation and Hydrogen Evolution Reaction Kinetics at Nanoelectrodes.
Chen, Qianjin; Luo, Long
2018-04-17
We report the correlation between H2 gas bubble formation potential and hydrogen evolution reaction (HER) activity for Au and Pt nanodisk electrodes (NEs). Microkinetic models were formulated to obtain the HER kinetic information for individual Au and Pt NEs. We found that the rate-determining steps for the HER at Au and Pt NEs were the Volmer step and the Heyrovsky step, respectively. More interestingly, the standard rate constant (k_0) of the rate-determining step was found to vary over 2 orders of magnitude for the same type of NEs. The observed variations indicate the HER activity heterogeneity at the nanoscale. Furthermore, we discovered a linear relationship between bubble formation potential (E_bubble) and log(k_0) with a slope of 125 mV/decade for both Au and Pt NEs. As log(k_0) increases, E_bubble shifts linearly to more positive potentials, meaning NEs with higher HER activities form H2 bubbles at less negative potentials. Our theoretical model suggests that this linear relationship is caused by the similar critical bubble formation condition for Au and Pt NEs with varied sizes. Our results have potential implications for using gas bubble formation to evaluate the HER activity distribution of nanoparticles in an ensemble.
The hyperbolic step potential: Anti-bound states, SUSY partners and Wigner time delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gadella, M.; Kuru, Ş.; Negro, J., E-mail: jnegro@fta.uva.es
We study the scattering produced by a one-dimensional hyperbolic step potential, which is exactly solvable and of unusual interest because of its asymmetric character. The analytic continuation of the scattering matrix in the momentum representation has a branch cut and an infinite number of simple poles on the negative imaginary axis, which are related to the so-called anti-bound states. This model does not show resonances. Using the wave functions of the anti-bound states, we obtain supersymmetric (SUSY) partners which are the series of Rosen–Morse II potentials. We have computed the Wigner reflection and transmission time delays for the hyperbolic step and such SUSY partners. Our results show that the more bound states a partner Hamiltonian has, the smaller is the time delay. We have also evaluated time delays for the hyperbolic step potential in the classical case and have obtained striking similarities with the quantum case.
[Application of ordinary Kriging method in entomologic ecology].
Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong
2003-01-01
Geostatistics is a statistical method based on regionalized variables that uses the variogram to analyze the spatial structure and patterns of organisms. When fitting a variogram over a large range, an optimal fit cannot always be obtained automatically, but an interactive human-computer procedure can be used to optimize the parameters of the spherical models. In this paper, this method and weighted polynomial regression were used to fit the one-step spherical model, the two-step spherical model and the linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides the best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between estimated and measured values were computed for the different theoretical models, and the corresponding graphs are shown. The two-step spherical model gave the best fit, and the one-step spherical model fit better than the linear function model.
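A self-contained sketch of the ordinary Kriging step on synthetic data; the spherical-variogram nugget, sill and range below are illustrative, not the paper's fitted values. The constraint row in the linear system forces the weights to sum to one, which is what makes the estimator unbiased.

```python
import numpy as np

# Spherical semivariogram and an ordinary-kriging estimate at one location.
def spherical(h, nugget=0.1, sill=1.0, a=10.0):
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, sill) * (h > 0)     # gamma(0) = 0

rng = np.random.default_rng(2)
pts = rng.uniform(0, 20, size=(30, 2))            # sample locations
z = np.sin(pts[:, 0] / 3) + 0.1 * rng.normal(size=30)   # e.g. insect counts
x0 = np.array([10.0, 10.0])                       # prediction location

d = np.linalg.norm(pts[:, None] - pts[None], axis=2)
G = spherical(d)
A = np.block([[G, np.ones((30, 1))], [np.ones((1, 30)), np.zeros((1, 1))]])
b = np.append(spherical(np.linalg.norm(pts - x0, axis=1)), 1.0)
w = np.linalg.solve(A, b)[:30]                    # kriging weights (sum to 1)
print(w.sum(), w @ z)                             # BLUE estimate at x0
```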
On-orbit assembly of a team of flexible spacecraft using potential field based method
NASA Astrophysics Data System (ADS)
Chen, Ti; Wen, Hao; Hu, Haiyan; Jin, Dongping
2017-04-01
In this paper, a novel control strategy is developed based on artificial potential fields for the on-orbit autonomous assembly of four flexible spacecraft without inter-member collision. Each flexible spacecraft is simplified as a hub-beam model with truncated beam modes in the floating frame of reference, and the communication graph among the four spacecraft is assumed to be a ring topology. The four spacecraft are driven first to a pre-assembly configuration and then to the assembly configuration. In order to design the artificial potential field for the first step, each spacecraft is outlined by an ellipse and a virtual circular leader is introduced. The potential field mainly depends on the attitude error between the flexible spacecraft and its neighbor, the radial Euclidean distance between the ellipse and the circle, and the classical Euclidean distance between the centers of the ellipse and the circle. It can be demonstrated that the potential function has no local minima and that its global minimum is zero; when the function equals zero, the solution is not a single state but a set, and all states in this set correspond to desired configurations. The Lyapunov analysis guarantees that the four spacecraft asymptotically converge to the target configuration. Moreover, a second potential field is included to avoid inter-member collisions. In the control design of the second step, only a small modification of the first-step controller is needed. Finally, the successful application of the proposed control law to the assembly mission is verified by two case studies.
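A much-simplified toy sketch of the attract-and-avoid mechanism: point agents with quadratic attraction and a short-range repulsive potential stand in for the paper's ellipse- and attitude-dependent fields, and all gains and positions are invented.

```python
import numpy as np

# Gradient descent on an artificial potential: quadratic attraction to a goal
# plus a repulsion that activates only when two agents are closer than d_safe.
def step(pos, goals, k_att=1.0, k_rep=5.0, d_safe=1.0, dt=0.05):
    grad = k_att * (pos - goals)                       # attractive gradient
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            d = pos[i] - pos[j]
            r = np.linalg.norm(d)
            if r < d_safe:                             # repulsion only when close
                grad[i] -= k_rep * (1 / r - 1 / d_safe) * d / r ** 3
    return pos - dt * grad                             # descend the total potential

pos = np.array([[0., 0.], [4., 0.], [0., 4.], [4., 4.]])   # initial positions
goals = np.array([[2., 1.8], [2., 2.2], [1.8, 2.], [2.2, 2.]])
for _ in range(400):
    pos = step(pos, goals)
print(np.round(pos, 2))   # agents approach goals while keeping separation
```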
Study of in-medium η′ properties in the (γ, η′p) reaction on nuclei
NASA Astrophysics Data System (ADS)
Paryev, E. Ya
2016-01-01
We study the near-threshold photoproduction of η′ mesons from nuclei in coincidence with forward-going protons in the kinematical conditions of the Crystal Barrel/TAPS experiment, recently performed at ELSA. The calculations have been performed within a collision model based on the nuclear spectral function. The model accounts for both the primary γp → η′p process and the two-step intermediate nucleon rescattering processes, as well as the effect of the nuclear η′ mean-field potential. We calculate the exclusive η′ kinetic energy distributions for the ¹²C(γ, η′p) reaction for different scenarios of η′ in-medium modification. We find that the considered two-step rescattering mechanism plays an insignificant role in η′p photoproduction off the carbon target. We also demonstrate that the calculated η′ kinetic energy distributions in primary photon-proton η′p production reveal strong sensitivity to the depth of the real η′ potential at normal nuclear matter density (or to the η′ in-medium mass shift) in the studied incident photon energy regime. Therefore, such observables may be useful to help determine the above η′ in-medium renormalization by comparing the results of our calculations with the data from the CBELSA/TAPS experiment. In addition, we show that these distributions are also strongly influenced by the momentum-dependent optical potential that the outgoing participant proton feels inside the carbon nucleus. This potential should be taken into account in the analysis of these data with the aim of obtaining information on the η′ modification in cold nuclear matter.
Drupsteen, Linda; Groeneweg, Jop; Zwetsloot, Gerard I J M
2013-01-01
Many incidents have occurred because organisations have failed to learn the lessons of the past. This means that there is room for improvement in the way organisations analyse incidents, generate measures to remedy identified weaknesses and prevent reoccurrence: the learning-from-incidents process. To improve that process, it is necessary to gain insight into the steps of this process and to identify factors that hinder learning (bottlenecks). This paper presents a model that enables organisations to analyse the steps in a learning-from-incidents process and to identify the bottlenecks. The study describes how this model is used in a survey and in three exploratory case studies in The Netherlands. The results show that there is limited use of learning potential, especially in the evaluation stage. To improve learning, an approach that considers all steps is necessary.
Predictive Structure-Based Toxicology Approaches To Assess the Androgenic Potential of Chemicals.
Trisciuzzi, Daniela; Alberga, Domenico; Mansouri, Kamel; Judson, Richard; Novellino, Ettore; Mangiatordi, Giuseppe Felice; Nicolotti, Orazio
2017-11-27
We present a practical and easy-to-run in silico workflow exploiting a structure-based strategy, making use of docking simulations to derive highly predictive classification models of the androgenic potential of chemicals. Models were trained on a high-quality chemical collection comprising 1689 curated compounds made available within the CoMPARA consortium from the US Environmental Protection Agency, and were integrated with a two-step applicability domain whose implementation improved both the confidence in prediction and the statistics by reducing the number of false negatives. Among the nine androgen receptor X-ray solved structures, the crystal 2PNU (entry code from the Protein Data Bank) was associated with the best-performing structure-based classification model. Three validation sets, each comprising 2590 compounds extracted from the DUD-E collection, were used to challenge model performance and the effectiveness of the applicability domain implementation. Next, the 2PNU model was applied to screen and prioritize two collections of chemicals. The first is a small pool of 12 representative androgenic compounds that were accurately classified, with a sound rationale at the molecular level. The second is a large external blind set of 55450 chemicals with potential for human exposure. We show how the use of molecular docking provides highly interpretable models and can represent a real-life option as an alternative nontesting method for predictive toxicology.
NASA Astrophysics Data System (ADS)
Sokolović, I.; Mali, P.; Odavić, J.; Radošević, S.; Medvedeva, S. Yu.; Botha, A. E.; Shukrinov, Yu. M.; Tekić, J.
2017-08-01
The devil's staircase structure arising from the complete mode locking of an entirely nonchaotic system, the overdamped dc+ac driven Frenkel-Kontorova model with deformable substrate potential, was observed. Even though no chaos was found, a hierarchical ordering of the Shapiro steps was made possible through the use of a previously introduced continued fraction formula. The absence of chaos, deduced here from Lyapunov exponent analyses, can be attributed to the overdamped character and the Middleton no-passing rule. A comparative analysis of a one-dimensional stack of Josephson junctions confirmed the disappearance of chaos with increasing dissipation. Other common dynamic features were also identified through this comparison. A detailed analysis of the amplitude dependence of the Shapiro steps revealed that only for the case of a purely sinusoidal substrate potential did the relative sizes of the steps follow a Farey sequence. For nonsinusoidal (deformed) potentials, the symmetry of the Stern-Brocot tree, depicting all members of particular Farey sequence, was seen to be increasingly broken, with certain steps being more prominent and their relative sizes not following the Farey rule.
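Since the abstract turns on the Farey ordering of step sizes, here is a short illustrative sketch of Farey sequences and the Stern-Brocot mediant rule (standard number theory, not the paper's code):

```python
from fractions import Fraction

# Farey sequence F_n via the classical next-term recurrence; the mediant of
# two Farey neighbours is the first fraction to appear between them as n grows.
def farey(n):
    a, b, c, d = 0, 1, 1, n
    seq = [Fraction(a, b)]
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append(Fraction(a, b))
    return seq

def mediant(x, y):
    return Fraction(x.numerator + y.numerator, x.denominator + y.denominator)

print(farey(5))                                   # 0, 1/5, 1/4, ..., 4/5, 1
print(mediant(Fraction(1, 3), Fraction(1, 2)))    # 2/5 sits between them
```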
Particle simulation of Coulomb collisions: Comparing the methods of Takizuka and Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Chiaming; Lin, Tungyou; Caflisch, Russel
2008-04-20
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other by Nanbu in 1997. We perform deterministic and statistical error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time-step errors.
Branching Patterns and Stepped Leaders in an Electric-Circuit Model for Creeping Discharge
NASA Astrophysics Data System (ADS)
Sakaguchi, Hidetsugu; Kourkouss, Sahim M.
2010-06-01
We construct a two-dimensional electric-circuit model for creeping discharge. Two types of discharge, surface corona and surface leader, are modeled by a two-step function of conductance. Branched patterns of surface leaders surrounded by surface corona appear in numerical simulation. The fractal dimension of the branched discharge patterns is calculated as the voltage and capacitance are changed. We find that surface leaders often grow stepwise in time, as is observed in the stepped leaders of lightning.
Fluid transport properties by equilibrium molecular dynamics. I. Methodology at extreme fluid states
NASA Astrophysics Data System (ADS)
Dysthe, D. K.; Fuchs, A. H.; Rousseau, B.
1999-02-01
The Green-Kubo formalism for evaluating transport coefficients by molecular dynamics has been applied to flexible, multicenter models of linear and branched alkanes in the gas phase and in the liquid phase from ambient conditions to close to the triple point. The effects of integration time step, potential cutoff and system size have been studied and shown to be small compared to the computational precision except for diffusion in gaseous n-butane. The RATTLE algorithm is shown to give accurate transport coefficients for time steps up to a limit of 8 fs. The different relaxation mechanisms in the fluids have been studied and it is shown that the longest relaxation time of the system governs the statistical precision of the results. By measuring the longest relaxation time of a system one can obtain a reliable error estimate from a single trajectory. The accuracy of the Green-Kubo method is shown to be as good as the precision for all states and models used in this study even when the system relaxation time becomes very long. The efficiency of the method is shown to be comparable to nonequilibrium methods. The transport coefficients for two recently proposed potential models are presented, showing deviations from experiment of 0%-66%.
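A toy sketch of a Green-Kubo transport estimate: the self-diffusion coefficient from the integrated velocity autocorrelation function, with an Ornstein-Uhlenbeck (Langevin) velocity process standing in for molecular dynamics so the analytic answer kT/gamma is known:

```python
import numpy as np

# Green-Kubo sketch: D = integral of the VACF (1D here; in 3D a 1/3 factor
# applies). An OU process replaces MD velocities; the exact answer is kT/gamma.
rng = np.random.default_rng(3)
dt, n_steps, gamma, kT = 0.002, 100000, 2.0, 1.0
v = np.empty(n_steps)
v[0] = np.sqrt(kT) * rng.normal()                    # start in equilibrium
for i in range(1, n_steps):
    v[i] = v[i - 1] * (1.0 - gamma * dt) + np.sqrt(2.0 * gamma * kT * dt) * rng.normal()

max_lag = 1500                                       # several relaxation times 1/gamma
vacf = np.array([np.mean(v[: n_steps - k] * v[k:]) for k in range(max_lag)])
D = np.trapz(vacf, dx=dt)                            # numerical Green-Kubo integral
print(D, kT / gamma)                                 # should agree within noise
```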
Dholabhai, Pratik P; Aguiar, Jeffery A; Misra, Amit; Uberuaga, Blas P
2014-05-21
Due to reduced dimensions and increased interfacial content, nanocomposite oxides offer improved functionalities in a wide variety of advanced technological applications, including their potential use as radiation tolerant materials. To better understand the role of interface structures in influencing the radiation damage tolerance of oxides, we have conducted atomistic calculations to elucidate the behavior of radiation-induced point defects (vacancies and interstitials) at interface steps in a model CeO2/SrTiO3 system. We find that atomic-scale steps at the interface have substantial influence on the defect behavior, which ultimately dictate the material performance in hostile irradiation environments. Distinctive steps react dissimilarly to cation and anion defects, effectively becoming biased sinks for different types of defects. Steps also attract cation interstitials, leaving behind an excess of immobile vacancies. Further, defects introduce significant structural and chemical distortions primarily at the steps. These two factors are plausible origins for the enhanced amorphization at steps seen in our recent experiments. The present work indicates that comprehensive examination of the interaction of radiation-induced point defects with the atomic-scale topology and defect structure of heterointerfaces is essential to evaluate the radiation tolerance of nanocomposites. Finally, our results have implications for other applications, such as fast ion conduction.
NASA Astrophysics Data System (ADS)
Capocchiano, F.; Ravanelli, R.; Crespi, M.
2017-11-01
Within the construction sector, Building Information Models (BIMs) are increasingly used thanks to the several benefits they offer in the design of new buildings and the management of existing ones. Frequently, however, BIMs are not available for already-built constructions, while range camera technology nowadays provides a cheap, intuitive and effective tool for automatically collecting the 3D geometry of indoor environments. It is thus essential to find new strategies able to perform the first step of the scan-to-BIM process by extracting the geometrical information contained in the 3D models that are so easily collected with range cameras. In this work, a new algorithm to extract planimetries from 3D models of rooms acquired by a range camera is therefore presented. The algorithm was tested on two rooms, characterized by different shapes and dimensions, whose 3D models were captured with the Occipital Structure Sensor™. The preliminary results are promising: the developed algorithm models the 2D shape of the investigated rooms effectively, with an accuracy in the range of 5-10 cm. It could be used by non-expert users in the first step of BIM generation, when the building geometry is reconstructed, to collect crowdsourced indoor information in the framework of Volunteered Geographic Information (VGI) generation for BIMs.
A model-reduction approach to the micromechanical analysis of polycrystalline materials
NASA Astrophysics Data System (ADS)
Michel, Jean-Claude; Suquet, Pierre
2016-03-01
The present study is devoted to the extension to polycrystals of a model-reduction technique introduced by the authors, called the nonuniform transformation field analysis (NTFA). This new reduced model is obtained in two steps. First the local fields of internal variables are decomposed on a reduced basis of modes, as in the NTFA. Second the dissipation potential of the phases is replaced by its tangent second-order (TSO) expansion. The reduced evolution equations of the model can be expressed entirely in terms of quantities which can be pre-computed once and for all. Roughly speaking, these pre-computed quantities depend only on the average and fluctuations per phase of the modes and of the associated stress fields. The accuracy of the new NTFA-TSO model is assessed by comparison with full-field simulations on two specific applications, creep of polycrystalline ice and the response of polycrystalline copper to a cyclic tension-compression test. The new reduced model is faster than the full-field computations by two orders of magnitude in both examples.
The Automated Geospatial Watershed Assessment (AGWA) Urban tool provides a step-by-step process to model subdivisions using the KINEROS2 model, with and without Green Infrastructure (GI) practices. AGWA utilizes the Kinematic Runoff and Erosion (KINEROS2) model, an event driven, ...
Photovoltaic central station step and touch potential considerations in grounding system design
NASA Technical Reports Server (NTRS)
Engmann, G.
1983-01-01
The probability of hazardous step and touch potentials is an important consideration in central station grounding system design. Steam turbine generating station grounding system design is based on accepted industry practices and there is extensive in-service experience with these grounding systems. A photovoltaic (PV) central station is a relatively new concept and there is limited experience with PV station grounding systems. The operation and physical configuration of a PV central station is very different from a steam electric station. A PV station bears some similarity to a substation and the PV station step and touch potentials might be addressed as they are in substation design. However, the PV central station is a generating station and it is appropriate to examine the effect that the differences and similarities of the two types of generating stations have on step and touch potential considerations.
Potential-field sounding using Euler's homogeneity equation and Zidarov bubbling
Cordell, Lindrith
1994-01-01
Potential-field (gravity) data are transformed into a physical-property (density) distribution in a lower half-space, constrained solely by assumed upper bounds on physical-property contrast and data error. A two-step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler's homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume-density product, constrained to an upper density bound, by 'bubbling,' which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential-field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.
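A minimal sketch of the first step, Euler's homogeneity equation solved in a least-squares sense for a synthetic 2D gravity profile over a point mass (structural index N = 2); as an assumption, both field gradients are taken from the analytic forward model, whereas in practice the vertical gradient would come from upward continuation:

```python
import numpy as np

# Euler deconvolution: x0*gx + z0*gz + N*B = x*gx + N*g (observation height 0),
# solved for source position (x0, z0) and base level B by least squares.
x = np.linspace(0.0, 100.0, 201)
x0, z0 = 50.0, 10.0                           # true source position
r2 = (x - x0) ** 2 + z0 ** 2
g = z0 / r2 ** 1.5                            # vertical gravity (units dropped)
gx = -3.0 * z0 * (x - x0) / r2 ** 2.5         # horizontal gradient
gz = -1.0 / r2 ** 1.5 + 3.0 * z0 ** 2 / r2 ** 2.5   # vertical gradient at z = 0

N = 2.0                                       # structural index of a point mass
A = np.column_stack([gx, gz, N * np.ones_like(x)])
b = x * gx + N * g
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print(sol)                                    # ~ [50, 10, 0]: x0, depth, base level
```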
Toward Scientific Numerical Modeling
NASA Technical Reports Server (NTRS)
Kleb, Bil
2007-01-01
Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.
Sebold, Miriam; Schad, Daniel J; Nebe, Stephan; Garbusow, Maria; Jünger, Elisabeth; Kroemer, Nils B; Kathmann, Norbert; Zimmermann, Ulrich S; Smolka, Michael N; Rapp, Michael A; Heinz, Andreas; Huys, Quentin J M
2016-07-01
Behavioral choice can be characterized along two axes. One axis distinguishes reflexive, model-free systems that slowly accumulate values through experience and a model-based system that uses knowledge to reason prospectively. The second axis distinguishes Pavlovian valuation of stimuli from instrumental valuation of actions or stimulus-action pairs. This results in four values and many possible interactions between them, with important consequences for accounts of individual variation. We here explored whether individual variation along one axis was related to individual variation along the other. Specifically, we asked whether individuals' balance between model-based and model-free learning was related to their tendency to show Pavlovian interferences with instrumental decisions. In two independent samples with a total of 243 participants, Pavlovian-instrumental transfer effects were negatively correlated with the strength of model-based reasoning in a two-step task. This suggests a potential common underlying substrate predisposing individuals to both have strong Pavlovian interference and be less model-based and provides a framework within which to interpret the observation of both effects in addiction.
Volume Diffusion Growth Kinetics and Step Geometry in Crystal Growth
NASA Technical Reports Server (NTRS)
Mazuruk, Konstantin; Ramachandran, Narayanan
1998-01-01
The role of step geometry in the two-dimensional stationary volume diffusion process used in crystal growth kinetics models is investigated. Three different interface shapes are used in this comparative study: a) a planar interface, b) an interface formed by a train of equidistant hemispherical bumps, and c) a train of right-angled steps. The ratio of the supersaturation to the diffusive flux at the step position is used as a control parameter. The value of this parameter can vary by as much as 50% for the different geometries. An approximate analytical formula is derived for the right-angled step geometry. In addition to kinetic models, this formula can be utilized in macrostep growth models. Finally, numerical modeling of the diffusive and convective transport for equidistant steps is conducted. In particular, the role of fluid flow resulting from the advancement of steps and its contribution to the transport of species to the steps is investigated.
Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie
2013-06-04
Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
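A minimal sketch of the first (screening) step, the method of elementary effects, on a placeholder response function; the grid resolution, trajectory count and the model itself are illustrative assumptions:

```python
import numpy as np

# Method of elementary effects (Morris screening): along random one-at-a-time
# trajectories, record the scaled response change per parameter; the mean
# absolute effect mu* ranks parameters for the second, costlier analysis step.
def morris_mu_star(f, k, r=20, levels=4, seed=0):
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.integers(0, levels - 1, size=k) / (levels - 1)  # grid start
        y = f(x)
        for i in rng.permutation(k):                 # one-at-a-time moves
            x2 = x.copy()
            x2[i] = x[i] + delta if x[i] + delta <= 1 else x[i] - delta
            y2 = f(x2)
            ee[t, i] = abs(y2 - y) / delta
            x, y = x2, y2
    return ee.mean(axis=0)

def model(x):                                        # placeholder response
    return 5 * x[0] + 3 * x[1] ** 2 + 0.01 * x[2:].sum()

print(np.round(morris_mu_star(model, k=8), 3))       # large mu* -> keep for step 2
```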
NASA Astrophysics Data System (ADS)
Mirus, B. B.; Baum, R. L.; Stark, B.; Smith, J. B.; Michel, A.
2015-12-01
Previous USGS research on landslide potential in hillside areas and coastal bluffs around Puget Sound, WA, has identified rainfall thresholds and antecedent moisture conditions that correlate with heightened probability of shallow landslides. However, physically based assessments of temporal and spatial variability in landslide potential require improved quantitative characterization of the hydrologic controls on landslide initiation in heterogeneous geologic materials. Here we present preliminary steps towards integrating monitoring of hydrologic response with physically based numerical modeling to inform the development of a landslide warning system for a railway corridor along the eastern shore of Puget Sound. We instrumented two sites along the steep coastal bluffs - one active landslide and one currently stable slope with the potential for failure - to monitor rainfall, soil-moisture, and pore-pressure dynamics in near-real time. We applied a distributed model of variably saturated subsurface flow for each site, with heterogeneous hydraulic-property distributions based on our detailed site characterization of the surficial colluvium and the underlying glacial-lacustrine deposits that form the bluffs. We calibrated the model with observed volumetric water content and matric potential time series, then used simulated pore pressures from the calibrated model to calculate the suction stress and the corresponding distribution of the factor of safety against landsliding with the infinite slope approximation. Although the utility of the model is limited by uncertainty in the deeper groundwater flow system, the continuous simulation of near-surface hydrologic response can help to quantify the temporal variations in the potential for shallow slope failures at the two sites. Thus the integration of near-real time monitoring and physically based modeling contributes a useful tool towards mitigating hazards along the Puget Sound railway corridor.
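The infinite-slope factor-of-safety calculation mentioned above can be sketched as follows; this is the classical formula with suction entering through a negative pore pressure, and all soil parameters are hypothetical illustration values, not the calibrated Puget Sound properties.

```python
import numpy as np

def infinite_slope_fs(c_eff, phi_deg, gamma, z, beta_deg, psi, gamma_w=9.81):
    """Classical infinite-slope factor of safety.

    c_eff    effective cohesion (kPa)
    phi_deg  effective friction angle (degrees)
    gamma    unit weight of soil (kN/m^3)
    z        vertical depth of the potential slip surface (m)
    beta_deg slope angle (degrees)
    psi      pressure head at the slip surface (m); negative when unsaturated,
             so suction (negative pore pressure) raises FS.
    """
    beta = np.radians(beta_deg)
    phi = np.radians(phi_deg)
    u = gamma_w * psi                              # pore-water pressure (kPa)
    sigma_n = gamma * z * np.cos(beta) ** 2        # normal stress on slip plane
    tau = gamma * z * np.sin(beta) * np.cos(beta)  # driving shear stress
    return (c_eff + (sigma_n - u) * np.tan(phi)) / tau

# Hypothetical colluvium values, drying -> wetting -> positive pressure:
for psi in (-1.0, -0.2, 0.1):
    print(psi, round(infinite_slope_fs(4.0, 33.0, 19.0, 2.0, 40.0, psi), 2))
```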
Improved Ionospheric Electrodynamic Models and Application to Calculating Joule Heating Rates
NASA Technical Reports Server (NTRS)
Weimer, D. R.
2004-01-01
Improved techniques have been developed for empirical modeling of the high-latitude electric potentials and magnetic field-aligned currents (FAC) as a function of the solar wind parameters. The FAC model is constructed using scalar magnetic Euler potentials, and functions as a twin to the electric potential model. The improved models have more accurate field values as well as more accurate boundary locations. Non-linear saturation effects in the solar wind-magnetosphere coupling are also better reproduced. The models are constructed using a hybrid technique, which has spherical harmonic functions only within a small area at the pole. At lower latitudes the potentials are constructed from multiple Fourier series functions of longitude, at discrete latitudinal steps. It is shown that the two models can be used together in order to calculate the total Poynting flux and Joule heating in the ionosphere. An additional model of the ionospheric conductivity is not required in order to obtain the ionospheric currents and Joule heating, as the conductivity variations as a function of the solar inclination are implicitly contained within the FAC model's data. The models' outputs are shown for various input conditions, as well as compared with satellite measurements. The calculations of the total Joule heating are compared with results obtained by the inversion of ground-based magnetometer measurements. Like their predecessors, these empirical models should continue to be useful research and forecasting tools.
Structure of turbulent non-premixed flames modeled with two-step chemistry
NASA Technical Reports Server (NTRS)
Chen, J. H.; Mahalingam, S.; Puri, I. K.; Vervisch, L.
1992-01-01
Direct numerical simulations of turbulent diffusion flames modeled with finite-rate, two-step chemistry, A + B yields I, A + I yields P, were carried out. A detailed analysis of the turbulent flame structure reveals the complex nature of the penetration of various reactive species across two reaction zones in mixture fraction space. Due to this two-zone structure, these flames were found to be robust, resisting extinction over the parameter ranges investigated. As in single-step computations, the mixture fraction dissipation rate and the mixture fraction were found to be statistically correlated. Simulations involving unequal molecular diffusivities suggest that the small-scale mixing process and, hence, the turbulent flame structure are sensitive to the Schmidt number.
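A zero-dimensional analogue of the two-step scheme helps show how the intermediate I creates the two-zone structure: it accumulates through the first reaction and is consumed by the second. A minimal sketch, assuming illustrative rate constants rather than the DNS parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Homogeneous (well-mixed) analogue of A + B -> I, A + I -> P with
# finite single-valued rate constants k1, k2 (illustrative values only).
k1, k2 = 5.0, 1.0

def rhs(t, y):
    a, b, i, p = y
    r1 = k1 * a * b   # A + B -> I
    r2 = k2 * a * i   # A + I -> P
    return [-r1 - r2, -r1, r1 - r2, r2]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0, 0.0, 0.0], dense_output=True)
for ti in np.linspace(0, 10, 6):
    a, b, i, p = sol.sol(ti)
    print(f"t={ti:4.1f}  A={a:.3f}  B={b:.3f}  I={i:.3f}  P={p:.3f}")
```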
Analytical model of tilted driver–pickup coils for eddy current nondestructive evaluation
NASA Astrophysics Data System (ADS)
Cao, Bing-Hua; Li, Chao; Fan, Meng-Bao; Ye, Bo; Tian, Gui-Yun
2018-03-01
A driver-pickup probe possesses better sensitivity and flexibility due to the individual optimization of each coil, and is frequently employed in eddy current (EC) array probes. In this work, a tilted non-coaxial driver-pickup probe above a multilayered conducting plate is analytically modeled with spatial transformation for eddy current nondestructive evaluation. Basically, the core of the formulation is to obtain the projection of the magnetic vector potential (MVP) from the driver coil onto the vector along the tilted pickup coil, which is divided into two key steps. The first step is to project the MVP along the pickup coil onto a horizontal plane, and the second is to build the relationship between the projected MVP and the MVP along the driver coil. Afterwards, an analytical model for the case of a layered plate is established with the reflection and transmission theory of electromagnetic fields. The calculated values from the resulting model indicate good agreement with those from the finite element model (FEM) and experiments, which validates the developed analytical model. Project supported by the National Natural Science Foundation of China (Grant Nos. 61701500, 51677187, and 51465024).
A New Insight into the Mechanism of NADH Model Oxidation by Metal Ions in Non-Alkaline Media.
Yang, Jin-Dong; Chen, Bao-Long; Zhu, Xiao-Qing
2018-06-11
It has long been controversial whether the oxidations of NADH and its models by metal ions in non-alkaline media follow a three-step (e-H+-e) or two-step (e-H•) mechanism; the latter has been accepted by the majority of researchers. In this work, 1-benzyl-1,4-dihydronicotinamide (BNAH) and 1-phenyl-1,4-dihydronicotinamide (PNAH) are used as NADH models, and the ferrocenium ion (Fc+) as an electron acceptor. The kinetics of the oxidations of the NADH models by Fc+ in pure acetonitrile were monitored using UV-Vis absorption, and a quadratic relationship between kobs and the concentrations of the NADH models was found for the first time. The rate expression developed for the reactions according to the three-step mechanism is quite consistent with the quadratic curves. The rate constants, thermodynamic driving forces and KIEs of each elementary step of the reactions were estimated. All the results supported the three-step mechanism. The intrinsic kinetic barriers of the proton transfer from BNAH+• to BNAH and of the hydrogen atom transfer from BNAH+• to BNAH+• were estimated; the former is 11.8 kcal/mol, and the latter is larger than 24.3 kcal/mol. It is the large intrinsic kinetic barrier of the hydrogen atom transfer that makes the reactions follow the three-step rather than the two-step mechanism. Further investigation of the factors affecting the intrinsic kinetic barrier of chemical reactions indicated that the large intrinsic kinetic barrier of the hydrogen atom transfer originates from the repulsion of positive charges between BNAH+• and BNAH+•. The greatest contribution of this work is the discovery of the quadratic dependence of kobs on the concentrations of the NADH models, which is inconsistent with the conventional "two-step mechanism" viewpoint on the oxidations of NADH and its models by metal ions in non-alkaline media.
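The diagnostic observation here is purely kinetic: a quadratic, rather than linear, dependence of kobs on substrate concentration. A small numpy sketch of that model comparison, using invented data points (the paper's measurements are not reproduced here):

```python
import numpy as np

# Illustrative data only: pseudo-first-order rate constants k_obs at several
# BNAH concentrations.
conc = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])     # mM
kobs = np.array([0.8, 2.9, 6.4, 11.2, 17.4, 25.1])  # s^-1

lin = np.polyfit(conc, kobs, 1)
quad = np.polyfit(conc, kobs, 2)

def rss(coeffs):
    return float(np.sum((np.polyval(coeffs, conc) - kobs) ** 2))

# A clearly smaller residual for the quadratic fit is the kind of evidence
# used to argue for the three-step (e-H+-e) mechanism.
print("linear RSS:   ", rss(lin))
print("quadratic RSS:", rss(quad))
```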
NASA Technical Reports Server (NTRS)
Hersh, Alan S.; Tam, Christopher
2009-01-01
Two significant advances have been made in the application of computational aeroacoustics methodology to acoustic liner technology. The first is that temperature effects for discrete sound are not the same as for broadband noise. For discrete sound, the normalized resistance appears to be insensitive to temperature except at high SPL; the reactance, however, is significantly lower in absolute value at high temperature. The second is the numerical investigation of the acoustic performance of a liner by direct numerical simulation. Liner impedance is affected by the non-uniformity of the incident sound waves, which identifies the importance of the pressure gradient. Preliminary-design one- and two-dimensional impedance models were developed to design sound-absorbing liners in the presence of intense sound and grazing flow. The two-dimensional model offers the potential to empirically determine the incident sound pressure on the face-plate at a distance from the resonator orifices. This represents an important initial step in improving our understanding of how to effectively use the Dean two-microphone impedance measurement method.
Helsloot, Kaat; Walraevens, Mieke; Besauw, Saskia Van; Van Parys, An-Sofie; Devos, Hanne; Holsbeeck, Ann Van; Roelens, Kristien
2017-05-01
OBJECTIVE: to develop a set of quality indicators for postnatal care after discharge from the hospital, using a systematic approach. DESIGN: key elements of qualitative postnatal care were defined by performing a systematic review, and the literature was searched for potential indicators (step 1). The potential indicators were evaluated against five criteria (validity, reliability, sensitivity, feasibility and acceptability) and by making use of the 'Appraisal of Guidelines for Research and Evaluation', the AIRE instrument (step 2). In a modified Delphi survey, the quality indicators were presented to a panel of experts in the field of postnatal care using an online tool (step 3). The final results led to a Flemish model of postnatal care (step 4). SETTING: Flanders, Belgium. PARTICIPANTS: health care professionals, representatives of health care organisations and policy makers with expertise in the field of postnatal care. FINDINGS: after analysis, 57 research articles, 10 reviews, one book and eight other documents resulted in 150 potential quality indicators in seven critical care domains. Quality assessment of the indicators resulted in 58 concept quality indicators which were presented to an expert panel of health care professionals. After two Delphi rounds, 30 quality indicators (six structure, 17 process, and seven outcome indicators) were found appropriate to monitor and improve the quality of postnatal care after discharge from the hospital. KEY CONCLUSIONS AND IMPLICATIONS FOR CLINICAL PRACTICE: the quality indicators resulted in a Flemish model of qualitative postnatal care that was implemented by health authorities as a minimum standard in the context of shortened length of stay. Postnatal care should be adjusted to a flexible length of stay and start in pregnancy with an individualised care plan that follows mother and newborn throughout pregnancy, childbirth and the postnatal period. Criteria for discharge and local protocols on the organisation and content of care are essential to facilitate continuity of care.
Bernath, Katrin; Roschewitz, Anna
2008-11-01
The extension of contingent valuation models with an attitude-behavior based framework has been proposed in order to improve the descriptive and predictive ability of the models. This study examines the potential of the theory of planned behavior to explain willingness to pay (WTP) in a contingent valuation survey of the recreational benefits of the Zurich city forests. Two aspects of WTP responses, protest votes and bid levels, were analyzed separately. In both steps, models with and without the psychological predictors proposed by the theory of planned behavior were compared. Whereas the inclusion of the psychological predictors significantly improved explanations of protest votes, their ability to improve the performance of the model explaining bid levels was limited. The results indicate that the interpretation of bid levels as behavioral intention may not be appropriate and that the potential of the theory of planned behavior to improve contingent valuation models depends on which aspect of WTP responses is examined.
Mahato, Niladri K; Montuelle, Stephane; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian
2016-05-18
Single or biplanar video radiography and Roentgen stereophotogrammetry (RSA) techniques used for the assessment of in-vivo joint kinematics involve the application of ionizing radiation, which is a limitation for clinical research involving human subjects. To overcome this limitation, our long-term goal is to develop a magnetic resonance imaging (MRI)-only, three-dimensional (3-D) modeling technique that permits dynamic imaging of joint motion in humans. Here, we present our initial findings, as well as reliability data, for an MRI-only protocol and modeling technique. We developed a morphology-based motion-analysis technique that uses MRI of custom-built solid-body objects to animate and quantify experimental displacements between them. The technique involved four major steps. First, the imaging volume was calibrated using a custom-built grid. Second, 3-D models were segmented from axial scans of two custom-built solid-body cubes. Third, these cubes were positioned at pre-determined relative displacements (translation and rotation) in the magnetic resonance coil and scanned with T1 and fast contrast-enhanced pulse sequences. The digital imaging and communications in medicine (DICOM) images were then processed for animation. The fourth step involved importing these processed images into animation software, where they were displayed as background scenes. In the same step, 3-D models of the cubes were imported into the animation software, where the user manipulated the models to match their outlines in the scene (rotoscoping) and registered the models into an anatomical joint system. Measurements of displacements obtained from two different rotoscoping sessions were tested for reliability using coefficients of variation (CV), intraclass correlation coefficients (ICC), Bland-Altman plots, and Limits of Agreement analyses. Between-session reliability was high for both the T1 and the contrast-enhanced sequences. Specifically, the average CVs for translation were 4.31% and 5.26% for the two pulse sequences, respectively, while the ICCs were 0.99 for both. For rotation measures, the CVs were 3.19% and 2.44% for the two pulse sequences, with ICCs of 0.98 and 0.97, respectively. A novel biplanar imaging approach also yielded high reliability, with mean CVs of 2.66% and 3.39% for translation in the x- and z-planes, respectively, and ICCs of 0.97 in both planes. This work provides basic proof-of-concept for a reliable, marker-less, non-ionizing-radiation-based quasi-dynamic motion quantification technique that can potentially be developed into a tool for real-time joint kinematics analysis.
Quantization of charged fields in the presence of critical potential steps
NASA Astrophysics Data System (ADS)
Gavrilov, S. P.; Gitman, D. M.
2016-02-01
QED with strong external backgrounds that can create particles from the vacuum is well developed for the so-called t-electric potential steps, which are time-dependent external electric fields that are switched on and off at some time instants. However, there exist many physically interesting situations where external backgrounds do not switch off at the time infinity. For example, these are time-independent nonuniform electric fields that are concentrated in restricted space areas. The latter backgrounds represent a kind of spatial x-electric potential steps for charged particles. They can also create particles from the vacuum, the Klein paradox being closely related to this process. Approaches elaborated for treating quantum effects in the t-electric potential steps are not directly applicable to the x-electric potential steps, and their generalization for x-electric potential steps was not sufficiently developed. We believe that the present work represents a consistent solution of the latter problem. We have considered a canonical quantization of the Dirac and scalar fields with an x-electric potential step and have found in- and out-creation and annihilation operators that allow one to have a particle interpretation of the physical system under consideration. To identify the in- and out-operators we have performed a detailed mathematical and physical analysis of solutions of the relativistic wave equations with an x-electric potential step, with a subsequent QFT analysis of the correctness of such an identification. We elaborated a nonperturbative (in the external field) technique that allows one to calculate all characteristics of zero-order processes, such as, for example, scattering, reflection, and electron-positron pair creation, without radiation corrections, and also to calculate Feynman diagrams that describe all characteristics of processes with interaction between the in-, out-particles and photons. These diagrams have formally the usual form, but contain special propagators. Expressions for these propagators in terms of in- and out-solutions are presented. We apply the elaborated approach to two popular exactly solvable cases of x-electric potential steps, namely, to the Sauter potential and to the Klein step.
Three-dimensional modelling of slope stability using the Local Factor of Safety concept
NASA Astrophysics Data System (ADS)
Moradi, Shirin; Huisman, Sander; Beck, Martin; Vereecken, Harry; Class, Holger
2017-04-01
Slope stability is governed by coupled hydrological and mechanical processes. The slope stability depends on the effective stress, which in turn depends on the weight of the soil and the matrix potential. Therefore, changes in water content and matrix potential associated with infiltration will affect slope stability. Most available models describing these coupled hydro-mechanical processes either rely on a one- or two-dimensional representation of hydrological and mechanical properties and processes, which obviously is a strong simplification in many applications. Therefore, the aim of this work is to develop a three-dimensional hydro-mechanical model that is able to capture the effect of spatial and temporal variability of both mechanical and hydrological parameters on slope stability. For this, we rely on DuMux, which is a free and open-source simulator for flow and transport processes in porous media that facilitates coupling of different model approaches and offers flexibility for model development. We use the Richards equation to model unsaturated water flow. The simulated water content and matrix potential distribution is used to calculate the effective stress. We only consider linear elasticity and solve for statically admissible fields of stress and displacement without invoking failure or the redistribution of post-failure stress or displacement. The Local Factor of Safety concept is used to evaluate slope stability in order to overcome some of the main limitations of commonly used methods based on limit equilibrium considerations. In a first step, we compared our model implementation with a 2D benchmark model that was implemented in COMSOL Multiphysics. In a second step, we present in-silico experiments with the newly developed 3D model to show the effect of slope morphology, spatial variability in hydraulic and mechanical material properties, and spatially variable soil depth on simulated slope stability. It is expected that this improved physically-based three-dimensional hydro-mechanical model is able to provide more reliable slope instability predictions in more complex situations.
Deliberative Rhetoric as a Step in Organizational Crisis Management: Exxon as a Case Study.
ERIC Educational Resources Information Center
Johnson, Darrin; Sellnow, Timothy
1995-01-01
Explains that when organizations face crises, their rhetorical response often follows two steps: assessment of causes leading to the crisis, and a search for potential solutions and preventive measures for the future. States that epideictic rhetoric designed to sustain or regain the organization's reputation is effective in both steps. Examines…
Political Regime and Human Capital: A Cross-Country Analysis
ERIC Educational Resources Information Center
Klomp, Jeroen; de Haan, Jakob
2013-01-01
We examine the relationship between different dimensions of the political regime in place and human capital using a two-step structural equation model. In the first step, we employ factor analysis on 16 human capital indicators to construct two new human capital measures (basic and advanced human capital). In the second step, we estimate the…
Assessment of Managed Aquifer Recharge Site Suitability Using a GIS and Modeling.
Russo, Tess A; Fisher, Andrew T; Lockwood, Brian S
2015-01-01
We completed a two-step regional analysis of a coastal groundwater basin to (1) assess regional suitability for managed aquifer recharge (MAR), and (2) quantify the relative impact of MAR activities on groundwater levels and sea water intrusion. The first step comprised an analysis of surface and subsurface hydrologic properties and conditions, using a geographic information system (GIS). Surface and subsurface data coverages were compiled, georeferenced, reclassified, and integrated (including novel approaches for combining related datasets) to derive a spatial distribution of MAR suitability values. In the second step, results from the GIS analysis were used with a regional groundwater model to assess the hydrologic impact of potential MAR placement and operating scenarios. For the region evaluated in this study, the Pajaro Valley Groundwater Basin, California, GIS results suggest that about 7% (15 km²) of the basin may be highly suitable for MAR. Modeling suggests that simulated MAR projects placed near the coast help to reduce sea water intrusion more rapidly, but these projects also result in increased groundwater flows to the ocean. In contrast, projects placed farther inland result in more long-term reduction in sea water intrusion and less groundwater flowing to the ocean. This work shows how combined GIS analysis and modeling can assist with regional water supply planning, including evaluation of options for enhancing groundwater resources.
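A toy version of the first (GIS overlay) step, with hypothetical raster classes, weights, and suitability threshold, can illustrate how reclassified layers are combined into a suitability map:

```python
import numpy as np

# Toy 5x5 rasters standing in for the GIS layers; classes, weights and the
# suitability threshold are hypothetical, chosen only to show the mechanics.
rng = np.random.default_rng(1)
infiltration = rng.integers(1, 6, size=(5, 5))     # reclassified 1 (poor) .. 5 (good)
aquifer_storage = rng.integers(1, 6, size=(5, 5))
land_use = rng.integers(1, 6, size=(5, 5))

weights = {"infiltration": 0.5, "aquifer_storage": 0.3, "land_use": 0.2}

suitability = (weights["infiltration"] * infiltration
               + weights["aquifer_storage"] * aquifer_storage
               + weights["land_use"] * land_use)

highly_suitable = suitability >= 4.0
print(suitability.round(1))
print(f"highly suitable: {highly_suitable.mean():.0%} of cells")
```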
Particle Simulation of Coulomb Collisions: Comparing the Methods of Takizuka & Abe and Nanbu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C; Lin, T; Caflisch, R
2007-05-22
The interactions of charged particles in a plasma are governed by long-range Coulomb collisions. We compare two widely used Monte Carlo models for Coulomb collisions: one developed by Takizuka and Abe in 1977, the other developed by Nanbu in 1997. We perform deterministic and stochastic error analysis with respect to particle number and time step. The two models produce similar stochastic errors, but Nanbu's model gives smaller time step errors. Error comparisons between these two methods are presented.
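A sketch of the Takizuka-Abe style binary collision step may help fix ideas: particles in a cell are randomly paired, and each pair's relative velocity is rotated by a random angle whose variance grows with the time step. The physical prefactor of the variance (involving density, ln Λ, reduced mass, and |u|³) is lumped into one schematic parameter here, so this is the algorithmic skeleton only:

```python
import numpy as np

rng = np.random.default_rng(2)

def ta_collision_step(v, var_delta):
    """One Takizuka-Abe-style binary collision step (equal masses).

    v          (N, 3) velocities of particles grouped in one spatial cell
    var_delta  sets the variance of delta = tan(theta/2); in the original
               method this is proportional to n * ln(Lambda) * dt with
               physical prefactors -- lumped here into one parameter.
    """
    n = len(v) - len(v) % 2
    for a, b in rng.permutation(n).reshape(-1, 2):  # random pairing in the cell
        u = v[a] - v[b]
        umag = np.linalg.norm(u)
        if umag == 0.0:
            continue
        uperp = np.hypot(u[0], u[1])
        delta = rng.normal(0.0, np.sqrt(var_delta / umag ** 3))
        sin_t = 2.0 * delta / (1.0 + delta ** 2)          # sin(theta)
        one_m_cos = 2.0 * delta ** 2 / (1.0 + delta ** 2)  # 1 - cos(theta)
        phi = rng.uniform(0.0, 2.0 * np.pi)
        if uperp > 0.0:   # standard rotation of the relative velocity
            du = np.array([
                (u[0] / uperp) * u[2] * sin_t * np.cos(phi)
                - (u[1] / uperp) * umag * sin_t * np.sin(phi) - u[0] * one_m_cos,
                (u[1] / uperp) * u[2] * sin_t * np.cos(phi)
                + (u[0] / uperp) * umag * sin_t * np.sin(phi) - u[1] * one_m_cos,
                -uperp * sin_t * np.cos(phi) - u[2] * one_m_cos,
            ])
        else:             # u along z: any perpendicular direction works
            du = umag * np.array([sin_t * np.cos(phi),
                                  sin_t * np.sin(phi), -one_m_cos])
        v[a] += 0.5 * du  # momentum- and energy-conserving split
        v[b] -= 0.5 * du

v = rng.normal(size=(1000, 3))
ta_collision_step(v, var_delta=0.01)
print("momentum drift:", np.abs(v.mean(axis=0)).max())
```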
NASA Astrophysics Data System (ADS)
Scharfenberg, Franz-Josef; Bogner, Franz X.
2011-08-01
Emphasis on improving higher-level biology education continues. A new two-step approach to the experimental phases within an outreach gene technology lab, derived from cognitive load theory, is presented. We compared our approach with the conventional one-step mode using a quasi-experimental design. The difference consisted of additional focused discussions combined with students writing down their ideas (step one) prior to starting any experimental procedure (step two). We monitored students' activities during the experimental phases by continuously videotaping 20 work groups within each approach (N = 131). Subsequent classification of students' activities yielded 10 categories (with good intra- and inter-observer reliability scores). Based on the students' individual time budgets, we evaluated students' roles during experimentation from their prevalent activities (by independently using two cluster analysis methods). Independently of the approach, two common clusters emerged, which we labeled 'all-rounders' and 'passive students', and two clusters specific to each approach: 'observers' and 'high-experimenters' were identified only within the one-step approach, whereas under the two-step conditions 'managers' and 'scribes' were identified. Potential changes in group-leadership style during experimentation are discussed, and conclusions for optimizing science teaching are drawn.
de Kam, Digna; Roelofs, Jolanda M B; Geurts, Alexander C H; Weerdesteyn, Vivian
2018-01-01
To determine the predictive value of leg and trunk inclination angles at stepping-foot contact for the capacity to recover from a backward balance perturbation with a single step in people after stroke. Twenty-four chronic stroke survivors and 21 healthy controls were included in a cross-sectional study. We studied reactive stepping responses by subjecting participants to multidirectional stance perturbations at different intensities on a translating platform; in this paper we focus on backward perturbations. Participants were instructed to recover from the perturbations with at most one step. A trial was classified as 'success' if balance was restored according to this instruction. We recorded full-body kinematics and computed: 1) body configuration parameters at first stepping-foot contact (leg and trunk inclination angles) and 2) spatiotemporal step parameters (step onset, step length, step duration and step velocity). We identified predictors of balance recovery capacity using stepwise logistic regression; perturbation intensity was also included as a predictor. The model with spatiotemporal parameters (perturbation intensity, step length and step duration) correctly classified 85% of the trials as success or fail (Nagelkerke R² = 0.61). In the body configuration model (Nagelkerke R² = 0.71), perturbation intensity and leg and trunk angles correctly classified the outcome of 86% of the recovery attempts. The goodness of fit was significantly higher for the body configuration model than for the model with spatiotemporal variables (p<0.01). Participant group and stepping leg (paretic or non-paretic) did not significantly improve the explained variance of the final body configuration model. Body configuration at stepping-foot contact is a valid and clinically feasible indicator of backward fall risk in stroke survivors, given its potential to be derived from a single sagittal screenshot.
Integrating geological archives and climate models for the mid-Pliocene warm period.
Haywood, Alan M; Dowsett, Harry J; Dolan, Aisling M
2016-02-16
The mid-Pliocene Warm Period (mPWP) offers an opportunity to understand a warmer-than-present world and assess the predictive ability of numerical climate models. Environmental reconstruction and climate modelling are crucial for understanding the mPWP, and the synergy of these two, often disparate, fields has proven essential in confirming features of the past and in turn building confidence in projections of the future. The continual development of methodologies to better facilitate environmental synthesis and data/model comparison is essential, with recent work demonstrating that time-specific (time-slice) syntheses represent the next logical step in exploring climate change during the mPWP and realizing its potential as a test bed for understanding future climate change.
NASA Astrophysics Data System (ADS)
Amalia, E.; Moelyadi, M. A.; Ihsan, M.
2018-04-01
The flow of air past a circular cylinder at a Reynolds number of 250,000 exhibits the von Karman vortex street phenomenon, which is captured well only with a suitable turbulence model. In this study, several turbulence models available in the software ANSYS Fluent 16.0 were tested for their ability to simulate the von Karman vortex street, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the von Karman vortex street was captured successfully using the SST k-omega turbulence model. For the three-dimensional model, it was captured using the Reynolds Stress turbulence model. The time step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation: the smaller the time step size, the smoother the resulting drag coefficient curves. Smaller time step sizes also give faster computation times.
van Limburg, Maarten; Wentzel, Jobke; Sanderman, Robbert; van Gemert-Pijnen, Lisette
2015-08-13
It is acknowledged that the success and uptake of eHealth improve with the involvement of users and stakeholders, so that the technology reflects their needs. Involving stakeholders in implementation research is thus a crucial element in developing eHealth technology. Business modeling is an approach to guide implementation research for eHealth. Stakeholders are involved in business modeling by identifying relevant stakeholders, conducting value co-creation dialogs, and co-creating a business model. Because implementation activities are often underestimated as a crucial step while developing eHealth, comprehensive and applicable approaches geared toward business modeling in eHealth are scarce. In this paper, we aim to demonstrate how business modeling, with a focus on stakeholder involvement, is used to co-create an eHealth implementation, demonstrating the potential of several stakeholder-oriented analysis methods and their practical application with Infectionmanager as an example case. We divided business modeling into 4 main research steps. For stakeholder identification, we performed literature scans, expert recommendations, and snowball sampling (step 1). For stakeholder analysis, we performed a "basic stakeholder analysis", stakeholder salience, and ranking/analytic hierarchy process (step 2). For value co-creation dialogs, we performed a process analysis and stakeholder interviews based on the business model canvas (step 3). Finally, for business model generation, we combined all findings into the business model canvas (step 4). Based on the applied methods, we synthesized a step-by-step guide for business modeling with stakeholder-oriented analysis methods that we consider suitable for implementing eHealth. The step-by-step guide enables eHealth researchers to apply a systematic and multidisciplinary co-creative approach to implementing eHealth. Business modeling becomes an active part of the entire development process of eHealth and establishes an early focus on implementation, in which stakeholders help to co-create the basis necessary for satisfactory success and uptake of the eHealth technology.
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the "two-step" method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
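The stochastic-sampling half of the comparison can be illustrated in a few lines: draw perturbed cross-section sets from an assumed covariance, push each through the neutronics model, and read the output spread. Everything below (the toy k-infinity model and the covariance) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Schematic stand-in for an XSUSA-like sampling step.
mean_xs = np.array([0.0053, 0.0050, 2.43])  # [capture, fission, nu], illustrative
rel_cov = np.diag([0.03, 0.02, 0.01]) ** 2  # assumed relative covariance

def k_inf(xs):
    capture, fission, nu = xs
    return nu * fission / (capture + fission)  # toy infinite-medium k

samples = rng.multivariate_normal(np.zeros(3), rel_cov, size=2000)
k = np.array([k_inf(mean_xs * (1.0 + s)) for s in samples])
print(f"k_inf = {k.mean():.5f} +/- {k.std():.5f} "
      f"({100 * k.std() / k.mean():.2f}% rel. uncertainty)")
```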
USING LINEAR AND POLYNOMIAL MODELS TO EXAMINE THE ENVIRONMENTAL STABILITY OF VIRUSES
The article presents the development of model equations for describing the fate of viral infectivity in environmental samples. Most of the models were based upon the use of a two-step linear regression approach. The first step employs regression of log base 10 transformed viral t...
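A minimal sketch of such a two-step regression, on invented titer data: step one fits a first-order (log-linear) decay per sample, step two regresses the fitted decay rates on an environmental covariate. Variable names and numbers are hypothetical:

```python
import numpy as np

# Step 1: ordinary least squares on log10-transformed titers; the slope is
# the first-order inactivation rate (illustrative data).
days = np.array([0, 2, 4, 7, 10, 14])
titer = np.array([5.0e6, 1.6e6, 5.2e5, 9.8e4, 1.9e4, 2.1e3])  # PFU/mL

slope, intercept = np.polyfit(days, np.log10(titer), 1)
print(f"decay rate: {-slope:.3f} log10 units/day, "
      f"T90 ~ {1.0 / -slope:.1f} days")

# Step 2 (schematic): regress per-sample decay rates against an environmental
# covariate to model stability across conditions (hypothetical values).
temps = np.array([4.0, 15.0, 25.0, 37.0])
rates = np.array([0.05, 0.11, 0.24, 0.58])  # log10/day
beta = np.polyfit(temps, np.log(rates), 1)
print(f"ln(rate) = {beta[0]:.3f} * T + {beta[1]:.3f}")
```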
Jiang, Hanlun; Sheong, Fu Kit; Zhu, Lizhe; Gao, Xin; Bernauer, Julie; Huang, Xuhui
2015-07-01
Argonaute (Ago) proteins and microRNAs (miRNAs) are central components in RNA interference, which is a key cellular mechanism for sequence-specific gene silencing. Despite intensive studies, the molecular mechanisms of how Ago recognizes miRNA remain largely elusive. In this study, we propose a two-step mechanism for this molecular recognition: selective binding followed by structural re-arrangement. Our model is based on the results of a combination of Markov State Models (MSMs), large-scale protein-RNA docking, and molecular dynamics (MD) simulations. Using MSMs, we identify an open state of apo human Ago-2 in fast equilibrium with partially open and closed states. Conformations in this open state are distinguished by their largely exposed binding grooves that can geometrically accommodate miRNA, as indicated in our protein-RNA docking studies. miRNA may then selectively bind to these open conformations. Upon the initial binding, the complex may undergo further structural re-arrangement, as shown in our MD simulations, and eventually reach the stable binary complex structure. Our results provide novel insights into Ago-miRNA recognition mechanisms, and our methodology holds great potential to be widely applied in the studies of other important molecular recognition systems.
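The MSM ingredient of this pipeline reduces, at its core, to estimating a transition matrix from a discretized trajectory and reading relaxation timescales off its eigenvalues. A self-contained toy version with a synthetic three-state trajectory (standing in for closed/partially open/open):

```python
import numpy as np

rng = np.random.default_rng(4)
true_T = np.array([[0.95, 0.04, 0.01],   # synthetic ground-truth dynamics
                   [0.05, 0.90, 0.05],
                   [0.01, 0.06, 0.93]])
traj = [0]
for _ in range(50000):
    traj.append(rng.choice(3, p=true_T[traj[-1]]))

# Count transitions at a fixed lag time and row-normalize.
lag = 1
counts = np.zeros((3, 3))
for i, j in zip(traj[:-lag], traj[lag:]):
    counts[i, j] += 1
T_hat = counts / counts.sum(axis=1, keepdims=True)

# The second-largest eigenvalue gives the slowest implied timescale.
eigvals = np.sort(np.linalg.eigvals(T_hat).real)[::-1]
print("estimated T:\n", T_hat.round(3))
print("slowest implied timescale (steps):", -lag / np.log(eigvals[1]))
```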
Comparison of dynamic treatment regimes via inverse probability weighting.
Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M
2006-03-01
Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
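The four steps translate almost directly into code. The sketch below uses a synthetic cohort, sklearn for the censoring model, and a weighted Cox fit from lifelines; the column names, the deviation rule, and the weight truncation at 0.05 are illustrative choices, not the paper's specification:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 2000
df = pd.DataFrame({
    "regime": rng.integers(0, 2, n),        # step 1: two regimes of interest
    "baseline_cd4": rng.normal(350, 100, n),
    "time": rng.exponential(24, n),
    "event": rng.integers(0, 2, n),
})

# Step 2: artificially censor subjects when they deviate from their regime
# (a random deviation time here, purely to illustrate the mechanics).
deviation_time = rng.exponential(36, n)
deviated = deviation_time < df["time"]
df.loc[deviated, "time"] = deviation_time[deviated]
df.loc[deviated, "event"] = 0

# Step 3: inverse probability of remaining uncensored, from a logistic model
# on a covariate that predicts deviation and outcome.
X = df[["baseline_cd4"]].to_numpy()
p_uncens = LogisticRegression().fit(X, (~deviated).astype(int)).predict_proba(X)[:, 1]
df["ipw"] = 1.0 / np.clip(p_uncens, 0.05, None)  # truncate extreme weights

# Step 4: weighted Cox model with the regime indicator and baseline confounder.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event", weights_col="ipw", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```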
NASA Astrophysics Data System (ADS)
Wegehenkel, Martin
As a result of a new agricultural funding policy established in 1992 by the European Community, it was assumed that up to 15-20% of arable land would be set aside in the following years in the new federal states of north-eastern Germany, for example, Brandenburg. As one potential land use option, afforestation of these set-aside areas was discussed, with the aim of obtaining deciduous forests. Since the mean annual precipitation in north-eastern Germany (Brandenburg) is relatively low (480-530 mm y⁻¹), an increase in interception and evapotranspiration loss by forests compared to arable land would lead to a reduction in ground water recharge. Experimental evidence to determine the effects of such land use changes is rarely available. Therefore, there is a need for indirect methods to estimate the impact of afforestation on the water balance of catchments. In this paper, a conceptual hydrological model was verified and calibrated in two steps using data from the Stobber catchment located in Brandenburg. In the first step, model outputs such as daily evapotranspiration rates and soil water contents were verified on the basis of experimental data sets from two test locations: one test site with arable land located within the Stobber catchment, and another test site with pine forest located near the catchment. In the second step, the model was used to estimate the impact of afforestation on catchment water balance and discharge. For that purpose, the model was calibrated against daily discharge measurements for the period 1995-1997. For a simple afforestation scenario, it was assumed that the area of forest increases from 34% up to 80% of the catchment area. The impact of this change in forest cover proportion was analyzed using the calibrated model. For this afforestation scenario, the model predicts a reduction in discharge and an increase in evapotranspiration.
NASA Astrophysics Data System (ADS)
Dannberg, J.; Heister, T.; Grove, R. R.; Gassmoeller, R.; Spiegelman, M. W.; Bangerth, W.
2017-12-01
Earth's surface shows many features whose genesis can only be understood through the interplay of geodynamic and thermodynamic models. This is particularly important in the context of melt generation and transport: mantle convection determines the distribution of temperature and chemical composition, while the melting process itself is controlled by the thermodynamic relations and in turn influences the properties and the transport of melt. Here, we present our extension of the community geodynamics code ASPECT, which solves the equations of coupled magma/mantle dynamics and makes it possible to integrate different parametrizations of reactions and phase transitions: they may alternatively be implemented as simple analytical expressions, look-up tables, or computed by thermodynamics software. As ASPECT uses a variety of numerical methods and solvers, this also gives us the opportunity to compare different approaches to modelling the melting process. In particular, we will elaborate on the spatial and temporal resolution that is required to accurately model phase transitions, and show the potential of adaptive mesh refinement when applied to melt generation and transport. We will assess the advantages and disadvantages of iterating between fluid dynamics and chemical reactions derived from thermodynamic models within each time step, versus decoupling them, allowing for different time step sizes. Beyond that, we will expand on the functionality required for an interface between computational thermodynamics and fluid dynamics models from the geodynamics side. Finally, using a simple example of melting of a two-phase, two-component system, we compare different time-stepping and solver schemes in terms of accuracy and efficiency, in dependence of the time scales of fluid flow and chemical reactions relative to each other. Our software provides a framework to integrate thermodynamic models in high-resolution, 3D simulations of coupled magma/mantle dynamics, and can be used as a tool to study links between physical processes and geochemical signals in the Earth.
NASA Astrophysics Data System (ADS)
Titov, V. S.; Mikic, Z.; Torok, T.; Linker, J.
2016-12-01
Many existing models of solar flares and coronal mass ejections (CMEs) assume a key role of magnetic flux ropes in these phenomena. It is therefore important to have efficient methods for constructing flux-rope configurations consistent with the observed photospheric magnetic data and morphology of CMEs. As our new step in this direction, we propose an analytical formulation that succinctly represents the magnetic field of a thin flux rope, which has an axis of arbitrary shape and a circular cross-section with the diameter slowly varying along the axis. This representation implies also that the flux rope carries axial current I and axial flux F, so that the respective magnetic field is a curl of the sum of toroidal and poloidal vector potentials proportional to I and F, respectively. Each of the two potentials is individually expressed in terms of a modified Biot-Savart law with separate kernels, both regularized at the rope axis. We argue that the proposed representation is flexible enough to be used in MHD simulations for initializing pre-eruptive configurations in the low corona or post-eruptive configurations (interplanetary CMEs) in the heliosphere. We discuss the potential advantages of our approach, and the subsequent steps to be performed, to develop a fully operative and highly competitive method compared to existing methods. Research supported by NSF, NASA's HSR and LWS Programs, and AFOSR.
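Based on the abstract's description, the representation should have roughly the following schematic structure; the precise regularized kernels K_I, K_F and the axis parametrization are defined in the paper and are only placeholders here:

```latex
% Schematic form inferred from the abstract; C is the rope axis,
% a(l') the slowly varying cross-section radius, and K_I, K_F the
% regularized Biot-Savart kernels (finite as r -> 0).
\begin{align}
  \mathbf{B} &= \nabla\times\bigl(\mathbf{A}_I + \mathbf{A}_F\bigr),\\
  \mathbf{A}_I(\mathbf{x}) &= \frac{\mu_0 I}{4\pi}
      \oint_{\mathcal{C}} K_I(r)\,\frac{\mathrm{d}\mathbf{l}'}{a(l')},\\
  \mathbf{A}_F(\mathbf{x}) &= \frac{F}{4\pi}
      \oint_{\mathcal{C}} K_F(r)\,
      \frac{\mathrm{d}\mathbf{l}'\times\mathbf{R}}{a(l')^{3}},\qquad
  r \equiv \frac{|\mathbf{R}|}{a(l')},\;
  \mathbf{R} = \mathbf{x}-\mathbf{x}'(l').
\end{align}
```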
Atmospheric flow over two-dimensional bluff surface obstructions
NASA Technical Reports Server (NTRS)
Bitte, J.; Frost, W.
1976-01-01
The phenomenon of atmospheric flow over a two-dimensional surface obstruction, such as a building (modeled as a rectangular block, a fence, or a forward-facing step), is analyzed by three methods: (1) an inviscid free-streamline approach, (2) a turbulent boundary layer approach using an eddy viscosity turbulence model and a horizontal pressure gradient determined by the inviscid model, and (3) an approach using the full Navier-Stokes equations with three turbulence models, i.e., an eddy viscosity model, a turbulence kinetic-energy model, and a two-equation model with an additional transport equation for the turbulence length scale. A comparison of the performance of the different turbulence models is given, indicating that only the two-equation model adequately accounts for the convective character of turbulence. Turbulent flow property predictions obtained from the turbulence kinetic-energy model with prescribed length scale are only insignificantly better than those obtained from the eddy viscosity model. A parametric study includes the effects of the variation of the characteristic parameters of the assumed logarithmic approach velocity profile. For the case of the forward-facing step, it is shown that in the downstream flow region an increase of the surface roughness gives rise to higher turbulence levels in the shear layer originating from the step corner.
Digital Learning Material for Student-Directed Model Building in Molecular Biology
ERIC Educational Resources Information Center
Aegerter-Wilmsen, Tinri; Coppens, Marjolijn; Janssen, Fred; Hartog, Rob; Bisseling, Ton
2005-01-01
The building of models to explain data and make predictions constitutes an important goal in molecular biology research. To give students the opportunity to practice such model building, two digital cases had previously been developed in which students are guided to build a model step by step. In this article, the development and initial…
LaPlante, Kerry L; Rybak, Michael J; Tsuji, Brian; Lodise, Thomas P; Kaatz, Glenn W
2007-04-01
The potential for resistance development in Streptococcus pneumoniae secondary to exposure to gatifloxacin, gemifloxacin, levofloxacin, and moxifloxacin at various levels was examined at high inoculum (10^8.5 to 10^9 CFU/ml) over 96 h in an in vitro pharmacodynamic (PD) model using two fluoroquinolone-susceptible isolates. The pharmacokinetics of each drug was simulated to provide a range of free areas under the concentration-time curves (fAUC) that correlated with various fluoroquinolone doses. Potential first (parC and parE)- and second-step (gyrA and gyrB) mutations in isolates with raised MICs were identified by sequence analysis. PD models simulating fAUC/MICs of 51 and
Van Calster, B; Bobdiwala, S; Guha, S; Van Hoorde, K; Al-Memar, M; Harvey, R; Farren, J; Kirk, E; Condous, G; Sur, S; Stalder, C; Timmerman, D; Bourne, T
2016-11-01
A uniform rationalized management protocol for pregnancies of unknown location (PUL) is lacking. We developed a two-step triage protocol to select PUL at high risk of ectopic pregnancy (EP), based on serum progesterone level at presentation (step 1) and the serum human chorionic gonadotropin (hCG) ratio, defined as the ratio of hCG at 48 h to hCG at presentation (step 2). This was a cohort study of 2753 PUL (301 EP), involving a secondary analysis of prospectively and consecutively collected PUL data from two London-based university teaching hospitals. Using a chronological split we used 1449 PUL for development and 1304 for validation. We aimed to assign PUL as low risk with high confidence (high negative predictive value (NPV)) while classifying most EP as high risk (high sensitivity). The first triage step assigned PUL as low risk using a threshold of serum progesterone at presentation. The remaining PUL were triaged using a novel logistic regression risk model based on the hCG ratio and initial serum progesterone (second step), defining low risk as an estimated EP risk of < 5%. On validation, initial serum progesterone ≤ 2 nmol/L (step 1) classified 16.1% of PUL as low risk. Second-step classification with the risk model selected an additional 46.0% of all PUL as low risk. Overall, the two-step protocol classified 62.1% of PUL as low risk, with an NPV of 98.6% and a sensitivity of 92.0%. When the risk model was used in isolation (i.e. without the first step), 60.5% of PUL were classified as low risk with 99.1% NPV and 94.9% sensitivity. PUL can be classified efficiently as either high or low risk for complications using a two-step protocol involving initial progesterone and hCG levels and the hCG ratio.
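The decision flow of the two-step protocol is easy to express as a function; note that the step-2 logistic coefficients below are placeholders (the published model's coefficients are not given in the abstract), so only the triage logic is meaningful:

```python
import math

def triage_pul(progesterone_0, hcg_0, hcg_48):
    # Step 1: very low initial progesterone -> low risk, no further triage.
    if progesterone_0 <= 2.0:                # nmol/L
        return "low risk (step 1)"
    # Step 2: logistic risk model on the hCG ratio and initial progesterone.
    hcg_ratio = hcg_48 / hcg_0
    b0, b_ratio, b_prog = 1.0, -2.5, -0.05   # hypothetical coefficients
    logit = b0 + b_ratio * hcg_ratio + b_prog * progesterone_0
    ep_risk = 1.0 / (1.0 + math.exp(-logit))
    return "low risk (step 2)" if ep_risk < 0.05 else "high risk"

print(triage_pul(progesterone_0=1.5, hcg_0=500, hcg_48=300))
print(triage_pul(progesterone_0=30.0, hcg_0=500, hcg_48=520))
```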
NASA Astrophysics Data System (ADS)
Lee, H.; Seo, D.; Koren, V.
2008-12-01
A prototype 4DVAR (four-dimensional variational) data assimilator for the gridded Sacramento soil-moisture accounting and kinematic-wave routing models in the Hydrology Laboratory's Research Distributed Hydrologic Model (HL-RDHM) has been developed. The prototype assimilates streamflow and in-situ soil moisture data and adjusts gridded precipitation and climatological potential evaporation data to reduce uncertainty in the model initial conditions, for improved monitoring and prediction of streamflow and soil moisture at the outlet and at interior locations within the catchment. Due to the large number of degrees of freedom involved, data assimilation (DA) into distributed hydrologic models is complex. To understand and assess the sensitivity of DA performance to uncertainties in the model initial conditions and in the data, two synthetic experiments have been carried out in an ensemble framework. Results from the synthetic experiments shed much light on the potential and limitations of DA into distributed models. For initial real-world assessment, the prototype has also been applied to the headwater basin at Eldon near the Oklahoma-Arkansas border. We present these results and describe the next steps.
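A strong-constraint 4DVAR prototype can be caricatured with a one-bucket model: adjust the initial storage and a precipitation multiplier so the forecast matches a few streamflow observations, with background (prior) penalties on both. All numbers and error statistics below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

dt, n_steps = 1.0, 10
precip = np.full(n_steps, 3.0)              # prior forcing (mm per step)
obs_t, obs = [3, 6, 9], np.array([8.2, 9.5, 9.9])

def forward(s0, p_mult):
    s, out = s0, []
    for k in range(n_steps):
        s += p_mult * precip[k] - 0.3 * s   # storage + linear outflow
        out.append(0.3 * s)                 # "streamflow"
    return np.array(out)

def cost(z):
    s0, p_mult = z
    # Background term: penalize departures from the prior state and forcing.
    background = ((s0 - 10.0) / 2.0) ** 2 + ((p_mult - 1.0) / 0.2) ** 2
    # Observation term: misfit at the observation times.
    innov = forward(s0, p_mult)[obs_t] - obs
    return background + np.sum((innov / 0.5) ** 2)

z_opt = minimize(cost, x0=[10.0, 1.0]).x
print("analysed initial storage:", round(z_opt[0], 2),
      "precip multiplier:", round(z_opt[1], 3))
```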
Voon, V; Baek, K; Enander, J; Worbe, Y; Morris, L S; Harrison, N A; Robbins, T W; Rück, C; Daw, N
2015-11-03
Our decisions are based on parallel and competing systems of goal-directed and habitual learning, systems which can be impaired in pathological behaviours. Here we focus on the influence of motivation and compare reward and loss outcomes in subjects with obsessive-compulsive disorder (OCD) on model-based goal-directed and model-free habitual behaviours using the two-step task. We further investigate the relationship with acquisition learning using a one-step probabilistic learning task. Forty-eight OCD subjects and 96 healthy volunteers were tested on a reward and 30 OCD subjects and 53 healthy volunteers on the loss version of the two-step task. Thirty-six OCD subjects and 72 healthy volunteers were also tested on a one-step reversal task. OCD subjects compared with healthy volunteers were less goal oriented (model-based) and more habitual (model-free) to reward outcomes with a shift towards greater model-based and lower habitual choices to loss outcomes. OCD subjects also had enhanced acquisition learning to loss outcomes on the one-step task, which correlated with goal-directed learning in the two-step task. OCD subjects had greater stay behaviours or perseveration in the one-step task irrespective of outcome. Compulsion severity was correlated with habitual learning in the reward condition. Obsession severity was correlated with greater switching after loss outcomes. In healthy volunteers, we further show that greater reward magnitudes are associated with a shift towards greater goal-directed learning further emphasizing the role of outcome salience. Our results highlight an important influence of motivation on learning processes in OCD and suggest that distinct clinical strategies based on valence may be warranted.
Rahaman, Obaidur; Estrada, Trilce P; Doren, Douglas J; Taufer, Michela; Brooks, Charles L; Armen, Roger S
2011-09-26
The performances of several two-step scoring approaches for molecular docking were assessed for their ability to predict binding geometries and free energies. Two new scoring functions designed for "step 2 discrimination" were proposed and compared to our CHARMM implementation of the linear interaction energy (LIE) approach using the Generalized-Born with Molecular Volume (GBMV) implicit solvation model. A scoring function S1 was proposed by considering only "interacting" ligand atoms as the "effective size" of the ligand and extended to an empirical regression-based pair potential S2. The S1 and S2 scoring schemes were trained and 5-fold cross-validated on a diverse set of 259 protein-ligand complexes from the Ligand Protein Database (LPDB). The regression-based parameters for S1 and S2 also demonstrated reasonable transferability in the CSARdock 2010 benchmark using a new data set (NRC HiQ) of diverse protein-ligand complexes. The ability of the scoring functions to accurately predict ligand geometry was evaluated by calculating the discriminative power (DP) of the scoring functions to identify native poses. The parameters for the LIE scoring function with the optimal discriminative power (DP) for geometry (step 1 discrimination) were found to be very similar to the best-fit parameters for binding free energy over a large number of protein-ligand complexes (step 2 discrimination). Reasonable performance of the scoring functions in enrichment of active compounds in four different protein target classes established that the parameters for S1 and S2 provided reasonable accuracy and transferability. Additional analysis was performed to definitively separate scoring function performance from molecular weight effects. This analysis included the prediction of ligand binding efficiencies for a subset of the CSARdock NRC HiQ data set where the number of ligand heavy atoms ranged from 17 to 35. This range of ligand heavy atoms is where improved accuracy of predicted ligand efficiencies is most relevant to real-world drug design efforts.
Lombroso, Paul J.; Ogren, Marilee; Kurup, Pradeep; Nairn, Angus C.
2016-01-01
This commentary focuses on potential molecular mechanisms related to the dysfunctional synaptic plasticity that is associated with neurodegenerative disorders such as Alzheimer’s disease and Parkinson’s disease. Specifically, we focus on the role of striatal-enriched protein tyrosine phosphatase (STEP) in modulating synaptic function in these illnesses. STEP affects neuronal communication by opposing synaptic strengthening and does so by dephosphorylating several key substrates known to control synaptic signaling and plasticity. STEP levels are elevated in brains from patients with Alzheimer’s and Parkinson’s disease. Studies in model systems have found that high levels of STEP result in internalization of glutamate receptors as well as inactivation of ERK1/2, Fyn, Pyk2, and other STEP substrates necessary for the development of synaptic strengthening. We discuss the search for inhibitors of STEP activity that may offer potential treatments for neurocognitive disorders that are characterized by increased STEP activity. Future studies are needed to examine the mechanisms of differential and region-specific changes in STEP expression pattern, as such knowledge could lead to targeted therapies for disorders involving disrupted STEP activity.
Studies on the electron acceptors of photosystem two
NASA Astrophysics Data System (ADS)
Bowden, Simon John
The differences in temperature-dependent behaviour and microwave power saturation characteristics between the g=1.9 and g=1.8 QA-Fe2+ signals are described. The dependence of these behavioural differences on the presence or absence of bicarbonate is emphasised. By studying the EPR signals of QA-Fe2+, Q-Fe2+, Q-Fe2+TBTQ- and the oxidised non-haem iron I have found that detergent solubilisation of BBY PS2 preparations with the detergent OGP, at pH 6.0, results in loss of bicarbonate binding. New preparations, including a dodecylmaltoside-prepared CP47, CP43, D1, D2, cytochrome b559 complex, are described which at pH 7.5 retain native bicarbonate binding. These preparations provide a new system for studies into the "bicarbonate effect" because bicarbonate depletion can now be achieved without displacement by another anion. The new OGP particles have been used to investigate both the split pheophytin signal and the two-step redox titration phenomenon associated with this signal. The low potential step of the titration was concluded to be independent of the QA/QA- mid-point potential but was found to be linked to the ability to photoreduce pheophytin; once the low potential component, suggested here to be the fluorescence quencher QL, was reduced, pheophytin photoreduction increased. A model is described to explain the two-step titration and, from analysis of the signal splitting in +/- HCO3- samples, a possible structural role for bicarbonate is proposed. I have probed the structure of the PS2 electron acceptor region with the protease trypsin. The QA iron-semiquinone, oxidised non-haem iron, and cytochrome b559 EPR signals were all found to be susceptible to trypsin damage, while oxygen evolution with ferricyanide was enhanced by protease treatment. The protective effect of calcium ions against trypsin damage was demonstrated and a possible Ca2+ binding site in the binding region identified.
On the stability analysis of the hyperbolic secant-shaped Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Sabari, S.; Murali, R.
2018-05-01
We analyze the stability of the hyperbolic secant-shaped attractive Bose-Einstein condensate in the absence of an external trapping potential. The appropriate theoretical model for the system is described by the nonlinear mean-field Gross-Pitaevskii equation with time-varying two-body interaction effects. Using the variational method, the stability of the system is analyzed under the influence of time-varying two-body interactions. Further, we confirm that the stability of the attractive condensate increases by considering the hyperbolic secant-shaped profile instead of the Gaussian shape. The analytical results are compared with numerical simulation employing the split-step Crank-Nicolson method.
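As a rough illustration of the numerical side, the following minimal 1D sketch applies the split-step Crank-Nicolson idea to the Gross-Pitaevskii equation with a sech-shaped initial profile: a half-step of the nonlinear phase, a Crank-Nicolson solve for the kinetic term, then another nonlinear half-step. The modulation chosen for g(t) is hypothetical, and this is not the authors' variational treatment:

```python
import numpy as np
from scipy.linalg import solve_banded

# Grid and hyperbolic-secant initial profile (trapless, attractive g < 0).
N, L, dt = 512, 40.0, 1e-3
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
psi = (1.0 / np.cosh(x) / np.sqrt(2.0)).astype(complex)

def g(t):
    # Hypothetical time-varying two-body interaction (attractive).
    return -1.0 * (1.0 + 0.1 * np.sin(2.0 * t))

# Crank-Nicolson banded matrix A = I + (i dt/2) H for H = -(1/2) d^2/dx^2.
r = 1j * dt / (4.0 * dx**2)
ab = np.zeros((3, N), dtype=complex)
ab[0, 1:] = -r            # superdiagonal
ab[1, :] = 1.0 + 2.0 * r  # diagonal
ab[2, :-1] = -r           # subdiagonal

for n in range(2000):
    t = n * dt
    psi *= np.exp(-0.5j * dt * g(t) * np.abs(psi)**2)      # half nonlinear step
    rhs = psi.copy()                                        # (I - i dt H / 2) psi
    rhs[1:-1] = psi[1:-1] + r * (psi[2:] - 2 * psi[1:-1] + psi[:-2])
    psi = solve_banded((1, 1), ab, rhs)                     # CN kinetic step
    psi *= np.exp(-0.5j * dt * g(t + dt) * np.abs(psi)**2)  # half nonlinear step

print("norm:", np.trapz(np.abs(psi)**2, x))  # monitor stability/collapse
```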
Descriptive vs. mechanistic network models in plant development in the post-genomic era.
Davila-Velderrain, J; Martinez-Garcia, J C; Alvarez-Buylla, E R
2015-01-01
Network modeling is now a widespread practice in systems biology, as well as in integrative genomics, and it constitutes a rich and diverse scientific research field. A conceptually clear understanding of the reasoning behind the main existing modeling approaches, and their associated technical terminologies, is required to avoid confusion and accelerate the transition towards an undeniably necessary, more quantitative, multidisciplinary approach to biology. Herein, we focus on two main network-based modeling approaches that are commonly used depending on the information available and the intended goals: inference-based methods and system dynamics approaches. As far as data-based network inference methods are concerned, they enable the discovery of potential functional influences among molecular components. On the other hand, experimentally grounded network dynamical models have been shown to be perfectly suited for the mechanistic study of developmental processes. How do these two perspectives relate to each other? In this chapter, we describe and compare both approaches and then apply them to a given specific developmental module. Along with the step-by-step practical implementation of each approach, we also focus on discussing their respective goals, utility, assumptions, and associated limitations. We use the gene regulatory network (GRN) involved in Arabidopsis thaliana Root Stem Cell Niche patterning as our illustrative example. We show that descriptive models based on functional genomics data can provide important background information consistent with experimentally supported functional relationships integrated in mechanistic GRN models. The rationale of analysis and modeling can be applied to any other well-characterized functional developmental module in multicellular organisms, like plants and animals.
Li, R C
1996-01-01
Antibiotic-bacterium interactions are complex in nature. In many cases, bacterial killing does not commence immediately after the addition of an antibiotic, and a lag period is observed. Antibiotic permeation and/or the intermediate steps that exist between antibiotic-receptor binding and expression of cell death are two major possible causes for such a lag period. This study was primarily designed to determine the relationship, if any, between antibiotic concentrations and the lag periods by a modeling approach. Short-term time-kill studies were conducted for amoxicillin, ampicillin, penicillin-G, oxacillin, and dicloxacillin against Escherichia coli. In conjunction with the use of a saturable rate model to describe the concentration-dependent killing process, a first-order induction (initiation) rate constant was used to characterize the delay in bacterial killing during the lag period. For all of the beta-lactams tested, parameters describing the bactericidal effect suggest that amoxicillin and ampicillin were much more potent than oxacillin and dicloxacillin. The induction rate constant estimates for both ampicillin and amoxicillin were found to relate linearly to concentrations. Nevertheless, these induction rate constant estimates were lower for penicillin-G, oxacillin, and dicloxacillin and increased nonlinearly with concentrations until an apparent plateau was observed. These findings support the hypothesis that the permeation process is potentially a rate-limiting step for the rapid bactericidal beta-lactams such as ampicillin and amoxicillin. However, as suggested by previous observations of the various morphological changes induced by beta-lactams, the contribution of the steps following antibiotic-receptor complex formation to the lag period might be significant for the less bactericidal antibiotics such as oxacillin and dicloxacillin. Findings from the present modeling approach can potentially be used to guide future bench experimentation. PMID:8891135
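A minimal sketch of the kind of model described: a saturable (concentration-dependent) kill term multiplied by a first-order induction factor that produces the lag. Parameter values are invented for illustration and are not the fitted estimates from the study:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters: growth rate, maximal kill rate, half-saturation
# concentration, and the first-order induction rate constant for the lag.
k_growth, k_max, C50, k_ind = 0.7, 2.5, 2.0, 1.2   # 1/h, 1/h, mg/L, 1/h
C = 8.0                                             # antibiotic conc., mg/L

def dNdt(t, N):
    # Saturable kill rate, switched on gradually by the induction factor.
    kill = k_max * C / (C50 + C) * (1.0 - np.exp(-k_ind * t))
    return (k_growth - kill) * N

sol = solve_ivp(dNdt, [0.0, 6.0], [1e6], t_eval=np.linspace(0, 6, 25))
print(np.log10(sol.y[0]))   # CFU/mL trajectory: growth, lag, then killing
```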
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.
2013-01-01
A two-step ensemble recentering Kalman filter (ERKF) analysis scheme is introduced. The algorithm consists of a recentering step followed by an ensemble Kalman filter (EnKF) analysis step. The recentering step is formulated so as to adjust the prior distribution of an ensemble of model states such that the deviations of individual samples from the sample mean are unchanged but the original sample mean is shifted to the prior position of the most likely particle, where the likelihood of each particle is measured in terms of closeness to a chosen subset of the observations. The computational cost of the ERKF is essentially the same as that of an EnKF of the same size. The ERKF is applied to the assimilation of Argo temperature profiles into the OGCM component of an ensemble of NASA GEOS-5 coupled models. Unassimilated Argo salt data are used for validation. A surprisingly small number (16) of model trajectories is sufficient to significantly improve model estimates of salinity over estimates from an ensemble run without assimilation. The two-step algorithm also performs better than the EnKF, although its performance is degraded in poorly observed regions.
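A compact sketch of the two-step idea, under simplifying assumptions: a linear observation operator, likelihood measured against all observations rather than a chosen subset, and a stochastic perturbed-observation EnKF for the analysis step. This is an illustration, not the GEOS-5 implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def erkf_update(X, y, H, R):
    """One ERKF cycle: recentering step, then a stochastic EnKF step.
    X: (n_state, n_ens) prior ensemble; y: obs; H: obs operator; R: obs error cov."""
    # Recentering: shift the ensemble mean onto the most likely member,
    # leaving deviations from the mean unchanged.
    innov = y[:, None] - H @ X
    misfit = np.sum(innov * np.linalg.solve(R, innov), axis=0)
    best = int(np.argmin(misfit))
    X = X - X.mean(axis=1, keepdims=True) + X[:, [best]]

    # EnKF analysis with perturbed observations.
    m = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    PfHt = A @ (H @ A).T / (m - 1)
    K = PfHt @ np.linalg.inv(H @ PfHt + R)
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + K @ (Yp - H @ X)

# Toy usage: 16 members, 10 state variables, 3 observed components.
X = rng.normal(size=(10, 16))
H = np.zeros((3, 10)); H[[0, 1, 2], [0, 4, 8]] = 1.0
Xa = erkf_update(X, rng.normal(size=3), H, 0.25 * np.eye(3))
print(Xa.shape)
```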
Rani, R Uma; Kumar, S Adish; Kaliappan, S; Yeom, Ick-Tae; Banu, J Rajesh
2014-05-01
High-efficiency resource recovery from dairy waste activated sludge (WAS) has been a focus of attention. An investigation into the influence of two-step sono-alkalization pretreatment (using different alkaline agents, pH and sonic reaction times) on sludge reduction potential in a semi-continuous anaerobic reactor was performed for the first time in the literature. First, the effect of sludge pretreatment was evaluated by COD solubilization, suspended solids reduction and biogas production. At the optimized condition (4172 kJ/kg TS of supplied energy for NaOH - pH 10), COD solubilization, suspended solids reduction and biogas production were 59%, 46% and 80% higher than the control, respectively. In order to clearly describe the hydrolysis of waste activated sludge during sono-alkalization pretreatment by a two-step process, concentrations of ribonucleic acid (RNA) and bound extracellular polymeric substance (EPS) were also measured. Second, semi-continuous process performance was studied in a lab-scale semi-continuous anaerobic reactor (5 L) with a 4 L working volume. Of the three SRTs operated, an SRT of 15 d was found to be most appropriate for economic operation of the reactor. Combining pretreatment with anaerobic digestion led to 58% and 62% suspended solids and volatile solids reduction, respectively, with an improvement of 83% in biogas production. Thus, two-step sono-alkalization pretreatment laid the basis for enhancing the anaerobic digestion potential of dairy WAS. Copyright © 2013 Elsevier B.V. All rights reserved.
Alonso, Ariel; Van der Elst, Wim; Molenberghs, Geert; Buyse, Marc; Burzykowski, Tomasz
2016-09-01
In this work a new metric of surrogacy, the so-called individual causal association (ICA), is introduced using information-theoretic concepts and a causal inference model for a binary surrogate and true endpoint. The ICA has a simple and appealing interpretation in terms of uncertainty reduction and, in some scenarios, it seems to provide a more coherent assessment of the validity of a surrogate than existing measures. The identifiability issues are tackled using a two-step procedure. In the first step, the region of the parametric space of the distribution of the potential outcomes, compatible with the data at hand, is geometrically characterized. Further, in a second step, a Monte Carlo approach is proposed to study the behavior of the ICA on the previous region. The method is illustrated using data from the Collaborative Initial Glaucoma Treatment Study. A newly developed and user-friendly R package Surrogate is provided to carry out the evaluation exercise. © 2016, The International Biometric Society.
Encounter times of chromatin loci influenced by polymer decondensation
NASA Astrophysics Data System (ADS)
Amitai, A.; Holcman, D.
2018-03-01
The time for a DNA sequence to find its homologous counterpart depends on a long random search inside the cell nucleus. Using polymer models, we compute here the mean first encounter time (MFET) between two sites located on two different polymer chains and confined locally by potential wells. We find that reducing tethering forces acting on the polymers results in local decondensation, and numerical simulations of the polymer model show that these changes are associated with a reduction of the MFET by several orders of magnitude. We derive here a new asymptotic formula for the MFET, confirmed by Brownian simulations. We conclude from the present modeling approach that the fast search for homology is mediated by a local chromatin decondensation due to the release of multiple chromatin tethering forces. The present scenario could explain how the homologous recombination pathway for double-stranded DNA repair is controlled by its random search step.
Xu, Guojie; Cai, Wei; Gao, Wei; Liu, Chunsheng
2016-10-01
Glycyrrhizin is an important bioactive compound that is used clinically to treat chronic hepatitis and is also used as a sweetener world-wide. However, the key UDP-dependent glucuronosyltransferases (UGATs) involved in the biosynthesis of glycyrrhizin remain unknown. To discover unknown UGATs, we fully annotated potential UGATs from Glycyrrhiza uralensis using deep transcriptome sequencing. The catalytic functions of candidate UGATs were determined by an in vitro enzyme assay. Systematically screening 434 potential UGATs, we unexpectedly found one unique GuUGAT that was able to catalyse the glucuronosylation of glycyrrhetinic acid to directly yield glycyrrhizin via continuous two-step glucuronosylation. Expression analysis further confirmed the key role of GuUGAT in the biosynthesis of glycyrrhizin. Site-directed mutagenesis revealed that Gln-352 may be important for the initial step of glucuronosylation, and His-22, Trp-370, Glu-375 and Gln-392 may be important residues for the second step of glucuronosylation. Notably, the ability of GuUGAT to catalyse a continuous two-step glucuronosylation reaction was determined to be unprecedented among known glycosyltransferases of bioactive plant natural products. Our findings increase the understanding of traditional glycosyltransferases and pave the way for the complete biosynthesis of glycyrrhizin. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
A Pilot Study of Gait Function in Farmworkers in Eastern North Carolina.
Nguyen, Ha T; Kritchevsky, Stephen B; Foxworth, Judy L; Quandt, Sara A; Summers, Phillip; Walker, Francis O; Arcury, Thomas A
2015-01-01
Farmworkers endure many job-related hazards, including fall-related work injuries. Gait analysis may be useful in identifying potential fallers. The goal of this pilot study was to explore differences in gait between farmworkers and non-farmworkers. The sample included 16 farmworkers and 24 non-farmworkers. Gait variables were collected using the portable GAITRite system, a 16-foot computerized walkway. Generalized linear regression models were used to examine group differences. All models were adjusted for two established confounders, age and body mass index. There were no significant differences in stride length, step length, double support time, and base of support; but farmworkers had greater irregularity of stride length (P = .01) and step length (P = .08). Farmworkers performed significantly worse on gait velocity (P = .003) and cadence (P < .001) relative to non-farmworkers. We found differences in gait function between farmworkers and non-farmworkers. These findings suggest that measuring gait with a portable walkway system is feasible and informative in farmworkers and may possibly be of use in assessing fall risk.
Motaghed, M; Mousavi, S M; Rastegar, S O; Shojaosadati, S A
2014-11-01
The present study evaluated the potential of Bacillus megaterium as a cyanogenic bacterium to produce cyanide for solubilization of platinum and rhenium from a spent refinery catalyst. Response surface methodology was applied to study the effects of and interaction between two main effective parameters, initial glycine concentration and pulp density. Maximum Pt and Re recoveries of 15.7% and 98%, respectively, were obtained under optimum conditions of 12.8 g/l initial glycine concentration and 4% (w/v) pulp density after 7 days. Increases in the free cyanide concentration to 3.6 mg/l, variation of the pH from 6.7 to 9, and increases in the dissolved oxygen from 2 to 5 mg/l demonstrated the growth characteristics of B. megaterium during the bioleaching process. The modified shrinking core model was used to determine the rate-limiting step of the process. It was found that diffusion through the product layer is the rate-controlling step. Copyright © 2014 Elsevier Ltd. All rights reserved.
Optimal Elevation and Configuration of Hanford's Double-Shell Tank Waste Mixer Pumps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onishi, Yasuo; Yokuda, Satoru T.; Majumder, Catherine A.
The objective of this study was to compare the mixing performance of the Lawrence pump, which has injection nozzles at the top, with an alternative pump that has injection nozzles at the bottom, and to determine the optimal elevation for the alternative pump. Sixteen cases were evaluated: two sludge thicknesses at eight levels. A two-step evaluation approach was used: Step 1 to evaluate all 16 cases with the non-rotating mixer pump model and Step 2 to further evaluate four of those cases with the more realistic rotating mixer pump model. The TEMPEST code was used.
Functional-to-form mapping for assembly design automation
NASA Astrophysics Data System (ADS)
Xu, Z. G.; Liu, W. M.; Shen, W. D.; Yang, D. Y.; Liu, T. T.
2017-11-01
Assembly-level function-to-form mapping is the most effective procedure towards design automation. The research work mainly includes the assembly-level function definitions, the product network model, and the two-step mapping mechanisms. The function-to-form mapping is divided into two steps: the first-step mapping, from function to behavior, and the second-step mapping, from behavior to structure. After the first-step mapping, the three-dimensional transmission chain (or 3D sketch) is studied, and feasible design computing tools are developed. The mapping procedure is relatively easy to implement interactively, but quite difficult to complete automatically. Manual, semi-automatic, automatic, and interactive modification of the mapping model are therefore studied. A function-to-form (F-F) mapping process for a mechanical hand is illustrated to verify the design methodologies.
Analysis, design, fabrication, and performance of three-dimensional braided composites
NASA Astrophysics Data System (ADS)
Kostar, Timothy D.
1998-11-01
Cartesian 3-D (track and column) braiding as a method of composite preforming has been investigated. A complete analysis of the process was conducted to understand the limitations and potentials of the process. Knowledge of the process was enhanced through development of a computer simulation, and it was discovered that individual control of each track and column and multiple-step braid cycles greatly increases possible braid architectures. Derived geometric constraints coupled with the fundamental principles of Cartesian braiding resulted in an algorithm to optimize preform geometry in relation to processing parameters. The design of complex and unusual 3-D braids was investigated in three parts: grouping of yarns to form hybrid composites via an iterative simulation; design of composite cross-sectional shape through implementation of the Universal Method; and a computer algorithm developed to determine the braid plan based on specified cross-sectional shape. Several 3-D braids, which are the result of variations or extensions to Cartesian braiding, are presented. An automated four-step braiding machine with axial yarn insertion has been constructed and used to fabricate two-step, double two-step, four-step, and four-step with axial and transverse yarn insertion braids. A working prototype of a multi-step braiding machine was used to fabricate four-step braids with surrogate material insertion, unique hybrid structures from multiple track and column displacement and multi-step cycles, and complex-shaped structures with constant or varying cross-sections. Braid materials include colored polyester yarn to study the yarn grouping phenomena, Kevlar, glass, and graphite for structural reinforcement, and polystyrene, silicone rubber, and fasteners for surrogate material insertion. A verification study for predicted yarn orientation and volume fraction was conducted, and a topological model of 3-D braids was developed. The solid model utilizes architectural parameters, generated from the process simulation, to determine the composite elastic properties. Methods of preform consolidation are investigated and the results documented. The extent of yarn deformation (packing) resulting from preform consolidation was investigated through cross-sectional micrographs. The fiber volume fraction of select hybrid composites was measured and representative unit cells are suggested. Finally, a comparison study of the elastic performance of Kevlar/epoxy and carbon/Kevlar hybrid composites was conducted.
Shaffer, David W.; Xie, Yan; Szalda, David J.; ...
2016-11-01
In order to gain a deeper mechanistic understanding of water oxidation by [(bda)Ru(L) 2] catalysts (bdaH 2 = [2,2'-bipyridine]-6,6'-dicarboxylic acid; L = pyridine-type ligand), a series of modified catalysts with one and two trifluoromethyl groups in the 4 position of the bda 2– ligand was synthesized and studied using stopped-flow kinetics. The additional $-$CF 3 groups increased the oxidation potentials for the catalysts and enhanced the rate of electrocatalytic water oxidation at low pH. Stopped-flow measurements of cerium(IV)-driven water oxidation at pH 1 revealed two distinct kinetic regimes depending on catalyst concentration. At relatively high catalyst concentration (ca. ≥10 –4more » M), the rate-determining step (RDS) was a proton-coupled oxidation of the catalyst by cerium(IV) with direct kinetic isotope effects (KIE > 1). At low catalyst concentration (ca. ≤10 –6 M), the RDS was a bimolecular step with k H/k D ≈ 0.8. The results support a catalytic mechanism involving coupling of two catalyst molecules. The rate constants for both RDSs were determined for all six catalysts studied. The presence of $-$CF 3 groups had inverse effects on the two steps, with the oxidation step being fastest for the unsubstituted complexes and the bimolecular step being faster for the most electron-deficient complexes. Finally, though the axial ligands studied here did not significantly affect the oxidation potentials of the catalysts, the nature of the ligand was found to be important not only in the bimolecular step but also in facilitating electron transfer from the metal center to the sacrificial oxidant.« less
On the exact solvability of the anisotropic central spin model: An operator approach
NASA Astrophysics Data System (ADS)
Wu, Ning
2018-07-01
Using an operator approach based on a commutator scheme that has been previously applied to Richardson's reduced BCS model and the inhomogeneous Dicke model, we obtain general exact solvability requirements for an anisotropic central spin model with XXZ-type hyperfine coupling between the central spin and the spin bath, without any prior knowledge of integrability of the model. We outline basic steps of the usage of the operators approach, and pedagogically summarize them into two Lemmas and two Constraints. Through a step-by-step construction of the eigen-problem, we show that the condition gj‧2 - gj2 = c naturally arises for the model to be exactly solvable, where c is a constant independent of the bath-spin index j, and {gj } and { gj‧ } are the longitudinal and transverse hyperfine interactions, respectively. The obtained conditions and the resulting Bethe ansatz equations are consistent with that in previous literature.
A NetCDF version of the two-dimensional energy balance model based on the full multigrid algorithm
NASA Astrophysics Data System (ADS)
Zhuang, Kelin; North, Gerald R.; Stevens, Mark J.
A NetCDF version of the two-dimensional energy balance model based on the full multigrid method in Fortran is introduced for both pedagogical and research purposes. Based on the land-sea-ice distribution, orbital elements, greenhouse gases concentration, and albedo, the code calculates the global seasonal surface temperature. A step-by-step guide with examples is provided for practice.
Ju, Feng; Lee, Hyo Kyung; Yu, Xinhua; Faris, Nicholas R; Rugless, Fedoria; Jiang, Shan; Li, Jingshan; Osarogiagbon, Raymond U
2017-12-01
The process of lung cancer care from initial lesion detection to treatment is complex, involving multiple steps, each introducing the potential for substantial delays. Identifying the steps with the greatest delays enables a focused effort to improve the timeliness of care-delivery, without sacrificing quality. We retrospectively reviewed clinical events from initial detection, through histologic diagnosis, radiologic and invasive staging, and medical clearance, to surgery for all patients who had an attempted resection of a suspected lung cancer in a community healthcare system. We used a computer process modeling approach to evaluate delays in care delivery, in order to identify potential 'bottlenecks' in waiting time, the reduction of which could produce greater care efficiency. We also conducted 'what-if' analyses to predict the relative impact of simulated changes in the care delivery process to determine the most efficient pathways to surgery. The waiting time between radiologic lesion detection and diagnostic biopsy, and the waiting time from radiologic staging to surgery were the two most critical bottlenecks impeding efficient care delivery (more than 3 times larger compared to reducing other waiting times). Additionally, instituting surgical consultation prior to cardiac consultation for medical clearance and decreasing the waiting time between CT scans and diagnostic biopsies, were potentially the most impactful measures to reduce care delays before surgery. Rigorous computer simulation modeling, using clinical data, can provide useful information to identify areas for improving the efficiency of care delivery by process engineering, for patients who receive surgery for lung cancer.
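The kind of "what-if" process analysis described can be sketched as a simple discrete-event simulation in which each care step is an exponential waiting time; the step names and mean waits below are hypothetical placeholders, not the study's data:

```python
import random
import simpy

random.seed(1)

def mean_days_to_surgery(mean_waits, n_patients=500):
    """Average total days from lesion detection to surgery, with each care
    step modeled as an exponentially distributed wait (hypothetical)."""
    env = simpy.Environment()
    totals = []

    def pathway(env):
        start = env.now
        for w in mean_waits.values():
            yield env.timeout(random.expovariate(1.0 / w))
        totals.append(env.now - start)

    for _ in range(n_patients):
        env.process(pathway(env))
    env.run()
    return sum(totals) / len(totals)

baseline = {"detection_to_biopsy": 16, "biopsy_to_staging": 7,
            "staging_to_surgery": 24, "medical_clearance": 10}  # days, made up
whatif = dict(baseline, detection_to_biopsy=8)                  # halve one wait
print(mean_days_to_surgery(baseline), mean_days_to_surgery(whatif))
```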
Ye, Jianchu; Tu, Song; Sha, Yong
2010-10-01
For two-step transesterification biodiesel production from sunflower oil, based on the kinetics model of the homogeneous base-catalyzed transesterification and the liquid-liquid phase equilibrium of the transesterification product, the total methanol/oil mole ratio, the total reaction time, and the split ratios of methanol and reaction time between the two reactors in the two-step reaction stage are determined quantitatively. In consideration of the transesterification intermediate product, both the traditional distillation separation process and an improved separation process for the two-step reaction product are investigated in detail by means of rigorous process simulation. In comparison with the traditional distillation process, the improved separation process of the two-step reaction product has a distinct advantage in energy duty and equipment requirements due to replacement of the costly methanol-biodiesel distillation column. Copyright 2010 Elsevier Ltd. All rights reserved.
Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.
Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng
2018-04-15
This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.
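A point-estimate sketch of the two-step procedure on simulated data: step 1 fits mediator-probability models in each treatment arm, step 2 forms ratio-of-mediator-probability weights and the weighted effect decomposition. This sketch deliberately omits the paper's actual contribution, the stacked-score standard errors; a bootstrap or the stacking approach would be needed for valid inference:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Simulated data: covariate X, randomized treatment T, binary mediator M, outcome Y.
n = 2000
X = rng.normal(size=(n, 1))
T = rng.binomial(1, 0.5, n)
M = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 0] + 1.0 * T))))
Y = 1.0 + 0.8 * T + 1.2 * M + 0.5 * X[:, 0] + rng.normal(size=n)

# Step 1: propensity-score-type models for the mediator in each arm.
p1 = LogisticRegression().fit(X[T == 1], M[T == 1]).predict_proba(X)[:, 1]
p0 = LogisticRegression().fit(X[T == 0], M[T == 0]).predict_proba(X)[:, 1]

# Step 2: ratio-of-mediator-probability weights for treated units,
# estimating the counterfactual mean E[Y(1, M(0))].
w = np.where(M == 1, p0 / p1, (1 - p0) / (1 - p1))
y1_m0 = np.average(Y[T == 1], weights=w[T == 1])
indirect = Y[T == 1].mean() - y1_m0   # E[Y(1,M(1))] - E[Y(1,M(0))]
direct = y1_m0 - Y[T == 0].mean()     # E[Y(1,M(0))] - E[Y(0,M(0))]
print(indirect, direct)
```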
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-07
... model designed to provide correctional agencies with a step-by-step approach to promote systemic change..., evidence-based approaches, evaluate their potential to inform correctional policy and practice, create... outside the corrections field to develop interdisciplinary approaches and draw on professional networks...
Sundaram, Smitha; Kolb, Gunther; Hessel, Volker; Wang, Qi
2017-03-29
Two novel routes for the production of gasoline from pyrolysis oil (from timber pine) and biogas (from ley grass) are simulated, followed by a cradle-to-gate life-cycle assessment of the two production routes. The main aim of this work is to conduct a holistic evaluation of the proposed routes and benchmark them against the conventional route of producing gasoline from natural gas. A previously commercialized method of synthesizing gasoline involves conversion of natural gas to syngas, which is further converted to methanol, and then as a last step, the methanol is converted to gasoline. In the new proposed routes, the syngas production step is different; syngas is produced from a mixture of pyrolysis oil and biogas in the following two ways: (i) autothermal reforming of pyrolysis oil and biogas, in which there are two reactions in one reactor (ATR) and (ii) steam reforming of pyrolysis oil and catalytic partial oxidation of biogas, in which there are separated but thermally coupled reactions and reactors (CR). The other two steps, to produce methanol from syngas and gasoline from methanol, remain the same. The purpose of this simulation is to have an ex-ante comparison of the performance of the new routes against a reference, in terms of energy and sustainability. Thus, at this stage of simulations, nonrigorous, equilibrium-based models have been used for reactors, which will give the best-case conversions for each step. For the conventional production route, conversion and yield data available in the literature have been used, wherever available. The results of the process design showed that the second method (separate, but thermally coupled reforming) has a carbon efficiency of 0.53, compared to the conventional route (0.48), as well as the first route (0.40). The life-cycle assessment results revealed that the newly proposed processes have a clear advantage over the conventional process in some categories, particularly the global warming potential and primary energy demand; but there are also some in which the conventional route fares better, such as the human toxicity potential and the categories related to land-use change such as the biotic production potential and the groundwater resistance indicator. The results confirmed that even though using biomass such as timber pine as raw material does result in reduced greenhouse gas emissions, the activities associated with biomass, such as cultivation and harvesting, contribute to the environmental footprint, particularly the land-use change categories. This gives an impetus to investigate the potential of agricultural, forest, or even food waste, which would be likely to have a substantially lower impact on the environment. Moreover, it could be seen that the source of electricity used in the process has a major impact on the environmental performance.
A new methodology to determine kinetic parameters for one- and two-step chemical models
NASA Technical Reports Server (NTRS)
Mantel, T.; Egolfopoulos, F. N.; Bowman, C. T.
1996-01-01
In this paper, a new methodology to determine kinetic parameters for simple chemical models and simple transport properties classically used in DNS of premixed combustion is presented. First, a one-dimensional code is used to compute steady unstrained laminar methane-air flames in order to verify intrinsic features of laminar flames such as the burning velocity and the temperature and concentration profiles. Second, the flame response to steady and unsteady strain in the opposed-jet configuration is numerically investigated. It appears that for a well-determined set of parameters, one- and two-step mechanisms reproduce the extinction limit of a laminar flame submitted to a steady strain. Computations with the GRI-Mech mechanism (177 reactions, 39 species) and multicomponent transport properties are used to validate these simplified models. A sensitivity analysis of the preferential diffusion of heat and reactants when the Lewis number is close to unity indicates that the response of the flame to an oscillating strain is very sensitive to this number. As an application of this methodology, the interaction between a two-dimensional vortex pair and a premixed laminar flame is simulated by Direct Numerical Simulation (DNS) using the one- and two-step mechanisms. Comparison with the experimental results of Samaniego et al. (1994) shows a significant improvement in the description of the interaction when the two-step model is used.
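For orientation, the "one-step" model referred to above is conventionally a single irreversible Arrhenius reaction, and a common two-step variant adds CO as an intermediate. The expressions below are the standard generic forms, not the specific parameters calibrated in the paper:

```latex
% Generic one-step scheme, F + s O -> (1 + s) P, with a single Arrhenius rate:
\dot{\omega} = A\,\rho^{2}\,Y_{F}\,Y_{O}\,\exp\!\left(-\frac{T_{a}}{T}\right)

% Generic two-step variant with CO as an intermediate:
%   (1)  F + O -> CO + H2O        (irreversible, Arrhenius rate as above)
%   (2)  CO + (1/2) O2 <-> CO2    (reversible, equilibrates at high temperature)
```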
Nuclear fusion during yeast mating occurs by a three-step pathway.
Melloy, Patricia; Shen, Shu; White, Erin; McIntosh, J Richard; Rose, Mark D
2007-11-19
In Saccharomyces cerevisiae, mating culminates in nuclear fusion to produce a diploid zygote. Two models for nuclear fusion have been proposed: a one-step model in which the outer and inner nuclear membranes and the spindle pole bodies (SPBs) fuse simultaneously and a three-step model in which the three events occur separately. To differentiate between these models, we used electron tomography and time-lapse light microscopy of early stage wild-type zygotes. We observe two distinct SPBs in approximately 80% of zygotes that contain fused nuclei, whereas we only see fused or partially fused SPBs in zygotes in which the site of nuclear envelope (NE) fusion is already dilated. This demonstrates that SPB fusion occurs after NE fusion. Time-lapse microscopy of zygotes containing fluorescent protein tags that localize to either the NE lumen or the nucleoplasm demonstrates that outer membrane fusion precedes inner membrane fusion. We conclude that nuclear fusion occurs by a three-step pathway.
Fast intersection detection algorithm for PC-based robot off-line programming
NASA Astrophysics Data System (ADS)
Fedrowitz, Christian H.
1994-11-01
This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG model). The collision detection problem is divided into two steps. In the first step, the complexity of the problem is reduced in linear time. In the second step, the remaining solids are tested for intersection. For this, the simplex algorithm, known from linear optimization, is used: it computes a point common to two convex polyhedra, and the polyhedra intersect if such a point exists. For the simplified geometrical model of Ropsus, this step also runs in linear time. In conjunction with the first step, the resulting collision detection algorithm requires linear time overall. Moreover, it computes the resulting intersection polyhedron using the dual transformation.
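The second step, finding a point common to two convex polyhedra, is a linear feasibility problem. The sketch below poses it as an LP with a zero objective, using scipy's LP solver as a stand-in for the simplex implementation described:

```python
import numpy as np
from scipy.optimize import linprog

def polyhedra_intersect(A1, b1, A2, b2):
    """Return a point common to {x: A1 x <= b1} and {x: A2 x <= b2},
    or None if the two convex polyhedra do not intersect."""
    A = np.vstack([A1, A2])
    b = np.concatenate([b1, b2])
    # Pure feasibility problem: minimize the zero objective.
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    return res.x if res.status == 0 else None

# Two unit cubes, shifted: they overlap in 0.5 <= x, y, z <= 1.
I = np.eye(3)
A_box = np.vstack([I, -I])
b1 = np.concatenate([np.ones(3), np.zeros(3)])               # [0, 1]^3
b2 = np.concatenate([1.5 * np.ones(3), -0.5 * np.ones(3)])   # [0.5, 1.5]^3
print(polyhedra_intersect(A_box, b1, A_box, b2))
```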
TG study of the Li0.4Fe2.4Zn0.2O4 ferrite synthesis
NASA Astrophysics Data System (ADS)
Lysenko, E. N.; Nikolaev, E. V.; Surzhikov, A. P.
2016-02-01
In this paper, the kinetics of Li-Zn ferrite synthesis was studied using the thermogravimetry (TG) method through the simultaneous application of non-linear regression to several measurements run at different heating rates (multivariate non-linear regression). Using TG curves obtained at four heating rates and the Netzsch Thermokinetics software package, kinetic models with minimal adjustable parameters were selected to quantitatively describe the reaction of Li-Zn ferrite synthesis. The experimental TG curves clearly suggest a two-step process for the ferrite synthesis, and therefore a model-fitting kinetic analysis based on multivariate non-linear regression was conducted. The complex reaction was described by a two-step reaction scheme consisting of sequential reaction steps. The best results were obtained using the Jander three-dimensional diffusion model for the first step and the Ginstling-Brounshtein model for the second step. The kinetic parameters for the lithium-zinc ferrite synthesis reaction were determined and discussed.
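For reference, the integral forms g(α) = kt of the two diffusion models named above are standard, with α the degree of conversion; the fitted rate constants from the study are not reproduced here:

```latex
% Jander three-dimensional diffusion model (first step):
\left[\,1 - (1 - \alpha)^{1/3}\right]^{2} = k_{1}\,t

% Ginstling-Brounshtein model (second step):
1 - \frac{2\alpha}{3} - (1 - \alpha)^{2/3} = k_{2}\,t
```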
Plummer, Scott M; Plummer, Mark A; Merkel, Patricia A; Hagen, Moira; Biddle, Jennifer F; Waidner, Lisa A
2016-11-01
Hydrogenases are enzymes that play a key role in controlling excess reducing equivalents in both photosynthetic and anaerobic organisms. This enzyme is viewed as potentially important for the industrial generation of hydrogen gas; however, insufficient hydrogen production has impeded its use in a commercial process. Here, we explore the potential to circumvent this problem by directly evolving the Fe-Fe hydrogenase genes from two species of Clostridia bacteria. In addition, a computational model based on these mutant sequences was developed and used as a predictive aid for the isolation of enzymes with even greater efficiency in hydrogen production. Two of the improved mutants show a logarithmic increase in hydrogen production in our in vitro assay. Furthermore, the model predicts hydrogenase sequences with hydrogen production as high as 540-fold that of the positive control. Taken together, these results demonstrate the potential of directed evolution to improve native bacterial hydrogenases as a first step toward improved hydrogenase activity, further in silico prediction, and, finally, construction and demonstration of an improved algal hydrogenase in an in vivo assay of C. reinhardtii hydrogen production. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Renjie; Evans, James W.; Oliveira, Tiago J.
2016-04-08
Here, a discrete version of deposition-diffusion equations appropriate for description of step flow on a vicinal surface is analyzed for a two-dimensional grid of adsorption sites representing the stepped surface and explicitly incorporating kinks along the step edges. Model energetics and kinetics appropriately account for binding of adatoms at steps and kinks, distinct terrace and edge diffusion rates, and possible additional barriers for attachment to steps. Analysis of adatom attachment fluxes as well as limiting values of adatom densities at step edges for nonuniform deposition scenarios allows determination of both permeability and kinetic coefficients. Behavior of these quantities is assessed as a function of key system parameters including kink density, step attachment barriers, and the step edge diffusion rate.
NASA Astrophysics Data System (ADS)
Rosero-Vlasova, O.; Borini Alves, D.; Vlassova, L.; Perez-Cabello, F.; Montorio Lloveria, R.
2017-10-01
Deforestation in the Amazon basin, due among other factors to frequent wildfires, demands continuous post-fire monitoring of soil and vegetation. Thus, the study posed two objectives: (1) evaluate the capacity of Visible - Near InfraRed - ShortWave InfraRed (VIS-NIR-SWIR) spectroscopy to estimate soil organic matter (SOM) in fire-affected soils, and (2) assess the feasibility of SOM mapping from satellite images. For this purpose, 30 soil samples (surface layer) were collected in 2016 in areas of grass and riparian vegetation of Campos Amazonicos National Park, Brazil, repeatedly affected by wildfires. Standard laboratory procedures were applied to determine SOM. Reflectance spectra of soils were obtained under controlled laboratory conditions using a Fieldspec4 spectroradiometer (spectral range 350-2500 nm). Measured spectra were resampled to simulate reflectances for Landsat-8, Sentinel-2 and EnMap spectral bands, used as predictors in SOM models developed using Partial Least Squares regression and a step-down variable selection algorithm (PLSR-SD). The best fit was achieved with models based on reflectances simulated for EnMap bands (R2=0.93; R2cv=0.82 and NMSE=0.07; NMSEcv=0.19). The model uses only 8 out of 244 predictors (bands) chosen by the step-down variable selection algorithm. The least reliable estimates (R2=0.55 and R2cv=0.40 and NMSE=0.43; NMSEcv=0.60) resulted from the Landsat model, while the Sentinel-2 model showed R2=0.68 and R2cv=0.63; NMSE=0.31 and NMSEcv=0.38. The results confirm the high potential of VIS-NIR-SWIR spectroscopy for SOM estimation. Application of step-down variable selection produces sparser and better-fitting models. Finally, SOM can be estimated with acceptable accuracy (NMSE 0.35) from EnMap and Sentinel-2 data, enabling mapping and analysis of the impacts of repeated wildfires on soils in the study area.
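The PLSR-with-step-down-selection workflow can be sketched as a backward elimination loop around a cross-validated PLS fit. The data below are random placeholders standing in for resampled band reflectances, and the selection rule is a simplified stand-in for the step-down algorithm used in the study:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical data: 30 soil spectra resampled to 8 sensor bands; SOM in %.
X = rng.normal(size=(30, 8))
y = 2.0 + X[:, 1] - 0.5 * X[:, 5] + 0.1 * rng.normal(size=30)

def cv_r2(cols):
    pls = PLSRegression(n_components=min(3, len(cols)))
    return cross_val_score(pls, X[:, cols], y, cv=5, scoring="r2").mean()

# Step-down (backward) selection: drop the band whose removal most improves
# cross-validated R2, until no removal helps.
cols = list(range(X.shape[1]))
best = cv_r2(cols)
improved = True
while improved and len(cols) > 1:
    improved = False
    for c in list(cols):
        trial = [k for k in cols if k != c]
        score = cv_r2(trial)
        if score > best:
            best, cols, improved = score, trial, True
print(cols, round(best, 2))   # retained bands and their CV performance
```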
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irene Farnham and Sam Marutzky
2011-07-01
This CADD/CAP follows the Corrective Action Investigation (CAI) stage, which results in development of a set of contaminant boundary forecasts produced from groundwater flow and contaminant transport modeling of the Frenchman Flat CAU. The Frenchman Flat CAU is located in the southeastern portion of the NNSS and comprises 10 underground nuclear tests. The tests were conducted between 1965 and 1971 and resulted in the release of radionuclides in the subsurface in the vicinity of the test cavities. Two important aspects of the corrective action process are presented within this CADD/CAP. The CADD portion describes the results of the Frenchman Flat CAU data-collection and modeling activities completed during the CAI stage. The corrective action objectives and the actions recommended to meet the objectives are also described. The CAP portion describes the corrective action implementation plan. The CAP begins with the presentation of CAU regulatory boundary objectives and initial use restriction boundaries that are identified and negotiated by NNSA/NSO and the Nevada Division of Environmental Protection (NDEP). The CAP also presents the model evaluation process designed to build confidence that the flow and contaminant transport modeling results can be used for the regulatory decisions required for CAU closure. The first two stages of the strategy have been completed for the Frenchman Flat CAU. A value of information analysis and a CAIP were developed during the CAIP stage. During the CAI stage, a CAIP addendum was developed, and the activities proposed in the CAIP and addendum were completed. These activities included hydrogeologic investigation of the underground testing areas, aquifer testing, isotopic and geochemistry-based investigations, and integrated geophysical investigations. After these investigations, a groundwater flow and contaminant transport model was developed to forecast contaminant boundaries that enclose areas potentially exceeding the Safe Drinking Water Act radiological standards at any time within 1,000 years. An external peer review of the groundwater flow and contaminant transport model was completed, and the model was accepted by NDEP to allow advancement to the CADD/CAP stage. The CADD/CAP stage focuses on model evaluation to ensure that existing models provide adequate guidance for the regulatory decisions regarding monitoring and institutional controls. Data-collection activities are identified and implemented to address key uncertainties in the flow and contaminant transport models. During the CR stage, final use restriction boundaries and CAU regulatory boundaries are negotiated and established; a long-term closure monitoring program is developed and implemented; and the approaches and policies for institutional controls are initiated. The model evaluation process described in this plan consists of an iterative series of five steps designed to build confidence in the site conceptual model and model forecasts. These steps are designed to identify data-collection activities (Step 1), document the data-collection activities in the CADD/CAP (Step 2), and perform the activities (Step 3). The new data are then assessed; the model is refined, if necessary; the modeling results are evaluated; and a model evaluation report is prepared (Step 4). The assessments are made by the modeling team and presented to the pre-emptive review committee.
The decision is made by the modeling team with the assistance of the pre-emptive review committee and concurrence of NNSA/NSO to continue data and model assessment/refinement, recommend additional data collection, or recommend advancing to the CR stage. A recommendation to advance to the CR stage is based on whether the model is considered to be sufficiently reliable for designing a monitoring system and developing effective institutional controls. The decision to advance to the CR stage or to return to Step 1 of the process is then made by NDEP (Step 5).
Minimum stiffness criteria for ring frame stiffeners of space launch vehicles
NASA Astrophysics Data System (ADS)
Friedrich, Linus; Schröder, Kai-Uwe
2016-12-01
Frame stringer-stiffened shell structures show high load-carrying capacity in conjunction with low structural mass and are for this reason frequently used as primary structures of aerospace applications. Due to the great number of design variables, deriving suitable stiffening configurations is a demanding task and needs to be realized using efficient analysis methods. The structural design of ring frame stringer-stiffened shells can be subdivided into two steps: first, the design of a shell section between two ring frames; second, the structural design of the ring frames such that a general instability mode is avoided. For sizing stringer-stiffened shell sections, several methods were recently developed, but existing ring frame sizing methods are mainly based on empirical relations or on smeared models. These methods do not necessarily lead to reliable designs, and in some cases the lightweight design potential of stiffened shell structures can thus not be exploited. In this paper, the explicit physical behaviour of ring frame stiffeners of space launch vehicles at the onset of panel instability is described using mechanical substitute models. Ring frame stiffeners of a stiffened shell structure are sized applying existing methods and the method suggested in this paper. To verify the suggested method and to demonstrate its potential, geometrically non-linear finite element analyses are performed using detailed finite element models.
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
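The contrast drawn above, fixed-step explicit versus adaptive schemes, is easy to reproduce on a toy reservoir ODE. The sketch below compares a fixed daily explicit Euler step with an error-controlled adaptive Heun step; the model and tolerances are illustrative, not those of the study:

```python
import numpy as np

def f(t, S, k=0.9, p=2.0):
    """Toy nonlinear reservoir: constant inflow p, outflow k*S^1.5 (hypothetical)."""
    return p - k * S**1.5

def euler_fixed(S, t_end, dt):
    t = 0.0
    while t < t_end:
        S += dt * f(t, S)
        t += dt
    return S

def heun_adaptive(S, t_end, tol=1e-4):
    t, dt, nfev = 0.0, 0.1, 0
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(t, S); k2 = f(t + dt, S + dt * k1); nfev += 2
        err = 0.5 * dt * abs(k2 - k1)   # Euler vs Heun discrepancy
        if err < tol:                    # accept the step
            S += 0.5 * dt * (k1 + k2)
            t += dt
        # Standard step-size controller with growth/shrink limits.
        dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-12))))
    return S, nfev

print(euler_fixed(0.5, 50.0, dt=1.0))   # fixed "daily" explicit step
print(heun_adaptive(0.5, 50.0))         # adaptive step with error control
```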
Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance
NASA Astrophysics Data System (ADS)
Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola
2013-04-01
Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to compare dominant processes between reality and model and to better understand when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected most of the time for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO). The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominance of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high parameter sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters give an indication of the controlling processes during a given period for the hydrological catchment. The second step included the temporal analysis of model performance. For each time step, model performance was characterized with a "fingerprint" consisting of a large set of performance measures. These fingerprints were clustered into four reoccurring patterns of typical model performance, which can be related to different phases of the hydrograph. Overall, the baseflow cluster has the lowest performance. By combining the periods of poor model performance with the dominant model components during these phases, the groundwater module was detected as the model part with the highest potential for model improvements. The detection of dominant processes in periods of poor model performance enhances the understanding of the SWAT model. Based on this, concepts for improving the SWAT model structure for application in German lowland catchments are derived.
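A temporally resolved FAST analysis can be sketched by analyzing the model output one time step at a time. The sketch below uses the SALib package (assuming its eFAST implementation) with a stand-in function in place of SWAT; the parameter bounds and the toy model are placeholders:

```python
import numpy as np
from SALib.sample import fast_sampler
from SALib.analyze import fast

problem = {
    "num_vars": 3,
    "names": ["GW_DELAY", "ALPHA_BF", "ESCO"],          # illustrative subset
    "bounds": [[0.0, 500.0], [0.0, 1.0], [0.0, 1.0]],   # placeholder ranges
}

X = fast_sampler.sample(problem, 1000)

def toy_model(params, t):
    """Stand-in for a SWAT run: a discharge-like value at day t whose
    parameter dependence shifts over time (purely hypothetical)."""
    gw, ab, esco = params
    return np.exp(-ab * t) * gw / 500.0 + (1 - np.exp(-0.1 * t)) * esco

# Analyze one time step at a time to obtain temporal sensitivity dynamics.
for t in (1, 30, 180):
    Y = np.array([toy_model(p, t) for p in X])
    Si = fast.analyze(problem, Y)
    print(t, dict(zip(problem["names"], np.round(Si["S1"], 2))))
```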
The Use of an Eight-Step Instructional Model to Train School Staff in Partner-Augmented Input
ERIC Educational Resources Information Center
Senner, Jill E.; Baud, Matthew R.
2017-01-01
An eight-step instruction model was used to train a self-contained classroom teacher, speech-language pathologist, and two instructional assistants in partner-augmented input, a modeling strategy for teaching augmentative and alternative communication use. With the exception of a 2-hr training session, instruction primarily was conducted during…
An index-based robust decision making framework for watershed management in a changing climate.
Kim, Yeonjoo; Chung, Eun-Sung
2014-03-01
This study developed an index-based robust decision making framework for watershed management dealing with water quantity and quality issues in a changing climate. It consists of two parts: management alternative development and analysis. The first part, alternative development, consists of six steps: 1) to understand the watershed components and processes using the HSPF model, 2) to identify the spatial vulnerability ranking using two indices: potential streamflow depletion (PSD) and potential water quality deterioration (PWQD), 3) to quantify the residents' preferences on water management demands and calculate the watershed evaluation index, which is the weighted combination of PSD and PWQD, 4) to set the quantitative targets for water quantity and quality, 5) to develop a list of feasible alternatives and 6) to eliminate the unacceptable alternatives. The second part, alternative analysis, has three steps: 7) to analyze all selected alternatives with a hydrologic simulation model considering various climate change scenarios, 8) to quantify the alternative evaluation index, including social and hydrologic criteria, utilizing multi-criteria decision analysis methods and 9) to prioritize all options based on a minimax regret strategy for robust decision making. This framework considers the uncertainty inherent in climate models and climate change scenarios by utilizing the minimax regret strategy, a decision making strategy under deep uncertainty, and thus the procedure derives a robust prioritization based on the multiple utilities of alternatives from various scenarios. In this study, the proposed procedure was applied to a Korean urban watershed which has suffered from streamflow depletion and water quality deterioration. Our application shows that the framework provides a useful watershed management tool for incorporating quantitative and qualitative information into the evaluation of various policies with regard to water resource planning and management. Copyright © 2013 Elsevier B.V. All rights reserved.
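Step 9's minimax-regret prioritization reduces to a small matrix computation once each alternative has been scored under each scenario; the index values below are invented for illustration:

```python
import numpy as np

# Rows: management alternatives; columns: climate-change scenarios.
# Entries: alternative evaluation index (higher is better), made-up values.
utility = np.array([[0.62, 0.55, 0.40],
                    [0.58, 0.57, 0.56],
                    [0.70, 0.40, 0.35]])

# Regret of alternative a under scenario s: best achievable in s minus u(a, s).
regret = utility.max(axis=0) - utility

# Minimax regret: pick the alternative whose worst-case regret is smallest.
worst = regret.max(axis=1)
ranking = np.argsort(worst)
print(regret, worst, ranking)   # the balanced alternative (index 1) wins here
```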
Amano, Ken-Ichi; Yoshidome, Takashi; Iwaki, Mitsuhiro; Suzuki, Makoto; Kinoshita, Masahiro
2010-07-28
We report new progress in elucidating the mechanism of the unidirectional movement of a linear-motor protein (e.g., myosin) along a filament (e.g., F-actin). The basic concept emphasized here is that a potential field is entropically formed for the protein on the filament immersed in solvent due to the effect of the translational displacement of solvent molecules. The entropic potential field is strongly dependent on geometric features of the protein and the filament, their overall shapes as well as details of the polyatomic structures. The features and the corresponding field are judiciously adjusted by the binding of adenosine triphosphate (ATP) to the protein, hydrolysis of ATP into adenosine diphosphate (ADP)+Pi, and release of Pi and ADP. As the first step, we propose the following physical picture: The potential field formed along the filament for the protein without the binding of ATP or ADP+Pi to it is largely different from that for the protein with the binding, and the directed movement is realized by repeated switches from one of the fields to the other. To illustrate the picture, we analyze the spatial distribution of the entropic potential between a large solute and a large body using the three-dimensional integral equation theory. The solute is modeled as a large hard sphere. Two model filaments are considered as the body: model 1 is a set of one-dimensionally connected large hard spheres and model 2 is a double helical structure formed by two sets of connected large hard spheres. The solute and the filament are immersed in small hard spheres forming the solvent. The major findings are as follows. The solute is strongly confined within a narrow space in contact with the filament. Within the space there are locations with sharply deep local potential minima along the filament, and the distance between two adjacent locations is equal to the diameter of the large spheres constituting the filament. The potential minima form a ringlike domain in model 1 while they form a pointlike one in model 2. We then examine the effects of geometric features of the solute on the amplitudes and asymmetry of the entropic potential field acting on the solute along the filament. A large aspherical solute with a cleft near the solute-filament interface, which mimics the myosin motor domain, is considered in the examination. Thus, the two fields in our physical picture described above are qualitatively reproduced. The factors to be taken into account in further studies are also discussed.
A technique for evaluating black-footed ferret habitat
Biggins, Dean E.; Miller, Brian J.; Hanebury, Louis R.; Oakleaf, Bob; Farmer, Adrian H.; Crete, Ron; Dood, Arnold
1993-01-01
In this paper, we provide a model and step-by-step procedures for rating a prairie dog (Cynomys sp.) complex for the reintroduction of black-footed ferrets (Mustela nigripes). An important factor in the model is an estimate of the number of black-footed ferret families a prairie dog complex can support for a year; thus, the procedures prescribe how to estimate the size of a prairie dog complex and the density of prairie dogs. Other attributes of the model are qualitative: arrangement of colonies, potential for plague and canine distemper, potential for prairie dog expansion, abundance of predators, future resource conflicts and ownership stability, and public and landowner attitudes about prairie dogs and black-footed ferrets. Because of the qualitative attributes in the model, a team approach is recommended for ranking complexes of prairie dogs for black-footed ferret reintroduction.
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. Impact analysis of the key factors shows that an increase in the external or internal coefficient has a negative influence on the sampling level, that the changing rate of the potential market has no significant influence, and that repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a whole-picture analysis of the interaction of all parameters, which supports estimating the impact of the relevant parameters when their values are inaccurate and constructing a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
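The "external" and "internal" coefficients discussed above are the innovation and imitation coefficients of a Bass-type diffusion. Purely as an illustration of how those quantities interact with a dynamic potential market and free samples, here is a hedged Python sketch of such a diffusion; it does not reproduce the paper's two-stage optimization, and all parameter values are invented.

```python
import numpy as np

def bass_diffusion(p, q, m0, growth, T, sampled0=0.0):
    """Cumulative adopters under a Bass-type diffusion with a potential
    market m(t) that changes at rate `growth`; `sampled0` seeds initial
    adopters (e.g., free-sample recipients). All names are illustrative."""
    N = sampled0
    path = []
    for t in range(T):
        m = m0 * (1.0 + growth) ** t          # dynamic potential market
        dN = (p + q * N / m) * (m - N)        # external + internal influence
        N += dN
        path.append(N)
    return np.array(path)

# Example: a larger internal (imitation) coefficient q speeds early diffusion,
# which the paper finds lowers the optimal sampling level.
print(bass_diffusion(p=0.03, q=0.38, m0=1e5, growth=0.01, T=10, sampled0=500.0)[-1])
```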
Range image segmentation using Zernike moment-based generalized edge detector
NASA Technical Reports Server (NTRS)
Ghosal, S.; Mehrotra, R.
1992-01-01
The authors proposed a novel Zernike moment-based generalized step edge detection method that can be used for segmenting range and intensity images. A generalized step edge detector is developed to identify different kinds of edges in range images. These edge maps are thinned and linked to provide the final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, one step jump at the location of the edge, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all the parameters of the edge model. Theoretical noise analysis is performed to show that these operators are quite noise tolerant. Experimental results are included to demonstrate the edge-based segmentation technique.
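For readers unfamiliar with the machinery, the sketch below numerically evaluates a single Zernike moment of a square image patch over the unit disk, using the standard radial-polynomial definition; it is a generic implementation, not the paper's specific four-mask edge estimator. In Zernike-based edge detection, the phase of the order (1, 1) moment is commonly used to recover the edge orientation.

```python
import numpy as np
from math import factorial

def zernike_moment(patch, n, m):
    """Numerically integrate the (n, m) Zernike moment of a square patch
    over the unit disk (requires n - |m| even, |m| <= n). Generic sketch."""
    k = patch.shape[0]
    ys, xs = np.mgrid[0:k, 0:k]
    x = 2.0 * xs / (k - 1) - 1.0           # map pixel grid to [-1, 1]
    y = 2.0 * ys / (k - 1) - 1.0
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    inside = rho <= 1.0
    # Radial polynomial R_nm(rho).
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    V_conj = R * np.exp(-1j * m * theta)   # conjugate Zernike basis function
    pixel_area = 4.0 / k ** 2              # pixel area in normalized coordinates
    return (n + 1) / np.pi * np.sum(patch[inside] * V_conj[inside]) * pixel_area
```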
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012)] and phantom [J. Biomed. Opt. 19, 077002 (2014)] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contains a novel initial estimation step to obtain an initial guess, which is used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table is used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the subsequent fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
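A minimal sketch of the two-step idea, assuming a precomputed lookup table indexed by two parameters (an absorption coefficient and the top-layer thickness here, purely for illustration): step one grid-searches the table for an initial guess, and step two refines it with an iterative least-squares fit.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import least_squares

# Hypothetical lookup table of reflectance spectra on a (mu_a, thickness) grid;
# the random values below are placeholders for precomputed forward-model spectra.
mua_grid = np.linspace(0.01, 1.0, 50)      # absorption, 1/mm (illustrative)
thick_grid = np.linspace(0.05, 0.5, 40)    # top-layer thickness, mm (illustrative)
lut = np.random.rand(50, 40, 128)          # 128-wavelength spectra
interp = RegularGridInterpolator((mua_grid, thick_grid), lut)

def model_spectrum(params):
    return interp(np.atleast_2d(params))[0]

def two_step_estimate(measured):
    # Step 1: coarse grid search over the lookup table gives the initial guess.
    errs = ((lut - measured) ** 2).sum(axis=-1)
    i, j = np.unravel_index(errs.argmin(), errs.shape)
    x0 = [mua_grid[i], thick_grid[j]]
    # Step 2: iterative fit, refined from that initial guess.
    fit = least_squares(lambda p: model_spectrum(p) - measured, x0,
                        bounds=([mua_grid[0], thick_grid[0]],
                                [mua_grid[-1], thick_grid[-1]]))
    return fit.x
```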
Recent developments in the hammerhead ribozyme field.
Vaish, N K; Kore, A R; Eckstein, F
1998-01-01
Developments in the hammerhead ribozyme field during the last two years are reviewed here. New results on the specificity of this ribozyme, the mechanism of its action, and the question of metal ion involvement in the cleavage reaction are discussed. To demonstrate the potential of ribozyme technology, examples of the application of this ribozyme for the inhibition of gene expression in cell culture, in animals, and in plant models are presented. Particular emphasis is given to critical steps in the approach, including RNA site selection, delivery, vector development and cassette construction. PMID:9826743
A preliminary evaluation of an F100 engine parameter estimation process using flight data
NASA Technical Reports Server (NTRS)
Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.
1990-01-01
The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the compact engine model (CEM). In this step, the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion control law development.
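The first step treats the five deterioration parameters as slowly varying states of a Kalman filter. Below is a minimal, generic predict/update cycle for such a random-walk parameter model, in Python; the matrices are placeholders, not the actual F100 filter.

```python
import numpy as np

def kalman_step(x, P, z, H, R, Q):
    """One predict/update cycle for a random-walk parameter model: the
    deterioration parameters are assumed constant between measurements.
    A generic sketch, not the actual F100 filter."""
    # Predict: parameters persist; uncertainty grows by process noise Q.
    P = P + Q
    # Update with measurement z = H x + noise (covariance R).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```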
Choi, Woo June; Pepple, Kathryn L; Wang, Ruikang K
2018-05-24
In preclinical vision research, cell grading in small animal models is essential for the quantitative evaluation of intraocular inflammation. Here, we present a new and practical optical coherence tomography (OCT) image analysis method for the automated detection and counting of aqueous cells in the anterior chamber (AC) of a rodent model of uveitis. Anterior segment OCT (AS-OCT) images are acquired with a 100 kHz swept-source OCT (SS-OCT) system. The proposed method consists of two steps. In the first step, we despeckle and binarize each OCT image. After removing AS structures in the binary image, we then apply area thresholding to isolate cell-like objects. Potential cell candidates are selected based on their best fit to roundness. The second step performs the cell counting within the whole AC, in which additional cell tracking analysis is conducted on the successive OCT images to eliminate redundancy in cell counting. Finally, 3-D cell grading using the proposed method is demonstrated in longitudinal OCT imaging of a mouse model of anterior uveitis in vivo. [Figure: rendering of the anterior segment (orange) of the mouse eye with automatically counted anterior chamber cells (green); the inset is a top view showing the cell distribution across the anterior chamber.] This article is protected by copyright. All rights reserved.
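A compact sketch of step one, under assumed thresholds (all values illustrative): despeckle with a median filter, binarize, apply area thresholding, and keep objects that are sufficiently round.

```python
import numpy as np
from scipy import ndimage

def count_cell_candidates(oct_image, area_range=(4, 60), roundness_min=0.7):
    """Despeckle, binarize, area-threshold, and keep round objects as cell
    candidates. Thresholds are illustrative, not the paper's values."""
    smoothed = ndimage.median_filter(oct_image, size=3)           # despeckle
    binary = smoothed > smoothed.mean() + 2.0 * smoothed.std()    # binarize
    labels, n = ndimage.label(binary)
    candidates = 0
    for idx in range(1, n + 1):
        blob = labels == idx
        area = blob.sum()
        if not (area_range[0] <= area <= area_range[1]):
            continue              # too small (noise) or too large (AS structure)
        boundary = area - ndimage.binary_erosion(blob).sum()      # crude perimeter
        roundness = 4.0 * np.pi * area / max(boundary, 1) ** 2    # ~1 for a disk
        if roundness >= roundness_min:
            candidates += 1
    return candidates
```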
Using a contextualized sensemaking model for interaction design: A case study of tumor contouring.
Aselmaa, Anet; van Herk, Marcel; Laprie, Anne; Nestle, Ursula; Götz, Irina; Wiedenmann, Nicole; Schimek-Jasch, Tanja; Picaud, Francois; Syrykh, Charlotte; Cagetti, Leonel V; Jolnerovski, Maria; Song, Yu; Goossens, Richard H M
2017-01-01
Sensemaking theories help designers understand the cognitive processes of a user when he/she performs a complicated task. This paper introduces a two-step approach for incorporating sensemaking support within the design of health information systems by: (1) modeling the sensemaking process of physicians while performing a task, and (2) identifying software interaction design requirements that support sensemaking based on this model. The two-step approach is presented through a case study of the tumor contouring clinical task for radiotherapy planning. In the first step of the approach, a contextualized sensemaking model was developed to describe the sensemaking process based on the goal, the workflow, and the context of the task. In the second step, based on a research software prototype, an experiment was conducted in which three contouring tasks were performed by each of eight physicians. Four types of navigation interactions and five types of interaction sequence patterns were identified by analyzing the interaction log data gathered from those twenty-four cases. Further in-depth study of each navigation interaction and interaction sequence pattern in relation to the contextualized sensemaking model revealed five main areas of design improvement for increased sensemaking support. Outcomes of the case study indicate that the proposed two-step approach was beneficial for gaining a deeper understanding of the sensemaking process during the task, as well as for identifying design requirements for better sensemaking support. Copyright © 2016. Published by Elsevier Inc.
Wentzel, Jobke; Sanderman, Robbert; van Gemert-Pijnen, Lisette
2015-01-01
Background It is acknowledged that the success and uptake of eHealth improve with the involvement of users and stakeholders, so that the technology reflects their needs. Involving stakeholders in implementation research is thus a crucial element in developing eHealth technology. Business modeling is an approach to guide implementation research for eHealth. Stakeholders are involved in business modeling by identifying relevant stakeholders, conducting value co-creation dialogs, and co-creating a business model. Because implementation activities are often underestimated as a crucial step in developing eHealth, comprehensive and applicable approaches geared toward business modeling in eHealth are scarce. Objective This paper demonstrates the potential of several stakeholder-oriented analysis methods and their practical application, using Infectionmanager as an example case. We aim to demonstrate how business modeling, with a focus on stakeholder involvement, is used to co-create an eHealth implementation. Methods We divided business modeling into 4 main research steps. For stakeholder identification, we performed literature scans, expert recommendations, and snowball sampling (Step 1). For stakeholder analysis, we performed "basic stakeholder analysis," stakeholder salience, and ranking/analytic hierarchy process (Step 2). For value co-creation dialogs, we performed a process analysis and stakeholder interviews based on the business model canvas (Step 3). Finally, for business model generation, we combined all findings into the business model canvas (Step 4). Results Based on the applied methods, we synthesized a step-by-step guide for business modeling with stakeholder-oriented analysis methods that we consider suitable for implementing eHealth. Conclusions The step-by-step guide for business modeling with stakeholder involvement enables eHealth researchers to apply a systematic and multidisciplinary co-creative approach to implementing eHealth. Business modeling becomes an active part of the entire development process of eHealth and establishes an early focus on implementation, in which stakeholders help to co-create the basis necessary for the success and uptake of the eHealth technology. PMID:26272510
Protein simulation using coarse-grained two-bead multipole force field with polarizable water models
NASA Astrophysics Data System (ADS)
Li, Min; Zhang, John Z. H.
2017-02-01
A recently developed two-bead multipole force field (TMFF) is employed in coarse-grained (CG) molecular dynamics (MD) simulation of proteins in combination with polarizable CG water models, the Martini polarizable water model, and the modified big multipole water model. Significant improvement in the simulated structures and dynamics of proteins is observed, in terms of both the root-mean-square deviations (RMSDs) of the structures and the residue root-mean-square fluctuations (RMSFs) relative to the native ones, compared with simulations using Martini's non-polarizable water model. Our results show that TMFF simulation using CG water models gives much more stable secondary structures of proteins without the need to add extra interaction potentials to constrain the secondary structures. Our results also show that when the MD time step is increased from 2 fs to 6 fs, the RMSD and RMSF results remain in excellent agreement with those from all-atom simulations. The current study demonstrates clearly that the application of TMFF together with a polarizable CG water model significantly improves the accuracy and efficiency of CG simulation of proteins.
Construction of Educational Theory Models.
ERIC Educational Resources Information Center
Maccia, Elizabeth S.; and others
This study delineated models which have potential use in generating educational theory. A theory models method was formulated. By selecting and ordering concepts from other disciplines, the investigators formulated seven theory models. The final step of devising educational theory from the theory models was performed only to the extent required to…
Free energy landscape of protein-like chains with discontinuous potentials
NASA Astrophysics Data System (ADS)
Movahed, Hanif Bayat; van Zon, Ramses; Schofield, Jeremy
2012-06-01
In this article the configurational space of two simple protein models consisting of polymers composed of a periodic sequence of four different kinds of monomers is studied as a function of temperature. In the protein models, hydrogen bond interactions, electrostatic repulsion, and covalent bond vibrations are modeled by discontinuous step, shoulder, and square-well potentials, respectively. The protein-like chains exhibit a secondary alpha-helix structure in their folded states at low temperatures, and allow a natural definition of a configuration by considering which beads are bonded. Free energies and entropies of configurations are computed using the parallel tempering method in combination with hybrid Monte Carlo sampling of the canonical ensemble of the discontinuous potential system. The probability of observing the most common configuration is used to analyze the nature of the free energy landscape, and it is found that the model with the least number of possible bonds exhibits a funnel-like free energy landscape at low enough temperature for chains with fewer than 30 beads. For longer proteins, the free energy landscape consists of several minima, where the configuration with the lowest free energy changes significantly by lowering the temperature and the probability of observing the most common configuration never approaches one due to the degeneracy of the lowest accessible potential energy.
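The three discontinuous potentials are simple to write down. The sketch below follows the mapping stated in the abstract (step for hydrogen bonds, shoulder for electrostatic repulsion, square well for covalent bond vibration); all widths and depths are illustrative values.

```python
import numpy as np

def step_potential(r, sigma=1.0, lam=1.2, eps=1.0):
    """Attractive step (hydrogen-bond-like): hard core below sigma, a well of
    depth eps out to lam*sigma, zero beyond. Parameters are illustrative."""
    return np.where(r < sigma, np.inf, np.where(r < lam * sigma, -eps, 0.0))

def shoulder_potential(r, sigma=1.0, lam=1.2, eps=1.0):
    """Repulsive shoulder (screened electrostatic repulsion): a finite step of
    height +eps between sigma and lam*sigma."""
    return np.where(r < sigma, np.inf, np.where(r < lam * sigma, eps, 0.0))

def square_well_bond(r, rmin=0.9, rmax=1.1):
    """Square-well 'vibration' for a covalent bond: the bond length moves
    freely inside [rmin, rmax] and is confined by infinite walls outside."""
    return np.where((r < rmin) | (r > rmax), np.inf, 0.0)
```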
Modeling behavior dynamics using computational psychometrics within virtual worlds.
Cipresso, Pietro
2015-01-01
In case of fire in a building, how will people behave in the crowd? The behavior of each individual affects the behavior of others and, conversely, each one behaves considering the crowd as a whole and the individual others. In this article, I propose a three-step method to explore a brand new way to study behavior dynamics. The first step relies on the creation of specific situations with standard techniques (such as mental imagery, text, video, and audio) and an advanced technique [Virtual Reality (VR)] to manipulate experimental settings. The second step concerns the measurement of behavior in one, two, or many individuals, focusing on parameter extraction to provide information about the behavior dynamics. Finally, the third step uses the parameters collected and measured in the previous two steps to simulate possible scenarios and, through computational models, to forecast, understand, and explain behavior dynamics at the social level. An experimental study is also included to demonstrate the three-step method and a possible scenario.
NASA Technical Reports Server (NTRS)
Ko, Sung Ho
1993-01-01
Separation and reattachment of turbulent shear layers are observed in many important engineering applications, yet they are poorly understood. This has motivated many studies on understanding and predicting the processes of separation and reattachment of turbulent shear layers. Both situations, in which separation is induced by an adverse pressure gradient or by discontinuities of geometry, have attracted the attention of turbulence model developers. Formulating turbulence closure models that accurately describe the essential features of separated turbulent flows is still a formidable task. Computations of separated flows associated with sharp-edged bluff bodies are described. For the past two decades, the backward-facing step flow, the simplest separated flow, has been a popular test case for turbulence models. Detailed studies on the performance of many turbulence models, including two-equation turbulence models and Reynolds stress models, for flows over steps can be found in the papers by Thangam & Speziale and Lasher & Taulbee. These studies indicate that almost all existing turbulence models fail to accurately predict many important features of backward-facing step flow, such as the reattachment length, the recovery rate of the redeveloping boundary layer downstream of the reattachment point, the streamlines near the reattachment point, and the skin friction coefficient. The main objectives are to calculate flows over backward- and forward-facing steps using the NRSM and to make use of the newest DNS data for detailed comparison. This will give insights for possible improvements of the turbulence model.
Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology
NASA Astrophysics Data System (ADS)
Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang
2018-03-01
In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models whether or not a forcing term is considered. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including single-phase and two-phase layered power-law fluid flows between two parallel plates, and droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.
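The power-law rheology enters a lattice Boltzmann scheme through a locally varying relaxation time. A minimal sketch of that coupling in lattice units, assuming the usual BGK relation nu = cs^2 (tau - 1/2) and illustrative values of the consistency K and index n:

```python
import numpy as np

def powerlaw_relaxation_time(shear_rate, K=0.01, n=0.8, cs2=1.0 / 3.0):
    """Local BGK relaxation time for a power-law fluid in lattice units:
    apparent viscosity nu = K * |shear_rate|**(n - 1), with nu = cs2*(tau - 0.5).
    K and n are illustrative; flooring |shear_rate| avoids the shear-thinning
    singularity at zero shear."""
    gamma = np.maximum(np.abs(shear_rate), 1e-10)
    nu = K * gamma ** (n - 1.0)
    return nu / cs2 + 0.5
```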
A novel approach to model the transient behavior of solid-oxide fuel cell stacks
NASA Astrophysics Data System (ADS)
Menon, Vikram; Janardhanan, Vinod M.; Tischer, Steffen; Deutschmann, Olaf
2012-09-01
This paper presents a novel approach to model the transient behavior of solid-oxide fuel cell (SOFC) stacks in two and three dimensions. A hierarchical model is developed by decoupling the temperature of the solid phase from the fluid phase. The solution of the temperature field is considered as an elliptic problem, while each channel within the stack is modeled as a marching problem. This paper presents the numerical model and cluster algorithm for coupling between the solid phase and fluid phase. For demonstration purposes, results are presented for a stack operated on pre-reformed hydrocarbon fuel. Transient response to load changes is studied by introducing step changes in cell potential and current. Furthermore, the effect of boundary conditions and stack materials on response time and internal temperature distribution is investigated.
Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.
Fushing, Hsieh; McAssey, Michael P; Beisner, Brianne; McCowan, Brenda
2011-03-15
We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: Is the ranking network for a rhesus macaque society more like a kingdom or a corporation? Our computations are based on a three-step approach. These steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance as a necessary constraint on the ranking relations among all individual macaques, and the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials among all network members, which requires accommodation of heterogeneous measurement error inherent in behavioral data. Our second step estimates the social rank for all individuals by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features in the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. Also, for validation purposes, we reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.
NASA Astrophysics Data System (ADS)
Pandey, Praveen K.; Sharma, Kriti; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2003-11-01
CdTe quantum dots embedded in a glass matrix are grown using a two-step annealing method. The results of the optical transmission characterization are analysed and compared with the results obtained from CdTe quantum dots grown using the conventional single-step annealing method. A theoretical model for the absorption spectra is used to quantitatively estimate the size dispersion in the two cases. In the present work, it is established that the quantum dots grown using the two-step annealing method have stronger quantum confinement, reduced size dispersion and higher volume ratio as compared to the single-step annealed samples.
Constrained motion model of mobile robots and its applications.
Zhang, Fei; Xi, Yugeng; Lin, Zongli; Chen, Weidong
2009-06-01
Target detecting and dynamic coverage are fundamental tasks in mobile robotics and represent two important features of mobile robots: mobility and perceptivity. This paper establishes the constrained motion model and sensor model of a mobile robot to represent these two features and defines the k-step reachable region to describe the states that the robot may reach. We show that the calculation of the k-step reachable region can be reduced from that of 2^k reachable regions with fixed motion styles to k + 1 such regions and provide an algorithm for its calculation. Based on the constrained motion model and the k-step reachable region, the problems associated with target detecting and dynamic coverage are formulated and solved. For target detecting, the k-step detectable region is used to describe the area that the robot may detect, and an algorithm for detecting a target and planning the optimal path is proposed. For dynamic coverage, the k-step detected region is used to represent the area that the robot has detected during its motion, and the dynamic-coverage strategy and algorithm are proposed. Simulation results demonstrate the efficiency of the coverage algorithm in both convex and concave environments.
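A toy version of the k-step reachable region on a grid clarifies the definition: starting from the current state, propagate the motion constraints k times and collect every state visited. The move set and workspace below are illustrative, not the paper's constrained motion model.

```python
def k_step_reachable(start, moves, k, valid):
    """All states the robot may occupy within k motion steps, given a move
    set (the motion constraints) and a workspace predicate `valid`."""
    frontier, reachable = {start}, {start}
    for _ in range(k):
        frontier = {(x + dx, y + dy)
                    for (x, y) in frontier
                    for (dx, dy) in moves
                    if valid((x + dx, y + dy))}
        reachable |= frontier
    return reachable

# Example: 4-connected motion inside a 20 x 20 workspace.
region = k_step_reachable((10, 10), [(1, 0), (-1, 0), (0, 1), (0, -1)], 3,
                          lambda s: 0 <= s[0] < 20 and 0 <= s[1] < 20)
print(len(region))
```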
Self-consistent-field study of conduction through conjugated molecules
NASA Astrophysics Data System (ADS)
Paulsson, Magnus; Stafström, Sven
2001-07-01
Current-voltage (I-V) characteristics of individual molecules connected by metallic leads are studied theoretically. Using the Pariser-Parr-Pople quantum chemical method to model the molecule enables us to include electron-electron interactions in the Hartree approximation. The self-consistent-field method is used to calculate charging together with other properties for the total system under bias. Thereafter the Landauer formula is used to calculate the current from the transmission amplitudes. The most important parameter to understand charging is the position of the chemical potentials of the leads in relation to the molecular levels. At finite bias, the main part of the potential drop is located at the molecule-lead junctions. Also, the potential of the molecule is shown to partially follow the chemical potential closest to the highest occupied molecular orbital (HOMO). Therefore, the resonant tunneling steps in the I-V curves are smoothed giving a I-V resembling a ``Coulomb-gap.'' However, the charge of the molecule is not quantized since the molecule is small with quite strong interactions with the leads. The calculations predict an increase in the current at the bias corresponding to the energy gap of the molecule irrespective of the metals used in the leads. When the bias is increased further, charge is redistributed from the HOMO level to the lowest unoccupied molecular orbital of the molecule. This gives a step in the I-V curves and a corresponding change in the potential profile over the molecule. Calculations were mainly performed on polyene molecules. Molecules asymmetrically coupled to the leads model the I-V curves for molecules contacted by a scanning tunneling microscopy tip. I-V curves for pentapyrrole and another molecule that show negative differential conductance are also analyzed. The charging of these two systems depends on the shape of the molecular wave functions.
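The final current calculation follows the Landauer formula, I proportional to the integral of T(E) [f_L(E) - f_R(E)] dE. A hedged numerical sketch, with a toy Lorentzian transmission standing in for the self-consistently computed one (energies in eV; physical prefactors folded into arbitrary units):

```python
import numpy as np

def landauer_current(transmission, mu_left, mu_right, kT=0.025):
    """Landauer current ~ integral T(E) [f_L(E) - f_R(E)] dE, in arbitrary
    units. `transmission` is a callable T(E); energies and potentials in eV."""
    E = np.linspace(min(mu_left, mu_right) - 10 * kT,
                    max(mu_left, mu_right) + 10 * kT, 2000)
    f = lambda mu: 1.0 / (np.exp((E - mu) / kT) + 1.0)   # Fermi function
    integrand = transmission(E) * (f(mu_left) - f(mu_right))
    return np.trapz(integrand, E)

# Example: a single Lorentzian resonance (a broadened molecular level).
T = lambda E: 0.01 / ((E - 0.5) ** 2 + 0.01)
print(landauer_current(T, mu_left=0.3, mu_right=-0.3))
```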
Initial Crisis Reaction and Poliheuristic Theory
ERIC Educational Resources Information Center
DeRouen, Karl, Jr.; Sprecher, Christopher
2004-01-01
Poliheuristic (PH) theory models foreign policy decisions using a two-stage process. The first step eliminates alternatives on the basis of a simplifying heuristic. The second step involves a selection from among the remaining alternatives and can employ a more rational and compensatory means of processing information. The PH model posits that…
ERIC Educational Resources Information Center
Kohnke, Lucas
2011-01-01
This article describes two lesson plans based on the theme "Your Country," developed using Gilly Salmon's five-step model for creating e-tivities. The lesson plan model contains five steps: (1) access and motivation, where learners will gain experience in using technology, relevant and authentic tasks which will provide explicit…
Nordgreen, Tine; Haug, Thomas; Öst, Lars-Göran; Andersson, Gerhard; Carlbring, Per; Kvale, Gerd; Tangen, Tone; Heiervang, Einar; Havik, Odd E
2016-03-01
The aim of this study was to assess the effectiveness of a cognitive behavioral therapy (CBT) stepped care model (psychoeducation, guided Internet treatment, and face-to-face CBT) compared with direct face-to-face (FtF) CBT. Patients with panic disorder or social anxiety disorder were randomized to either stepped care (n=85) or direct FtF CBT (n=88). Recovery was defined as meeting two of the following three criteria: loss of diagnosis, below cut-off for self-reported symptoms, and functional improvement. No significant differences in intention-to-treat recovery rates were identified between stepped care (40.0%) and direct FtF CBT (43.2%). The majority of the patients who recovered in the stepped care did so at the less therapist-demanding steps (26/34, 76.5%). Moderate to large within-groups effect sizes were identified at posttreatment and 1-year follow-up. The attrition rates were high: 41.2% in the stepped care condition and 27.3% in the direct FtF CBT condition. These findings indicate that the outcome of a stepped care model for anxiety disorders is comparable to that of direct FtF CBT. The rates of improvement at the two less therapist-demanding steps indicate that stepped care models might be useful for increasing patients' access to evidence-based psychological treatments for anxiety disorders. However, attrition in the stepped care condition was high, and research regarding the factors that can improve adherence should be prioritized. Copyright © 2015. Published by Elsevier Ltd.
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and Bigelow-type and empirical models for the b′(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, taking d = 5 (t_5) as the criterion of a 5 log10 (5D) reduction; the desired 5D reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
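As an illustration of the primary-model fit, the sketch below fits the Weibull (Mafart-type) expression log10(N/N0) = -b*t^n to invented survival data and recovers the time for a 5-log10 reduction; the data points and starting values are not from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, b, n):
    """Weibull (Mafart-type) model: log10(N/N0) = -b * t**n.
    n < 1 produces the tailing (upward concavity) reported above."""
    return -b * t ** n

# Illustrative survival data: time (min) vs. log10 reduction at one pressure.
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
logS = np.array([0.0, -1.8, -2.6, -3.6, -4.2, -4.6])

(b, n), _ = curve_fit(weibull_log_survival, t, logS, p0=[1.0, 0.5])
t5 = (5.0 / b) ** (1.0 / n)   # time for a 5-log10 (5D) reduction, cf. t_d above
print(b, n, t5)
```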
Regularized wave equation migration for imaging and data reconstruction
NASA Astrophysics Data System (ADS)
Kaplan, Sam T.
The reflection seismic experiment results in a measurement (reflection seismic data) of the seismic wavefield. The linear Born approximation to the seismic wavefield leads to a forward modelling operator that we use to approximate reflection seismic data in terms of a scattering potential. We consider approximations to the scattering potential using two methods: the adjoint of the forward modelling operator (migration), and regularized numerical inversion using the forward and adjoint operators. We implement two parameterizations of the forward modelling and migration operators: source-receiver and shot-profile. For both parameterizations, we find the requisite Green's function using the split-step approximation. We first develop the forward modelling operator, and then find the adjoint (migration) operator by recognizing a Fredholm integral equation of the first kind. The resulting numerical system is generally under-determined, requiring prior information to find a solution. In source-receiver migration, the parameterization of the scattering potential is understood using the migration imaging condition, and this encourages us to apply sparse prior models to the scattering potential. To that end, we use both a Cauchy prior and a mixed Cauchy-Gaussian prior, finding better resolved estimates of the scattering potential than are given by the adjoint. In shot-profile migration, the parameterization of the scattering potential has its redundancy in multiple active energy sources (i.e. shots). We find that a smallest-model regularized inverse representation of the scattering potential gives a more resolved picture of the earth, as compared to the simpler adjoint representation. The shot-profile parameterization allows us to introduce a joint inversion to further improve the estimate of the scattering potential. Moreover, it allows us to introduce a novel data reconstruction algorithm so that limited data can be interpolated/extrapolated. The linearized operators are expensive, encouraging their parallel implementation; for the source-receiver parameterization of the scattering potential this parallelization is non-trivial. Seismic data are typically corrupted by various types of noise. Sparse coding, a method that stems from information theory, can be used to suppress noise prior to migration; we apply it to noise suppression in seismic data.
Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover
NASA Technical Reports Server (NTRS)
Dangelo, K. R.
1974-01-01
A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least-squares approximation is used to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included, which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process, which includes the acquisition of data points, the two-step modeling process, and the error analysis. Finally, to illustrate the procedure, a numerical example is included.
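A minimal sketch of the underlying idea, fitting a quadric surface to height and gradient observations simultaneously by least squares; the basis and variable names are illustrative, not the report's exact parameterization.

```python
import numpy as np

def fit_quadric(x, y, z, zx, zy):
    """Least-squares fit of z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    using height samples (z) and gradient samples (zx, zy) together, in the
    spirit of the two-step procedure above. Basis choice is illustrative."""
    one, zero = np.ones_like(x), np.zeros_like(x)
    A_h = np.column_stack([one, x, y, x**2, x*y, y**2])        # height rows
    A_gx = np.column_stack([zero, one, zero, 2*x, y, zero])    # d/dx rows
    A_gy = np.column_stack([zero, zero, one, zero, x, 2*y])    # d/dy rows
    A = np.vstack([A_h, A_gx, A_gy])
    b = np.concatenate([z, zx, zy])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```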
A Two-Step Approach to Analyze Satisfaction Data
ERIC Educational Resources Information Center
Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.
2011-01-01
In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agostini, Federica; Abedi, Ali; Suzuki, Yasumitsu
The decomposition of electronic and nuclear motion presented in Abedi et al. [Phys. Rev. Lett. 105, 123002 (2010)] yields a time-dependent potential that drives the nuclear motion and fully accounts for the coupling to the electronic subsystem. Here, we show that propagation of an ensemble of independent classical nuclear trajectories on this exact potential yields dynamics that are essentially indistinguishable from the exact quantum dynamics for a model non-adiabatic charge transfer problem. We point out the importance of step and bump features in the exact potential that are critical in obtaining the correct splitting of the quasiclassical nuclear wave packet in space after it passes through an avoided crossing between two Born-Oppenheimer surfaces, and analyze their structure. Finally, an analysis of the exact potentials in the context of trajectory surface hopping is presented, including preliminary investigations of velocity adjustment and the force-induced decoherence effect.
Towards the Paris Agreement - negative emission and what Korea can contribute
NASA Astrophysics Data System (ADS)
Kraxner, Florian; Leduc, Sylvain; Lee, Woo Kyun; Son, Yowhan; Kindermann, Georg; Patrizio, Piera; Mesfun, Sennai; Yowargana, Ping; Mac Dowall, Niall; Yamagata, Yoshiki; Shvidenko, Anatoly; Schepaschenko, Dmitry; Aoki, Kentaro
2017-04-01
Energy from fossil fuels comprises more than 80% of total energy consumption in Korea. While aiming at ambitious renewable energy targets, Korea is also investigating the option of carbon capture and storage (CCS), especially applied to emissions from the conversion of coal to energy. Two CCS pilot plants linked to existing coal plants are in the pipeline: one in Gangwon Province (northeast Korea) and another in Chungnam Province (in the west of Korea). The final target is the capture of one million tons of CO2 per year. The best storage options for CO2 have been identified offshore Korea, with the Ulleung Basin, off the east coast, considered to feature the greatest potential; the Kunsan Basin, off the west coast, is considered another optional site. The objective of this study is to analyze Korea's negative emission potential through BECCS (bioenergy combined with CCS) under the assumption that the two CCS pilot plants were retrofitted for cofiring coal with biomass from sustainable domestic forest management. Various scenarios include, inter alia, additional greenfield plants for BECCS. In a first step, national and global biophysical forest models (e.g. G4M) are applied to estimate sustainable biomass availability. In a second step, the results from these forest models are used as input data to the engineering model BeWhere. This techno-engineering model optimizes the scaling and location of greenfield heat and power plants (CHP) and the related feedstock and CO2 transport logistics. The geographically explicit locations and capacities obtained for forest-based bioenergy plants are then overlaid with a geological suitability map for in-situ carbon storage, which can be further combined with off-shore storage options. From this, a theoretical potential for BECCS in Korea is derived. Results indicate that, given the abundant forest cover in South Korea, there is substantial potential for bioenergy production, which could contribute not only to substituting emissions from fossil fuels but also to meeting the targets of the country's commitments under any climate change mitigation agreement. However, the BECCS potential varies with the assumptions underlying the different scenarios. The largest potentials are identified for a combination of retrofitted coal plants with greenfield bioenergy plants favoring off-shore CO2 storage over on-shore in-situ storage. The technical assessment is used to support a policy discussion on the suitability of BECCS as a mitigation tool in Korea.
NASA Astrophysics Data System (ADS)
Nisar, Ubaid Ahmed; Ashraf, Waqas; Qamar, Shamsul
2016-08-01
Numerical solutions of the hydrodynamical model of semiconductor devices are presented in one and two space dimensions. The model describes charge transport in semiconductor devices. Mathematically, the model can be written as a convection-diffusion type system with a right-hand side describing the relaxation effects and the interaction with a self-consistent electric field. The proposed numerical scheme is a splitting scheme based on the conservation element and solution element (CE/SE) method for the hyperbolic step, and a semi-implicit scheme for the relaxation step. The numerical results of the suggested scheme are compared with those of a splitting scheme based on the Nessyahu-Tadmor (NT) central scheme for the convection step and the same semi-implicit scheme for the relaxation step. The effects of various parameters such as low-field mobility, device length, lattice temperature and voltage on the one-space-dimensional hydrodynamic model are explored to further validate the generic applicability of the CE/SE method for the current model equations. A two-dimensional simulation is also performed by the CE/SE method for a MESFET device, producing results in good agreement with those obtained by the NT central scheme.
Step voltage analysis for the catenoid lightning protection system
NASA Technical Reports Server (NTRS)
Chai, J. C.; Briet, R.; Barker, D. L.; Eley, H. E.
1991-01-01
The main objective of the proposed overhead Catenoid Lightning Protection System (CLPS) is personnel safety. To ensure the safety of working personnel in lightning situations, the potential difference developed across a distance equal to a person's pace (the step voltage) must not exceed a separately established safe voltage, in order to avoid electrocution (ventricular fibrillation) of humans. Therefore, the first stage of the analytical effort is to calculate the open-circuit step voltage. An impedance model is developed for this purpose. It takes into consideration the earth's complex impedance behavior and the transient nature of the lightning phenomenon. In the low-frequency limit, this impedance model is shown to reduce to results similar to those predicted by the conventional resistor model in a DC analysis.
Signatures of two-step impurity mediated vortex lattice melting in Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Dey, Bishwajyoti
2017-04-01
We study impurity-mediated vortex lattice melting in a rotating two-dimensional Bose-Einstein condensate (BEC). Impurities are introduced either through a protocol in which the vortex lattice is produced in an impurity potential, or by first creating the vortex lattice in the absence of random pinning and then cranking up the impurity potential. These two protocols have an obvious relation to the two commonly known protocols for creating a vortex lattice in a type-II superconductor: the zero-field-cooling protocol and the field-cooling protocol, respectively. A time-splitting Crank-Nicolson method has been used to numerically simulate the vortex lattice dynamics. It is shown that the vortex lattice follows a two-step melting via loss of positional and orientational order. This vortex lattice melting process in the BEC closely mimics the recently observed two-step melting of vortex matter in the weakly pinned type-II superconductor Co-intercalated NbSe2. Also, using numerical perturbation analysis, we compare the states obtained in the two protocols and show that the vortex lattice states are metastable and more disordered when impurities are introduced after the formation of an ordered vortex lattice. The author would like to thank SERB, Govt. of India and BCUD-SPPU for financial support through research grants.
Realpe, Alba; Adams, Ann; Wall, Peter; Griffin, Damian; Donovan, Jenny L
2016-08-01
How a randomized controlled trial (RCT) is explained to patients is a key determinant of recruitment to that trial. This study developed and implemented a simple six-step model to fully inform patients and to support them in deciding whether to take part or not. Ninety-two consultations with 60 new patients were recorded and analyzed during a pilot RCT comparing surgical and nonsurgical interventions for hip impingement. Recordings were analyzed using techniques of thematic analysis and focused conversation analysis. Early findings supported the development of a simple six-step model to provide a framework for good recruitment practice. Model steps are as follows: (1) explain the condition, (2) reassure patients about receiving treatment, (3) establish uncertainty, (4) explain the study purpose, (5) give a balanced view of treatments, and (6) Explain study procedures. There are also two elements throughout the consultation: (1) responding to patients' concerns and (2) showing confidence. The pilot study was successful, with 70% (n = 60) of patients approached across nine centers agreeing to take part in the RCT, so that the full-scale trial was funded. The six-step model provides a promising framework for successful recruitment to RCTs. Further testing of the model is now required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Costello, George R; Cummings, Robert L; Sinnette, John T , Jr
1952-01-01
A detailed step-by-step computational outline is presented for the design of two-dimensional cascade blades having a prescribed velocity distribution on the blade in a potential flow of the usual compressible fluid. The outline is based on the assumption that the magnitude of the velocity in the flow of the usual compressible nonviscous fluid is proportional to the magnitude of the velocity in the flow of a compressible nonviscous fluid with linear pressure-volume relation.
Simulation of clustering and anisotropy due to Co step-edge segregation in vapor-deposited CoPt3
NASA Astrophysics Data System (ADS)
Maranville, B. B.; Schuerman, M.; Hellman, F.
2006-03-01
An atomistic mechanism is proposed for the creation of structural anisotropy and the consequent large perpendicular magnetic anisotropy in vapor-deposited films of CoPt3. Energetic considerations of bonding in Co-Pt suggest that Co segregates to step edges due to their low coordination, for all film orientations, while Pt segregates to the two low-index surfaces. Coalescence of islands during growth causes these Co-rich step edges to become flat thin Co platelets in a Pt-rich matrix, giving rise to the experimentally observed magnetic anisotropy. This proposed model is tested with kinetic Monte Carlo simulation of the vapor-deposition growth. A tight-binding, second-moment approximation to the interatomic potential is used to calculate the probability of an atom hopping from one surface site to another, assuming an Arrhenius-like activation model of surface motion. Growth is simulated by allowing many hopping events per adatom. The simulated as-grown films show an asymmetry in Co-Co bonding between the in-plane and out-of-plane directions, in good agreement with experimental data. The growth temperature dependence found in the simulations is strong and similar to that seen in experiments, and an increase in Co edge segregation with increasing temperature is also observed.
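The heart of the kinetic Monte Carlo step is the Arrhenius-like hop rate. A tiny sketch with invented barrier and prefactor values:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

def hop_rate(E_a, T, nu0=1e13):
    """Arrhenius-like surface hopping rate of the kinetic Monte Carlo growth
    model: rate = nu0 * exp(-E_a / (kB * T)). nu0 and the barriers below are
    illustrative, not values from the paper."""
    return nu0 * np.exp(-E_a / (KB * T))

# An adatom at a step edge (higher barrier, lower coordination energy gain)
# hops far less often than one on a terrace, driving the segregation above.
print(hop_rate(0.9, 600), hop_rate(0.5, 600))
```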
González, R C; Alvarez, D; López, A M; Alvarez, J C
2009-12-01
It has been reported that spatio-temporal gait parameters can be estimated using an accelerometer to calculate the vertical displacement of the body's centre of gravity. This method has the potential to produce realistic ambulatory estimations of those parameters during unconstrained walking. In this work, we evaluate the crude estimations of mean step length so obtained, for their possible application in the construction of an ambulatory walking distance measurement device. Two methods were tested with a set of volunteers in 20 m excursions. Experimental results show that estimations of walking distance can be obtained with sufficient accuracy and precision for most practical applications (errors of 3.66 ± 6.24% and 0.96 ± 5.55%), the main difficulty being inter-individual variability (largest deviations of 19.70% and 15.09% for the two estimators). The results also indicate that an inverted pendulum model for the displacement during the single-stance phase, together with a constant displacement per step during double stance, constitutes a valid model for the travelled distance with no need for further adjustments. It allows us to explain the bulk of the erroneous distance estimations in different subjects as caused by fundamental limitations of the simple inverted pendulum approach.
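The inverted-pendulum estimator referred to above has a well-known closed form: for a vertical centre-of-gravity excursion h and leg length l, the single-stance displacement is 2*sqrt(2*l*h - h^2), to which the model adds a constant displacement per step for double stance. A sketch under those assumptions (parameter values illustrative):

```python
import numpy as np

def step_length(h, leg_length, k_double=0.0):
    """Inverted-pendulum estimate of step length from the vertical excursion
    h of the centre of gravity (obtained by double-integrating acceleration):
    2*sqrt(2*l*h - h**2), plus an optional constant displacement per step
    for the double-stance phase, as the abstract's model suggests."""
    return 2.0 * np.sqrt(2.0 * leg_length * h - h ** 2) + k_double

# Example: 4 cm vertical excursion, 0.9 m leg length.
print(step_length(0.04, 0.9))
```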
NASA Astrophysics Data System (ADS)
Riyadi, Eko H.
2014-09-01
An initiating event is defined as any event, either internal or external to the nuclear power plant (NPP), that perturbs the steady-state operation of the plant, if operating, thereby initiating an abnormal event such as a transient or a loss of coolant accident (LOCA) within the NPP. These initiating events trigger sequences of events that challenge plant control and safety systems, whose failure could potentially lead to core damage or a large early release. Selection of initiating events consists of two steps: first, definition of possible events, such as by a comprehensive engineering evaluation and by constructing a top-level logic model; second, grouping of the identified initiating events by the safety function to be performed or by combinations of system responses. The purpose of this paper is to discuss initiating event identification in the event tree development process and to review other probabilistic safety assessments (PSA). The identification of initiating events also involves past operating experience, review of other PSAs, failure mode and effect analysis (FMEA), feedback from system modeling, and the master logic diagram (a special type of fault tree). By studying the traditional US PSA categorization in detail, the important initiating events could be obtained and categorized into LOCA, transients and external events.
NASA Astrophysics Data System (ADS)
Züleyha, Artuç; Ziya, Merdan; Selçuk, Yeşiltaş; Kemal, Öztürk M.; Mesut, Tez
2017-11-01
Computational models of tumors are difficult to construct because of the complexity of tumor biology and the limited capacity of computational tools; nevertheless, such models provide insight into the interactions between a tumor and its microenvironment. Moreover, computational models have the potential to support the development of individualized treatment strategies for cancer. To study a solid brain tumor, glioblastoma multiforme (GBM), we present a two-dimensional Ising model applied on a Creutz cellular automaton (CCA). The aim of this study is to analyze avascular spherical solid tumor growth, treating transitions between non-tumor cells and cancer cells as analogous to phase transitions in a physical system. The Ising model on the CCA algorithm provides a deterministic approach with discrete time steps and local interactions in position space to view tumor growth as a function of time. Our simulation results are given for a fixed tumor radius, and they are compatible with theoretical and clinical data.
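For concreteness, here is a minimal Creutz-style (microcanonical) sweep on a 2D Ising lattice, in which a flip is accepted only when the "demon" variable can supply or absorb the energy change; mapping +1 spins to tumor cells is an illustrative convention, not the paper's full model.

```python
import numpy as np

def creutz_sweep(spins, demon, J=1.0):
    """One sweep of a Creutz (microcanonical) cellular-automaton update on a
    2D Ising lattice with periodic boundaries: deterministic discrete-time
    dynamics, no random numbers. Here spin +1 might denote a tumor cell and
    -1 a normal cell; the mapping is illustrative."""
    n = spins.shape[0]
    for i in range(n):
        for j in range(n):
            nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                  + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
            dE = 2.0 * J * spins[i, j] * nb
            if dE <= demon:        # the demon pays (or absorbs) the energy
                spins[i, j] *= -1
                demon -= dE
    return spins, demon
```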
Caputi, Sergio; Varvara, Giuseppe
2008-04-01
Dimensional accuracy when making impressions is crucial to the quality of fixed prosthodontic treatment, and the impression technique is a critical factor affecting this accuracy. The purpose of this in vitro study was to compare the dimensional accuracy of a monophase, 1- and 2-step putty/light-body, and a novel 2-step injection impression technique. A stainless steel model with 2 abutment preparations was fabricated, and impressions were made 15 times with each technique. All impressions were made with an addition-reaction silicone impression material (Aquasil) and a stock perforated metal tray. The monophase impressions were made with regular-body material. The 1-step putty/light-body impressions were made with simultaneous use of putty and light-body materials. The 2-step putty/light-body impressions were made with 2-mm-thick resin prefabricated copings. The 2-step injection impressions were made with simultaneous use of putty and light-body materials; in this injection technique, after removing the preliminary impression, a hole was made through the polymerized material at each abutment edge, to coincide with holes present in the stock tray. Extra-light-body material was then added to the preliminary impression and further injected through the hole after reinsertion of the preliminary impression on the stainless steel model. The accuracy of the 4 impression techniques was assessed by measuring 3 dimensions (intra- and interabutment) (5-μm accuracy) on stone casts poured from the impressions of the stainless steel model. The data were analyzed by 1-way ANOVA and the Student-Newman-Keuls test (alpha=.05). The stone dies obtained with all the techniques had significantly larger dimensions than those of the stainless steel model (P<.01). The order from highest to lowest deviation from the stainless steel model was: monophase, 1-step putty/light body, 2-step putty/light body, and 2-step injection. Significant differences among all the groups were noted for both the absolute dimensions of the stone dies and their percent deviations from the stainless steel model (P<.01). The 2-step putty/light-body and 2-step injection techniques were the most dimensionally accurate impression methods in terms of the resultant casts.
Terrain and refractivity effects on non-optical paths
NASA Astrophysics Data System (ADS)
Barrios, Amalia E.
1994-07-01
The split-step parabolic equation (SSPE) has been used extensively to model tropospheric propagation over the sea, but recent efforts have extended this method to propagation over arbitrary terrain. At the Naval Command, Control and Ocean Surveillance Center (NCCOSC), Research, Development, Test and Evaluation Division, a split-step Terrain Parabolic Equation Model (TPEM) has been developed that takes into account variable terrain and range-dependent refractivity profiles. While TPEM has previously been shown to compare favorably with measured data and other existing terrain models, two alternative methods for modeling radiowave propagation over terrain, implemented within TPEM, are presented that give a two- to ten-fold decrease in execution time. These two methods are also shown to agree well with measured data.
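The split-step idea alternates a spectral-domain diffraction step with a phase screen for refraction. A narrow-angle, one-range-step sketch (no terrain masking, which is TPEM's extension; all parameters illustrative):

```python
import numpy as np

def split_step_pe(u, n_index, dx, dz, k0):
    """One range step of a split-step parabolic-equation propagator:
    diffraction is handled in the transverse spectral domain, refraction by
    a phase screen built from the refractive-index profile n_index(x).
    Standard narrow-angle form; a sketch, not the TPEM implementation."""
    kx = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)   # transverse wavenumbers
    # Diffraction step in the spectral domain.
    u = np.fft.ifft(np.exp(-1j * kx ** 2 * dz / (2.0 * k0)) * np.fft.fft(u))
    # Refraction step: phase screen from the medium.
    u *= np.exp(1j * k0 * (n_index - 1.0) * dz)
    return u
```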
A method for scenario-based risk assessment for robust aerospace systems
NASA Astrophysics Data System (ADS)
Thomas, Victoria Katherine
In years past, aircraft conceptual design centered around creating a feasible aircraft that could be built and could fly the required missions. More recently, aircraft viability entered into conceptual design, allowing the product's potential to be profitable to be examined early in the design process. While examining an aerospace system's feasibility and viability early in the design process is extremely important, it is also important to examine system risk. In traditional aerospace systems risk analysis, risk is examined from the perspective of performance, schedule, and cost. Recently, safety and reliability analysis have been brought forward in the design process to be examined during late conceptual and early preliminary design. While these analyses work as designed, existing risk analysis methods and techniques are not designed to examine an aerospace system's external operating environment and the risks present there. A new method has been developed here to examine, during the early part of concept design, the risk associated with not meeting assumptions about the system's external operating environment. The risks are examined in five categories: employment, culture, government and politics, economics, and technology. The risks are examined over a long time period, up to the system's entire life cycle. The method consists of eight steps over three focus areas. The first focus area is Problem Setup, during which the problem is defined and understood to the best of the decision maker's ability. There are four steps in this area, in the following order: Establish the Need, Scenario Development, Identify Solution Alternatives, and Uncertainty and Risk Identification, with significant iteration between steps two through four. Focus area two is Modeling and Simulation. In this area the solution alternatives and risks are modeled, and a numerical value for risk is calculated; a risk mitigation model is also created. The four steps involved in completing the modeling and simulation are: Alternative Solution Modeling, Uncertainty Quantification, Risk Assessment, and Risk Mitigation. Focus area three consists of Decision Support. In this area a decision support interface is created that allows for game playing between solution alternatives and risk mitigation, and a multi-attribute decision making process is implemented to aid in decision making. A demonstration problem inspired by Airbus' mid-1980s decision to break into the widebody long-range market was developed to illustrate the use of this method. The results showed that the method is able to capture additional types of risk compared to previous analysis methods, particularly at the early stages of aircraft design. It was also shown that the method can be used to help create a system that is robust to external environmental factors. The addition of an external environment risk analysis in the early stages of conceptual design can add another dimension to the analysis of feasibility and viability. The ability to take risk into account during the early stages of the design process can allow for the elimination of potentially feasible and viable but too-risky alternatives. The addition of a scenario-based analysis instead of a traditional probabilistic analysis enabled uncertainty to be effectively bound and examined over a variety of potential futures instead of only a single future.
There is also potential for a product to be groomed for a specific future that one believes is likely to happen, or for a product to be steered during design as the future unfolds.
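As a concrete, purely illustrative companion to the Decision Support focus area, the sketch below scores hypothetical solution alternatives across scenarios with a weighted sum over the five risk categories named above and picks the alternative with the best worst-case score; all weights and risk values are invented, not taken from the thesis.

```python
import numpy as np

# Hypothetical example: score 3 aircraft concepts across 4 external-environment
# scenarios on the five risk categories named in the text. All numbers are
# illustrative, not from the study.
categories = ["employment", "culture", "government", "economics", "technology"]
weights = np.array([0.15, 0.10, 0.20, 0.35, 0.20])  # decision-maker weights, sum to 1

# risk[alt, scenario, category]: 0 (assumption holds) .. 1 (assumption badly violated)
rng = np.random.default_rng(0)
risk = rng.uniform(0.0, 1.0, size=(3, 4, len(categories)))

# Weighted-sum risk per alternative and scenario, then worst case over scenarios:
# a robust alternative minimizes its worst-scenario score.
scenario_scores = risk @ weights            # shape (3, 4)
worst_case = scenario_scores.max(axis=1)    # robust (minimax) criterion
print("worst-case risk per alternative:", worst_case.round(3))
print("most robust alternative:", worst_case.argmin())
```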
Hierarchical modeling of heat transfer in silicon-based electronic devices
NASA Astrophysics Data System (ADS)
Goicochea Pineda, Javier V.
In this work a methodology for the hierarchical modeling of heat transfer in silicon-based electronic devices is presented. The methodology includes three steps to integrate the different scales involved in the thermal analysis of these devices. The steps correspond to: (i) the estimation of input parameters and thermal properties required to solve the Boltzmann transport equation (BTE) for phonons by means of molecular dynamics (MD) simulations, (ii) the quantum correction of some of the properties estimated with MD to make them suitable for the BTE, and (iii) the numerical solution of the BTE using the lattice Boltzmann method (LBM) under the single-mode relaxation time approximation subject to different initial and boundary conditions, including non-linear dispersion relations and different polarizations in the [100] direction. Each step of the methodology is validated with reported numerical, analytical or experimental data. In the first step of the methodology, properties such as phonon relaxation times, dispersion relations, group and phase velocities and specific heat are obtained with MD at 300 and 1000 K (i.e., molecular temperatures). The estimation of the properties considers the anharmonic nature of the potential energy function, including the thermal expansion of the crystal. Both effects are found to modify the dispersion relations with temperature. The behavior of the phonon relaxation times for each mode (i.e., longitudinal and transverse, acoustic and optical phonons) is identified using power functions. The exponents of the acoustic modes agree with those predicted theoretically by perturbation theory at high temperatures, while those of the optical modes are higher. All properties estimated with MD are validated against values for the thermal conductivity obtained from the Green-Kubo method. It is found that the relative contribution of acoustic modes to the overall thermal conductivity is approximately 90% at both temperatures. In the second step, two new quantum correction alternatives are applied to correct the results obtained with MD. The alternatives consider the quantization of the energy per phonon mode. In addition, the effect of isotope scattering is included in the phonon-phonon relaxation time values previously determined in the first step. It is found that both the quantization of the energy and the inclusion of scattering with isotopes significantly reduce the contribution of high-frequency modes to the overall thermal conductivity. After these two effects are considered, the contribution of optical modes falls to less than 2.4%. In this step, two sets of properties are obtained. The first results from the application of quantum corrections to the abovementioned properties, while the second also includes isotope scattering. These sets of properties are identified in this work as isotope-enriched silicon (isoSi) and natural silicon (natSi) and are used along with other phonon relaxation time models in the last step of our methodology. Before solving the BTE using the LBM, a new dispersive lattice Boltzmann formulation is proposed. The new dispersive formulation is based on constant lattice spacings (CLS) and flux limiters, rather than constant time steps (as previously reported). It is found that the new formulation significantly reduces the computational cost and complexity of the solution of the BTE, without affecting the thermal predictions. Lastly, we solve the BTE under the relaxation time approximation using our thermal properties estimated for isoSi and natSi and two phonon formulations: a gray model and the new dispersive method. For comparison purposes, the BTE is also solved using the phenomenological and theoretical phonon relaxation time models of Holland, and Han and Klemens. Different thermal predictions in steady and transient states are performed to illustrate the application of the methodology in one- and two-dimensional silicon films and in silicon-on-insulator (SOI) transistors. These include the determination of bulk and film thermal conductivities (i.e., out-of-plane and in-plane), and the transient evolution of the wall heat flux and temperature for films of different thicknesses. In addition, the physics of phonons is further analyzed in terms of the influence and behavior of acoustic and optical modes in the thermal predictions and the effect of phonon confinement in the thermal response of SOI-like transistors subject to different self-heating conditions.
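To make the gray-model step concrete, here is a minimal 1D sketch (our illustration, not the author's code) of a lattice Boltzmann solution of the phonon BTE under the single-mode relaxation time approximation, with two counter-propagating energy densities relaxing toward a local equilibrium; all parameters are in lattice units and purely illustrative.

```python
import numpy as np

# Minimal 1D gray-model phonon BTE solved with a D1Q2 lattice Boltzmann
# scheme: e_plus/e_minus are phonon energy densities travelling right/left,
# both relaxing toward the local equilibrium e0 (gray model). Illustrative
# lattice-unit parameters, not the thesis's dispersive formulation.
nx, nsteps = 100, 2000
tau = 5.0                                  # relaxation time, lattice units
e_plus = np.full(nx, 0.5); e_minus = np.full(nx, 0.5)

for _ in range(nsteps):
    e0 = 0.5 * (e_plus + e_minus)          # local equilibrium energy density
    e_plus += -(e_plus - e0) / tau         # collision step
    e_minus += -(e_minus - e0) / tau
    e_plus[1:] = e_plus[:-1]               # streaming step (one site per step)
    e_minus[:-1] = e_minus[1:]
    e_plus[0], e_minus[-1] = 1.0, 0.0      # isothermal hot/cold boundaries

temperature_like = e_plus + e_minus        # total energy density ~ temperature
print(temperature_like[::10].round(3))     # monotone hot-to-cold profile
```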
Study of the highly ordered TiO2 nanotubes physical properties prepared with two-step anodization
NASA Astrophysics Data System (ADS)
Pishkar, Negin; Ghoranneviss, Mahmood; Ghorannevis, Zohreh; Akbari, Hossein
2018-06-01
Highly ordered, hexagonally close-packed titanium dioxide nanotubes (TiO2 NTs) were successfully grown by a two-step anodization process. The TiO2 NTs were synthesized by electrochemical anodization of titanium foils in an ethylene glycol based electrolyte solution containing 0.3 wt% NH4F and 2 vol% deionized (DI) water at constant potential (50 V) for 1 h at room temperature. Physical properties of the TiO2 NTs prepared via one- and two-step anodization were investigated. Atomic force microscopy (AFM) analysis revealed that anodization, followed by peeling off the TiO2 NTs, left a periodic pattern on the Ti surface. To study the nanotube morphology, field emission scanning electron microscopy (FESEM) was used, which revealed that the two-step anodization resulted in highly ordered hexagonal TiO2 NTs. X-ray diffraction analysis determined that the crystal structure of the TiO2 NTs was mainly anatase. Optical studies performed by diffuse reflectance spectroscopy (DRS) and photoluminescence (PL) analysis showed that the band gap of TiO2 NTs prepared via two-step anodization was lower than that of samples prepared by the one-step anodization process.
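A band gap of the kind reported from DRS is commonly extracted with a Kubelka-Munk transform followed by a Tauc-plot extrapolation; the hedged sketch below runs that analysis on synthetic reflectance data, treating anatase TiO2 as an indirect-gap material. The numbers are illustrative, not the paper's.

```python
import numpy as np

# Band-gap estimation from diffuse reflectance: Kubelka-Munk transform,
# then linear extrapolation of the Tauc plot. Reflectance data are faked
# with a step near 3.2 eV; exponent 1/2 assumes an indirect gap.
wavelength_nm = np.linspace(300, 600, 301)
E = 1239.84 / wavelength_nm                      # photon energy, eV
R = 0.05 + 0.9 / (1 + np.exp((E - 3.2) * 10))    # synthetic reflectance

F = (1 - R) ** 2 / (2 * R)                       # Kubelka-Munk function F(R)
tauc = (F * E) ** 0.5                            # (F(R)*hv)^(1/2), indirect gap

# fit the linear rise of the Tauc plot and extrapolate to zero ordinate
mask = (tauc > 0.3 * tauc.max()) & (tauc < 0.8 * tauc.max())
slope, intercept = np.polyfit(E[mask], tauc[mask], 1)
print(f"estimated band gap: {-intercept / slope:.2f} eV")
```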
Connectivity among subpopulations of Louisiana black bears as estimated by a step selection function
Clark, Joseph D.; Laufenberg, Jared S.; Davidson, Maria; Murrow, Jennifer L.
2015-01-01
Habitat fragmentation is a fundamental cause of population decline and increased risk of extinction for many wildlife species; animals with large home ranges and small population sizes are particularly sensitive. The Louisiana black bear (Ursus americanus luteolus) exists only in small, isolated subpopulations as a result of land clearing for agriculture, but the relative potential for inter-subpopulation movement by Louisiana black bears has not been quantified, nor have characteristics of effective travel routes between habitat fragments been identified. We placed and monitored global positioning system (GPS) radio collars on 8 female and 23 male bears located in 4 subpopulations in Louisiana, which included a reintroduced subpopulation located between 2 of the remnant subpopulations. We compared characteristics of sequential radiolocations of bears (i.e., steps) with steps that were possible but not chosen by the bears to develop step selection function models based on conditional logistic regression. The probability of a step being selected by a bear increased as the distance to natural land cover and agriculture at the end of the step decreased and as distance from roads at the end of a step increased. To characterize connectivity among subpopulations, we used the step selection models to create 4,000 hypothetical correlated random walks for each subpopulation representing potential dispersal events to estimate the proportion that intersected adjacent subpopulations (hereafter referred to as successful dispersals). Based on the models, movement paths for males intersected all adjacent subpopulations but paths for females intersected only the most proximate subpopulations. Cross-validation and genetic and independent observation data supported our findings. Our models also revealed that successful dispersals were facilitated by a reintroduced population located between 2 distant subpopulations. Successful dispersals for males were dependent on natural land cover in private ownership. The addition of hypothetical 1,000-m- or 3,000-m-wide corridors between the 4 study areas had minimal effects on connectivity among subpopulations. For females, our model suggested that habitat between subpopulations would probably have to be permanently occupied for demographic rescue to occur. Thus, the establishment of stepping-stone populations, such as the reintroduced population that we studied, may be a more effective conservation measure than long corridors without a population presence in between.
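The core of a step selection function fit is conditional logistic regression over strata that pair each used step with a set of available steps. The sketch below shows that machinery on synthetic data with three covariates mimicking those in the study (signs chosen to match the reported effects); it is an illustration, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

# Conditional logistic regression for a step selection function: each stratum
# holds one used step (sampled here from the model itself) plus K available
# steps. Covariates stand in for distance to natural cover, to agriculture,
# and to roads at the step endpoint; all data are synthetic.
rng = np.random.default_rng(1)
n_strata, K, p = 200, 10, 3
X = rng.normal(size=(n_strata, K + 1, p))        # candidate steps per stratum
beta_true = np.array([-1.0, -0.5, 0.8])          # signs as reported in the text
util = X @ beta_true
used = np.array([rng.choice(K + 1, p=np.exp(u) / np.exp(u).sum()) for u in util])

def neg_loglik(beta):
    u = X @ beta                                  # shape (n_strata, K+1)
    return -(u[np.arange(n_strata), used] - np.log(np.exp(u).sum(axis=1))).sum()

fit = minimize(neg_loglik, np.zeros(p))
print("estimated coefficients:", fit.x.round(2))  # should approach beta_true
```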
Sansinena, Marina; Santos, Maria Victoria; Chirife, Jorge; Zaritzky, Noemi
2018-05-01
Heat transfer during cooling and warming is difficult to measure in cryo-devices; mathematical modelling is an alternative method that can describe these processes. In this study, we tested the validity of one such model by assessing in-vitro development of vitrified and warmed bovine oocytes after parthenogenetic activation and culture. The viability of oocytes vitrified in four different cryo-devices was assessed. Consistent with modelling predictions, oocytes vitrified using cryo-devices with the highest modelled cooling rates had significantly (P < 0.05) better cleavage and blastocyst formation rates. We then evaluated a two-step sample removal process, in which oocytes were held in nitrogen vapour for 15 s to simulate sample identification during clinical application, before being removed completely and warmed. Oocytes exposed to this procedure showed reduced developmental potential, according to the model, owing to thermodynamic instability and devitrification at relatively low temperatures. These findings suggest that cryo-device selection and handling, including method of removal from nitrogen storage, are critical to survival of vitrified oocytes. Limitations of the study include use of parthenogenetically activated rather than fertilized ova and lack of physical measurement of recrystallization. We suggest mathematical modelling could be used to predict the effect of critical steps in cryopreservation. Copyright © 2018 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas
2009-01-01
Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximate rate laws in step two, as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process with its numerous choices and the mutual influence between them makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: For first attempts the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
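As a toy version of the workflow described above, the sketch below builds a two-reaction Michaelis-Menten pathway model and calibrates its parameters to noisy synthetic measurements by least squares; the network, rate laws and numbers are stand-ins, far simpler than the valine/leucine system.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Steps (2) and (3) in miniature: assign Michaelis-Menten rate laws to a toy
# pathway A -> B -> C, then calibrate the four parameters to noisy data.
def rhs(t, y, vmax1, km1, vmax2, km2):
    a, b, c = y
    v1 = vmax1 * a / (km1 + a)        # Michaelis-Menten rate law, step 1
    v2 = vmax2 * b / (km2 + b)        # step 2
    return [-v1, v1 - v2, v2]

t_obs = np.linspace(0, 10, 20)
true = (1.0, 0.5, 0.6, 0.3)
sol = solve_ivp(rhs, (0, 10), [2.0, 0.0, 0.0], t_eval=t_obs, args=true)
data = sol.y + np.random.default_rng(2).normal(0, 0.02, sol.y.shape)

def residuals(theta):
    s = solve_ivp(rhs, (0, 10), [2.0, 0.0, 0.0], t_eval=t_obs, args=tuple(theta))
    return (s.y - data).ravel()

fit = least_squares(residuals, x0=[0.5, 0.5, 0.5, 0.5], bounds=(1e-6, 10))
print("estimated parameters:", fit.x.round(2))   # should approach `true`
```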
Sharma, Nripen S.; Jindal, Rohit; Mitra, Bhaskar; Lee, Serom; Li, Lulu; Maguire, Tim J.; Schloss, Rene; Yarmush, Martin L.
2014-01-01
Skin sensitization remains a major environmental and occupational health hazard. Animal models have been used as the gold standard method of choice for estimating chemical sensitization potential. However, a growing international drive and consensus for minimizing animal usage have prompted the development of in vitro methods to assess chemical sensitization. In this paper, we examine existing approaches including in silico models and cell- and tissue-based assays for distinguishing between sensitizers and irritants. The in silico approaches discussed include Quantitative Structure Activity Relationships (QSAR) and QSAR-based expert models that correlate chemical molecular structure with biological activity, and mechanism-based read-across models that incorporate compound electrophilicity. The cell- and tissue-based assays rely on an assortment of mono- and co-culture cell systems in conjunction with 3D skin models. Given the complexity of allergen-induced immune responses, and the limited ability of existing systems to capture the entire gamut of cellular and molecular events associated with these responses, we also introduce a microfabricated platform that can capture all the key steps involved in allergic contact sensitivity. Finally, we describe the development of an integrated testing strategy comprised of two- or three-tier systems for evaluating the sensitization potential of chemicals. PMID:24741377
Hypoglycemia early alarm systems based on recursive autoregressive partial least squares models.
Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick
2013-01-01
Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems of potential hypoglycemia. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm system based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia so that preventive action can be taken far in advance. © 2012 Diabetes Technology Society.
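The sketch below illustrates the prediction-plus-alarm idea on synthetic CGM data: an autoregressive model fitted over a moving window is iterated six steps (30 min) ahead and compared against a hypoglycemia threshold. Plain least-squares AR is used for brevity in place of the recursive PLS of the paper, and the threshold and units (mg/dL) are assumptions.

```python
import numpy as np

# Simplified stand-in for the RARPLS predictor: fit an AR model to recent
# CGM samples (5-min sampling assumed), iterate it six steps ahead (30 min),
# and raise an alarm if the prediction crosses a hypoglycemia threshold.
rng = np.random.default_rng(3)
glucose = 120 - 0.15 * np.arange(300) + rng.normal(0, 2, 300)  # mg/dL, declining

order, horizon, threshold = 4, 6, 70.0
window = glucose[-60:]                                  # moving data window
# lagged regression matrix: column k holds lag k+1 values, plus an intercept
Y = window[order:]
X = np.column_stack([window[order - k - 1:-k - 1] for k in range(order)])
coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(Y))]), Y, rcond=None)

hist = list(window[-order:])
for _ in range(horizon):                                # iterate 6 steps ahead
    hist.append(coef[:order] @ hist[::-1][:order] + coef[-1])
print(f"30-min-ahead prediction: {hist[-1]:.1f} mg/dL, "
      f"alarm: {hist[-1] < threshold}")
```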
Rotolo, Federico; Paoletti, Xavier; Burzykowski, Tomasz; Buyse, Marc; Michiels, Stefan
2017-01-01
Surrogate endpoints are often used in clinical trials instead of well-established hard endpoints for practical convenience. The meta-analytic approach relies on two measures of surrogacy: one at the individual level and one at the trial level. In the survival data setting, a two-step model based on copulas is commonly used. We present a new approach which employs a bivariate survival model with an individual random effect shared between the two endpoints and correlated treatment-by-trial interactions. We fit this model using auxiliary mixed Poisson models. We study via simulations the operating characteristics of this mixed Poisson approach as compared to the two-step copula approach. We illustrate the application of the methods on two individual patient data meta-analyses in gastric cancer, in the advanced setting (4069 patients from 20 randomized trials) and in the adjuvant setting (3288 patients from 14 randomized trials).
ERIC Educational Resources Information Center
Williams, Miriam F.
2012-01-01
The author uses game theoretical models to identify technical communication breakdowns encountered during the notoriously confusing Texas Two-Step voting and caucusing process. Specifically, the author uses narrative theory and game theory to highlight areas where caucus participants needed instructions to better understand the rules of the game…
Cognitive and emotional factors associated with elective breast augmentation among young women.
Moser, Stephanie E; Aiken, Leona S
2011-01-01
The purpose of this research was to propose and evaluate a psychosocial model of young women's intentions to obtain breast implants and the preparatory steps taken towards having breast implant surgery. The model integrated anticipated regret, descriptive norms and image norms from the media into the theory of planned behaviour (TPB). Focus groups (n = 58) informed development of measures of outcome expectancies, preparatory steps and normative influence. The model was tested and replicated among two samples of young women who had ever considered getting breast implants (n = 200, n = 152). Intentions and preparatory steps served as outcomes. Model constructs and outcomes were initially assessed; outcomes were re-assessed 11 weeks later. Evaluative attitudes and anticipated regret predicted intentions; in turn, intentions, along with descriptive norms, predicted subsequent preparatory steps. Perceived risk (susceptibility, severity) of negative medical consequences of breast implants predicted anticipated regret, which predicted evaluative attitudes. Intentions and preparatory steps exhibited interplay over time. This research provides the first comprehensive model predicting intentions and preparatory steps towards breast augmentation surgery. It supports the addition of anticipated regret to the TPB and suggests mutual influence between intentions and preparatory steps towards a final behavioural outcome.
Simulation of drift of pesticides: development and validation of a model.
Brusselman, E; Spanoghe, P; Van der Meeren, P; Gabriels, D; Steurbaut, W
2003-01-01
Over the last decade drift of pesticides has been recognized as a major problem for the environment. High fractions of pesticides can be transported through the air and deposited in neighbouring ecosystems during and after application. A new two-step computer drift model was developed: FYDRIMO, or F(ph)Ysical DRift MOdel. In the first step the droplet size spectrum of a nozzle is analysed, so that the volume percentage of droplets of a given size is known. In the second step the model predicts the deposition of each droplet of a given size. This second part of the model runs in MATLAB and is based on a combination of two physical factors: gravity and friction forces. At this stage of development, corrections are included for evaporation and for wind force following a measured wind profile. For validation, wind tunnel experiments were performed. Salt solutions were sprayed at two wind velocities and variable distances above the floor. Small gutters in the floor, filled with filter paper, were used to collect the sprayed droplets. After analysing the wind tunnel results and comparing them with the model predictions, FYDRIMO appears to have good predictive capacity.
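A minimal sketch of the second, physics-based step, under stated assumptions (Stokes drag, a logarithmic wind profile, no evaporation, invented droplet and release parameters), is shown below; the actual model corrects for evaporation and uses measured wind profiles.

```python
import numpy as np

# One droplet under gravity and Stokes drag, carried by a log wind profile,
# integrated with explicit Euler until it reaches the ground. All parameters
# are invented; evaporation is neglected here.
rho_w, mu, g = 1000.0, 1.8e-5, 9.81                 # SI units
d = 150e-6                                          # droplet diameter, m
m = rho_w * np.pi * d**3 / 6                        # droplet mass, kg

def wind(z, u_star=0.4, z0=0.01):                   # logarithmic wind profile
    return (u_star / 0.41) * np.log(z / z0) if z > z0 else 0.0

dt, x, z, vx, vz = 1e-3, 0.0, 0.5, 0.0, 0.0         # released 0.5 m above ground
c = 3 * np.pi * mu * d                              # Stokes drag coefficient
while z > 0:
    ax = c * (wind(z) - vx) / m                     # drag toward wind speed
    az = -g - c * vz / m                            # gravity plus drag
    vx += ax * dt; vz += az * dt
    x += vx * dt; z += vz * dt
print(f"deposition distance: {x:.2f} m downwind")
```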
Design of a Two-Step Calibration Method of Kinematic Parameters for Serial Robots
NASA Astrophysics Data System (ADS)
WANG, Wei; WANG, Lei; YUN, Chao
2017-03-01
Serial robots are used to handle workpieces with large dimensions, and calibrating kinematic parameters is one of the most efficient ways to upgrade their accuracy. Many models have been set up to investigate how many kinematic parameters can be identified under the minimality principle, but the base frame and the kinematic parameters are usually calibrated indistinctly, in a single step. A two-step method of calibrating kinematic parameters is proposed to improve the accuracy of the robot's base frame and kinematic parameters. The forward kinematics, described with respect to the measuring coordinate frame, are established based on the product-of-exponentials (POE) formula. In the first step the robot's base coordinate frame is calibrated using the unit quaternion form. The errors of both the robot's reference configuration and the base coordinate frame's pose are equivalently transformed into zero-position errors of the robot's joints. A simplified model of the robot's positioning error is established in second-order explicit expressions. The identification model is then solved by the least squares method, requiring only measured position coordinates. The complete subtask of calibrating the robot's 39 kinematic parameters is finished in the second step. A group of calibration experiments shows that the proposed two-step calibration method improves the average absolute positioning accuracy of the industrial robot to 0.23 mm. This paper shows that the robot's base frame should be calibrated before its kinematic parameters in order to upgrade its absolute positioning accuracy.
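The identification idea of the second step can be sketched as follows: linearize the forward kinematics in the unknown parameter errors and solve for them by least squares from measured positions only. A two-joint planar arm with zero-position offsets stands in for the 39-parameter industrial case; everything here is illustrative.

```python
import numpy as np

# Least-squares identification of joint zero-position errors from measured
# end-effector positions, via a numerically linearized error model.
L1, L2 = 0.8, 0.6
def fk(q, dq):                                  # forward kinematics with offsets
    a, b = q[0] + dq[0], q[0] + q[1] + dq[0] + dq[1]
    return np.array([L1 * np.cos(a) + L2 * np.cos(b),
                     L1 * np.sin(a) + L2 * np.sin(b)])

rng = np.random.default_rng(4)
true_dq = np.array([0.01, -0.02])               # rad, unknown offsets to identify
Q = rng.uniform(-np.pi, np.pi, (30, 2))         # 30 calibration poses
meas = np.array([fk(q, true_dq) for q in Q])    # "measured" positions

# stack the Jacobian of position w.r.t. dq (finite differences at dq = 0)
eps, J, r = 1e-6, [], []
for q, p in zip(Q, meas):
    p0 = fk(q, np.zeros(2))
    J.append(np.column_stack([(fk(q, eps * e) - p0) / eps for e in np.eye(2)]))
    r.append(p - p0)
dq_hat, *_ = np.linalg.lstsq(np.vstack(J), np.concatenate(r), rcond=None)
print("identified joint offsets:", dq_hat.round(4))   # should approach true_dq
```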
An integrate-and-fire model for synchronized bursting in a network of cultured cortical neurons.
French, D A; Gruenstein, E I
2006-12-01
It has been suggested that spontaneous synchronous neuronal activity is an essential step in the formation of functional networks in the central nervous system. The key features of this type of activity consist of bursts of action potentials with associated spikes of elevated cytoplasmic calcium. These features are also observed in networks of rat cortical neurons that have been formed in culture. Experimental studies of these cultured networks have led to several hypotheses for the mechanisms underlying the observed synchronized oscillations. In this paper, bursting integrate-and-fire type mathematical models for regular spiking (RS) and intrinsic bursting (IB) neurons are introduced and incorporated through a small-world connection scheme into a two-dimensional excitatory network similar to those in the cultured network. This computer model exhibits spontaneous synchronous activity through mechanisms similar to those hypothesized for the cultured experimental networks. Traces of the membrane potential and cytoplasmic calcium from the model closely match those obtained from experiments. We also consider the impact on network behavior of the IB neurons, the geometry and the small world connection scheme.
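A stripped-down flavor of such a model is sketched below: leaky integrate-and-fire units with all-to-all excitatory coupling and noisy drive, in which spikes from one time step kick the whole population at the next, producing synchronized firing. Parameters are invented, and the bursting/calcium machinery and small-world wiring of the actual model are omitted.

```python
import numpy as np

# Toy leaky integrate-and-fire network: noisy background drive plus
# all-to-all excitatory coupling pulls the population into synchrony.
rng = np.random.default_rng(5)
N, T, dt = 50, 2000, 0.5                  # cells, time steps, ms per step
tau, v_th, v_reset, w = 20.0, 1.0, 0.0, 0.01
v = rng.uniform(0, 1, N)
n_fired_prev, spike_counts = 0, np.zeros(T, dtype=int)

for t in range(T):
    drive = 0.055 + 0.02 * rng.normal(size=N)        # noisy background input
    v += dt * (-v / tau + drive) + w * n_fired_prev  # leak + input + coupling
    fired = v >= v_th
    v[fired] = v_reset                               # spike and reset
    n_fired_prev = int(fired.sum())
    spike_counts[t] = n_fired_prev

# synchronized events show up as bins where many cells fire together
print("max simultaneous spikes:", spike_counts.max(), "of", N)
```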
Efficient numerical modeling of the cornea, and applications
NASA Astrophysics Data System (ADS)
Gonzalez, L.; Navarro, Rafael M.; Hdez-Matamoros, J. L.
2004-10-01
Corneal topography has been shown to be an essential tool in the ophthalmology clinic, both in diagnosis and in custom treatments (refractive surgery, keratoplasty), and it also has strong potential in optometry. The post-processing and analysis of corneal elevation, or local curvature data, is a necessary step to refine the data and also to extract relevant information for the clinician. In this context a parametric cornea model is proposed consisting of a surface described mathematically by two terms: a general ellipsoid corresponding to a regular base surface, expressed by a general quadric term located at an arbitrary position and free orientation in 3D space, and a second term, described by a Zernike polynomial expansion, which accounts for irregularities and departures from the basic geometry. The model has been validated, fitting experimental data better than previous models. Among other potential applications, here we present the determination of the optical axis of the cornea by transforming the general quadric to its canonical form. This has permitted us to perform 3D registration of corneal topographical maps to improve the signal-to-noise ratio. Other basic and clinical applications are also explored.
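The two-term decomposition can be sketched as a linear least-squares fit: a simple base term (a paraboloid standing in for the general quadric) plus low-order Zernike terms for the irregular component. The sketch below uses synthetic elevation data and invented coefficients.

```python
import numpy as np

# Base surface + Zernike irregularity fitted by linear least squares on the
# unit disk. The paraboloid r^2 stands in for the general quadric term.
rng = np.random.default_rng(6)
n = 500
r = np.sqrt(rng.uniform(0, 1, n)); th = rng.uniform(0, 2 * np.pi, n)

# design matrix: [piston, base curvature r^2, Z(2,2) astigmatism, Z(3,1) coma]
A = np.column_stack([np.ones(n), r**2,
                     r**2 * np.cos(2 * th),
                     (3 * r**3 - 2 * r) * np.cos(th)])
true_coef = np.array([0.0, 0.1, 0.002, -0.001])
elev = A @ true_coef + rng.normal(0, 1e-4, n)     # synthetic elevation map

coef, *_ = np.linalg.lstsq(A, elev, rcond=None)
print("recovered coefficients:", coef.round(4))   # should approach true_coef
```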
Implementation of Energy Code Controls Requirements in New Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Hatten, Mike
Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement and verification is beyond the expertise of most building code officials, yet the assumption in studies that measure the savings from energy codes is that they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high-impact controls requirements included in commercial energy codes are properly designed, commissioned and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy-code-required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code-required control measures are being designed, commissioned and correctly implemented and functioning in new buildings. The third step includes compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.
Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.
2013-01-01
Background and Aims Physical dormancy (PY)-break in some annual plant species is a two-step process controlled by two different temperature and/or moisture regimes. The thermal time model has been used to quantify PY-break in several species of Fabaceae, but not to describe stepwise PY-break. The primary aims of this study were to quantify the thermal requirement for sensitivity induction by developing a thermal time model and to propose a mechanism for stepwise PY-breaking in the winter annual Geranium carolinianum. Methods Seeds of G. carolinianum were stored under dry conditions at different constant and alternating temperatures to induce sensitivity (step I). Sensitivity induction was analysed based on the thermal time approach using the Gompertz function. The effect of temperature on step II was studied by incubating sensitive seeds at low temperatures. Scanning electron microscopy, penetrometer techniques, and different humidity levels and temperatures were used to explain the mechanism of stepwise PY-break. Key Results The base temperature (Tb) for sensitivity induction was 17.2 °C and constant for all seed fractions of the population. Thermal time for sensitivity induction during step I in the PY-breaking process agreed with the three-parameter Gompertz model. Step II (PY-break) did not agree with the thermal time concept. Q10 values for the rate of sensitivity induction and PY-break were between 2.0 and 3.5 and between 0.02 and 0.1, respectively. The force required to separate the water gap palisade layer from the sub-palisade layer was significantly reduced after sensitivity induction. Conclusions Step I and step II in PY-breaking of G. carolinianum are controlled by chemical and physical processes, respectively. This study indicates the feasibility of applying the developed thermal time model to predict or manipulate sensitivity induction in seeds with two-step PY-breaking processes. The model is the first and most detailed one yet developed for sensitivity induction in PY-break. PMID:23456728
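The step I machinery can be sketched as follows: accumulate thermal time θ = (T − Tb)·t above the base temperature and fit the sensitive fraction with a three-parameter Gompertz curve. The data below are synthetic; only Tb = 17.2 °C is taken from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Thermal-time Gompertz model for sensitivity induction (step I), fitted to
# synthetic storage data at one constant temperature.
Tb = 17.2                                            # base temperature, deg C
def gompertz(theta, a, b, c):
    return a * np.exp(-np.exp(b - c * theta))        # three-parameter Gompertz

T_storage, days = 30.0, np.arange(0, 60, 5.0)
theta = (T_storage - Tb) * days                      # degree-days above Tb
frac = gompertz(theta, 0.95, 2.0, 0.01) \
       + np.random.default_rng(7).normal(0, 0.02, days.size)

popt, _ = curve_fit(gompertz, theta, frac, p0=[1.0, 1.0, 0.005])
print("fitted Gompertz parameters (a, b, c):", popt.round(3))
```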
Feature saltation and the evolution of mimicry.
Gamberale-Stille, Gabriella; Balogh, Alexandra C V; Tullberg, Birgitta S; Leimar, Olof
2012-03-01
In Batesian mimicry, a harmless prey species imitates the warning coloration of an unpalatable model species. A traditional suggestion is that mimicry evolves in a two-step process, in which a large mutation first achieves approximate similarity to the model, after which smaller changes improve the likeness. However, it is not known which aspects of predator psychology cause the initial mutant to be perceived by predators as being similar to the model, leaving open the question of how the crucial first step of mimicry evolution occurs. Using theoretical evolutionary simulations and reconstruction of examples of mimicry evolution, we show that the evolution of Batesian mimicry can be initiated by a mutation that causes prey to acquire a trait that is used by predators as a feature to categorize potential prey as unsuitable. The theory that species gain entry to mimicry through feature saltation allows us to formulate scenarios of the sequence of events during mimicry evolution and to reconstruct an initial mimetic appearance for important examples of Batesian mimicry. Because feature-based categorization by predators entails a qualitative distinction between nonmimics and passable mimics, the theory can explain the occurrence of imperfect mimicry. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
A Two-Step Model for Assessing Relative Interest in E-Books Compared to Print
ERIC Educational Resources Information Center
Knowlton, Steven A.
2016-01-01
Librarians often wish to know whether readers in a particular discipline favor e-books or print books. Because print circulation and e-book usage statistics are not directly comparable, it can be hard to determine the relative interest of readers in the two types of books. This study demonstrates a two-step method by which librarians can assess…
Elmitwalli, T A; Sayed, S; Groendijk, L; van Lier, J; Zeeman, G; Lettinga, G
2003-01-01
The decentralised treatment of concentrated sewage (about 3,600 mg COD/l) at low temperature was investigated in a two-step anaerobic system: two anaerobic hybrid (AH) septic tanks (each 0.575 m3). The two reactors were placed in a temperature-controlled room and the HRT was 2.5 days for each reactor. The system was fed with concentrated domestic sewage, mainly black water from about 40 toilets flushed with only 4 litres of water, and a limited amount of grey water. The system showed high removal efficiency for the different COD fractions. Mean removal efficiencies in the two-step AH septic tank at 5 days HRT and 13 degrees C were 94, 98, 74 and 78% for total COD, suspended COD, colloidal COD and dissolved COD, respectively. The results of short-run experiments indicated that the presence of reticulated polyurethane foam (RPF) media in the AH septic tank improved the removal of suspended COD by 22%. The first AH septic tank was full of sludge after 4 months of operation due to the high removal of particulate COD and the limited hydrolysis under low-temperature conditions. Therefore, a simple mathematical model was developed based on ADM1 (the IWA Anaerobic Digestion Model No. 1, 2002). Based on the experimental results and the mathematical model, only a one-step AH septic tank is required. An HRT of 5.5-7.5 days is needed for that one-step AH septic tank to treat concentrated sewage at a low temperature of 13 degrees C. Such a system can provide a total COD removal as high as 87% and will be full of sludge after a period of more than a year.
Interactive Inverse Groundwater Modeling - Addressing User Fatigue
NASA Astrophysics Data System (ADS)
Singh, A.; Minsker, B. S.
2006-12-01
This paper builds on ongoing research on developing an interactive and multi-objective framework to solve the groundwater inverse problem. In this work we solve the classic groundwater inverse problem of estimating a spatially continuous conductivity field, given field measurements of hydraulic heads. The proposed framework is based on an interactive multi-objective genetic algorithm (IMOGA) that not only considers quantitative measures such as calibration error and degree of regularization, but also takes into account expert knowledge about the structure of the underlying conductivity field, expressed as subjective rankings of potential conductivity fields by the expert. The IMOGA converges to the optimal Pareto front representing the best trade-off among the qualitative as well as quantitative objectives. However, since the IMOGA is a population-based iterative search, it requires the user to evaluate hundreds of solutions. This leads to the problem of 'user fatigue'. We propose a two-step methodology to combat user fatigue in such interactive systems. The first step is choosing only a few highly representative solutions to be shown to the expert for ranking. Spatial clustering is used to group the search space based on the similarity of the conductivity fields. Sampling is then carried out from different clusters to improve the diversity of solutions shown to the user. Once the expert has ranked representative solutions from each cluster, a machine learning model is used to 'learn' user preference and extrapolate it to the solutions not ranked by the expert. We investigate different machine learning models, such as decision trees, Bayesian learning models, and instance-based weighting, to model user preference. In addition, we investigate ways to improve the performance of these models by providing information about the spatial structure of the conductivity fields (which is what the expert bases his or her ranking on). Results are shown for each of these machine learning models and the advantages and disadvantages of each approach are discussed. These results indicate that the proposed two-step methodology leads to a significant reduction in user fatigue without deteriorating the solution quality of the IMOGA.
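A rough sketch of the two-step fatigue-reduction idea, with invented stand-ins for each piece (k-means for the spatial clustering, ridge regression for the preference model, random numbers for the expert's scores), is shown below.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Step 1: cluster candidate conductivity fields and pick one representative
# per cluster for the expert to rank. Step 2: learn a preference model from
# those rankings and score the unranked fields. Everything here is synthetic.
rng = np.random.default_rng(8)
fields = rng.normal(size=(200, 50))            # 200 candidate fields, 50 features

centroids, labels = kmeans2(fields, k=8, minit="++", seed=8)
reps = [np.where(labels == c)[0][0] for c in np.unique(labels)]
expert_rank = rng.uniform(0, 1, len(reps))     # stand-in expert scores

# ridge regression from field features to expert score
X, y, lam = fields[reps], expert_rank, 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
predicted = fields @ w                         # preference scores for all 200
print("top-5 predicted fields:", np.argsort(-predicted)[:5])
```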
Electrohydraulic linear actuator with two stepping motors controlled by overshoot-free algorithm
NASA Astrophysics Data System (ADS)
Milecki, Andrzej; Ortmann, Jarosław
2017-11-01
The paper describes electrohydraulic spool valves with stepping motors used as electromechanical transducers. A new concept of a proportional valve in which two stepping motors work differentially is introduced. Such a valve changes the fluid flow proportionally to the sum of, or difference between, the motors' step counts. The valve design and its principle of operation are described. Theoretical equations and simulation models are proposed for all elements of the drive, i.e., the stepping motor units, the hydraulic valve and the cylinder. The main features of the valve and drive operation are described; some specific problem areas covering the nature of stepping motors and their differential work in the valve are also considered. A non-linear model of the whole servo drive is proposed and used for simulation investigations. Initial simulations of the drive with the new valve showed a significant overshoot in the drive's step response, which is not acceptable in a positioning process. Therefore additional effort is spent on reducing the overshoot and, in consequence, the settling time. A special predictive algorithm is proposed to this end. The proposed control method is then tested and further improved in simulations. Finally, the design is implemented in hardware and the whole servo drive system is tested. The investigation results presented in this paper show an overshoot-free positioning process that enables high positioning accuracy.
Updating finite element dynamic models using an element-by-element sensitivity methodology
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Hemez, Francois M.
1993-01-01
A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and the sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' in on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.
Computer software tool REALM for sustainable water allocation and management.
Perera, B J C; James, B; Kularathna, M D U
2005-12-01
REALM (REsource ALlocation Model) is a generalised computer simulation package that models harvesting and bulk distribution of water resources within a water supply system. It is a modeling tool, which can be applied to develop specific water allocation models. Like other water resource simulation software tools, REALM uses mass-balance accounting at nodes, while the movement of water within carriers is subject to capacity constraints. It uses a fast network linear programming algorithm to optimise the water allocation within the network during each simulation time step, in accordance with user-defined operating rules. This paper describes the main features of REALM and provides potential users with an appreciation of its capabilities. In particular, it describes two case studies covering major urban and rural water supply systems. These case studies illustrate REALM's capabilities in the use of stochastically generated data in water supply planning and management, modelling of environmental flows, and assessing security of supply issues.
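The allocation step of such a tool can be sketched as a small network linear program: carriers have capacity bounds, node mass balances are equality constraints, and shortfalls are penalised by priority. The example below is invented and much simpler than a REALM model.

```python
from scipy.optimize import linprog

# One reservoir supplies two demand nodes through capacity-limited carriers;
# unmet demand is penalised so the solver fills high-priority demands first.
# variables: x = [flow_to_A, flow_to_B, shortfall_A, shortfall_B]
c = [0.0, 0.0, 10.0, 5.0]                # penalties: demand A higher priority
A_eq = [[1, 0, 1, 0],                    # flow_A + shortfall_A = demand_A
        [0, 1, 0, 1]]                    # flow_B + shortfall_B = demand_B
b_eq = [30.0, 40.0]                      # demands (volume per time step)
A_ub = [[1, 1, 0, 0]]                    # total release limited by storage
b_ub = [50.0]
bounds = [(0, 25), (0, 45), (0, None), (0, None)]   # carrier capacities

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("flows:", res.x[:2].round(1), "shortfalls:", res.x[2:].round(1))
```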
Stochastic road excitation and control feasibility in a 2D linear tyre model
NASA Astrophysics Data System (ADS)
Rustighi, E.; Elliott, S. J.
2007-03-01
For vehicles under normal driving conditions and speeds above 30-40 km/h, the dominant internal and external noise source is the sound generated by the interaction between the tyre and the road. This paper presents a simple model to predict tyre behaviour in the frequency range up to 400 Hz, where the dominant vibration is two-dimensional. The tyre is modelled as an elemental system, which permits the analysis of the low-frequency tyre response when excited by distributed stochastic displacements in the contact patch. A linear model has been used to calculate the contact forces from the road roughness and thus calculate the average spectral properties of the resulting radial velocity of the tyre in one step from the spectral properties of the road roughness. Such a model has also been used to provide an estimate of the potential effect of various active control strategies for reducing the tyre vibrations.
NASA Astrophysics Data System (ADS)
Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole
2018-04-01
We simulate the dynamics of a single-electron source, modeled as a quantum dot with on-site Coulomb interaction and tunnel coupling to an adjacent lead in time-dependent density-functional theory. Based on this system, we develop a time-nonlocal exchange-correlation potential by exploiting analogies with quantum-transport theory. The time nonlocality manifests itself in a dynamical potential step. We explicitly link the time evolution of the dynamical step to physical relaxation timescales of the electron dynamics. Finally, we discuss prospects for simulations of larger mesoscopic systems.
Comparison of two integration methods for dynamic causal modeling of electrophysiological data.
Lemaréchal, Jean-Didier; George, Nathalie; David, Olivier
2018-06-01
Dynamic causal modeling (DCM) is a methodological approach to study effective connectivity among brain regions. Based on a set of observations and a biophysical model of brain interactions, DCM uses a Bayesian framework to estimate the posterior distribution of the free parameters of the model (e.g. modulation of connectivity) and infer architectural properties of the most plausible model (i.e. model selection). When modeling electrophysiological event-related responses, the estimation of the model relies on the integration of the system of delay differential equations (DDEs) that describe the dynamics of the system. In this technical note, we compared two numerical schemes for the integration of DDEs. The first, and standard, scheme approximates the DDEs (more precisely, the state of the system, with respect to conduction delays among brain regions) using ordinary differential equations (ODEs) and solves it with a fixed step size. The second scheme uses a dedicated DDEs solver with adaptive step sizes to control error, making it theoretically more accurate. To highlight the effects of the approximation used by the first integration scheme in regard to parameter estimation and Bayesian model selection, we performed simulations of local field potentials using first, a simple model comprising 2 regions and second, a more complex model comprising 6 regions. In these simulations, the second integration scheme served as the standard to which the first one was compared. Then, the performances of the two integration schemes were directly compared by fitting a public mismatch negativity EEG dataset with different models. The simulations revealed that the use of the standard DCM integration scheme was acceptable for Bayesian model selection but underestimated the connectivity parameters and did not allow an accurate estimation of conduction delays. Fitting to empirical data showed that the models systematically obtained an increased accuracy when using the second integration scheme. We conclude that inference on connectivity strength and delay based on DCM for EEG/MEG requires an accurate integration scheme. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
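The integration issue can be illustrated on a scalar delay differential equation: with a fixed-step Euler scheme (a crude stand-in for the fixed-step approximation discussed above), the computed solution shifts visibly as the step size is refined, which is exactly the error an adaptive, delay-aware solver controls. The equation and parameters below are illustrative.

```python
import numpy as np

# Fixed-step Euler integration of the scalar DDE dx/dt = -k * x(t - tau),
# with constant history x(t <= 0) = 1. Halving the step changes the endpoint
# noticeably, showing the sensitivity of delayed dynamics to the scheme.
def integrate(dt, t_end=10.0, k=2.0, tau=1.0):
    n, lag = int(t_end / dt), int(round(tau / dt))
    x = np.ones(n + 1)                        # solution samples
    for i in range(n):
        delayed = x[i - lag] if i >= lag else 1.0   # delayed state x(t - tau)
        x[i + 1] = x[i] - dt * k * delayed
    return x[-1]

for dt in (0.1, 0.05, 0.01):
    print(f"dt={dt:5.2f} -> x(10) = {integrate(dt):+.4f}")
```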
Identification of nonlinear normal modes of engineering structures under broadband forcing
NASA Astrophysics Data System (ADS)
Noël, Jean-Philippe; Renson, L.; Grappasonni, C.; Kerschen, G.
2016-06-01
The objective of the present paper is to develop a two-step methodology integrating system identification and numerical continuation for the experimental extraction of nonlinear normal modes (NNMs) under broadband forcing. The first step processes acquired input and output data to derive an experimental state-space model of the structure. The second step converts this state-space model into a model in modal space from which NNMs are computed using shooting and pseudo-arclength continuation. The method is demonstrated using noisy synthetic data simulated on a cantilever beam with a hardening-softening nonlinearity at its free end.
A two-step mechanism for stem cell activation during hair regeneration.
Greco, Valentina; Chen, Ting; Rendl, Michael; Schober, Markus; Pasolli, H Amalia; Stokes, Nicole; Dela Cruz-Racelis, June; Fuchs, Elaine
2009-02-06
Hair follicles (HFs) undergo cyclic bouts of degeneration, rest, and regeneration. During rest (telogen), the hair germ (HG) appears as a small cell cluster between the slow-cycling bulge and dermal papilla (DP). Here we show that HG cells are derived from bulge stem cells (SCs) but become responsive quicker to DP-promoting signals. In vitro, HG cells also proliferate sooner but display shorter-lived potential than bulge cells. Molecularly, they more closely resemble activated bulge rather than transit-amplifying (matrix) cells. Transcriptional profiling reveals precocious activity of both HG and DP in late telogen, accompanied by Wnt signaling in HG and elevated FGFs and BMP inhibitors in DP. FGFs and BMP inhibitors participate with Wnts in exerting selective and potent stimuli to the HG both in vivo and in vitro. Our findings suggest a model where HG cells fuel initial steps in hair regeneration, while the bulge is the engine maintaining the process.
Dynamical systems, attractors, and neural circuits.
Miller, Paul
2016-01-01
Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic-they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions.
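As a minimal example in the spirit of the two- or three-cell circuits mentioned above, the sketch below implements two mutually inhibitory firing-rate units forming a bistable (winner-take-all) attractor; parameters are invented.

```python
import numpy as np

# Two mutually inhibitory firing-rate units: a classic bistable attractor.
# Which unit "wins" depends only on the initial condition.
def f(x):                                    # sigmoidal rate function
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))

def run(r0, steps=2000, dt=0.01, tau=0.1, w_inh=2.0, drive=1.0):
    r = np.array(r0, dtype=float)
    for _ in range(steps):
        inp = drive - w_inh * r[::-1]        # each unit inhibited by the other
        r += dt / tau * (-r + f(inp))        # firing-rate dynamics
    return r

print("start (0.9, 0.1) ->", run([0.9, 0.1]).round(2))   # unit 1 wins
print("start (0.1, 0.9) ->", run([0.1, 0.9]).round(2))   # unit 2 wins
```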
Computed inverse MRI for magnetic susceptibility map reconstruction
Chen, Zikuan; Calhoun, Vince
2015-01-01
Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from the MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
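The forward model and the simplest of the three inverse solvers can be sketched in a few lines: the fieldmap is the susceptibility map filtered by the unit dipole kernel in k-space, and a Tikhonov-regularized inverse recovers the source. The TV iteration preferred in the paper is longer to code; the grid size, kernel handling at k = 0 and the regularization weight below are our choices.

```python
import numpy as np

# Forward dipole convolution and Tikhonov-regularized inversion, both
# diagonal in k-space. Toy cubic source; no noise added.
n = 32
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
with np.errstate(divide="ignore", invalid="ignore"):
    D = 1.0 / 3.0 - kz**2 / k2            # unit dipole kernel in k-space
D[0, 0, 0] = 0.0                          # fix the undefined DC term

chi = np.zeros((n, n, n)); chi[12:20, 12:20, 12:20] = 1.0   # toy source
field = np.fft.ifftn(D * np.fft.fftn(chi)).real             # forward model

lam = 1e-2                                                  # Tikhonov weight
chi_hat = np.fft.ifftn(np.conj(D) * np.fft.fftn(field)
                       / (np.abs(D)**2 + lam)).real
corr = np.corrcoef(chi.ravel(), chi_hat.ravel())[0, 1]
print(f"spatial correlation with truth: {corr:.3f}")
```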
Thoury-Monbrun, Valentin; Gaucel, Sébastien; Rouessac, Vincent; Guillard, Valérie; Angellier-Coussy, Hélène
2018-06-15
This study aims at assessing the use of a quartz crystal microbalance (QCM) coupled with an adsorption system to measure water vapor transfer properties in micrometric-size cellulose particles. This apparatus successfully measures water vapor sorption kinetics at successive relative humidity (RH) steps on a dispersion of individual micrometric-size cellulose particles (1 μg) with a total acquisition duration of the order of one hour. Apparent diffusivity and water uptake at equilibrium were estimated at each RH step by considering two different particle geometries in the mass transfer modeling, i.e., sphere or finite cylinder, based on the results obtained from image analysis. Water vapor diffusivity values varied from 2.4 × 10^-14 m^2 s^-1 to 4.2 × 10^-12 m^2 s^-1 over the tested RH range (0-80%) whatever the model used. A finite-cylinder or spherical geometry could be used equally for diffusivity identification for a particle size aspect ratio lower than 2. Copyright © 2018 Elsevier Ltd. All rights reserved.
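Under the spherical-particle assumption, the identification at one RH step can be sketched with the classical analytic uptake series for a sphere fitted to the sorption kinetics; the radius, time grid and diffusivity below are invented, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fractional water uptake of a sphere, M(t)/M_inf = 1 - (6/pi^2) * sum_n
# exp(-n^2 pi^2 D t / R^2) / n^2, fitted to one RH step's sorption kinetics.
R = 5e-6                                      # particle radius, m (illustrative)
def uptake(t, D, n_terms=50):
    n = np.arange(1, n_terms + 1)[:, None]
    s = np.sum(np.exp(-n**2 * np.pi**2 * D * t / R**2) / n**2, axis=0)
    return 1.0 - (6.0 / np.pi**2) * s

t = np.linspace(1, 600, 60)                   # seconds within one RH step
data = uptake(t, 5e-14) + np.random.default_rng(9).normal(0, 0.01, t.size)

popt, _ = curve_fit(uptake, t, data, p0=[1e-13], bounds=(1e-16, 1e-10))
print(f"apparent diffusivity: {popt[0]:.2e} m^2/s")   # should be near 5e-14
```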
2017-01-01
Two novel routes for the production of gasoline from pyrolysis oil (from timber pine) and biogas (from ley grass) are simulated, followed by a cradle-to-gate life-cycle assessment of the two production routes. The main aim of this work is to conduct a holistic evaluation of the proposed routes and benchmark them against the conventional route of producing gasoline from natural gas. A previously commercialized method of synthesizing gasoline involves conversion of natural gas to syngas, which is further converted to methanol; as a last step, the methanol is converted to gasoline. In the newly proposed routes, the syngas production step is different; syngas is produced from a mixture of pyrolysis oil and biogas in the following two ways: (i) autothermal reforming of pyrolysis oil and biogas, in which there are two reactions in one reactor (ATR), and (ii) steam reforming of pyrolysis oil and catalytic partial oxidation of biogas, in which there are separate but thermally coupled reactions and reactors (CR). The other two steps, producing methanol from syngas and gasoline from methanol, remain the same. The purpose of this simulation is to make an ex-ante comparison of the performance of the new routes against a reference, in terms of energy and sustainability. Thus, at this stage of the simulations, nonrigorous, equilibrium-based models have been used for the reactors, which give the best-case conversions for each step. For the conventional production route, conversion and yield data available in the literature have been used. The results of the process design showed that the second method (separate but thermally coupled reforming) has a carbon efficiency of 0.53, compared to the conventional route (0.48) and the first route (0.40). The life-cycle assessment results revealed that the newly proposed processes have a clear advantage over the conventional process in some categories, particularly global warming potential and primary energy demand, but there are also some in which the conventional route fares better, such as human toxicity potential and the categories related to land-use change, such as biotic production potential and the groundwater resistance indicator. The results confirmed that even though using biomass such as timber pine as a raw material does result in reduced greenhouse gas emissions, the activities associated with biomass, such as cultivation and harvesting, contribute to the environmental footprint, particularly in the land-use change categories. This gives an impetus to investigate the potential of agricultural, forest, or even food waste, which would be likely to have a substantially lower impact on the environment. Moreover, it could be seen that the source of electricity used in the process has a major impact on the environmental performance. PMID:28405056
Steps in the bacterial flagellar motor.
Mora, Thierry; Yu, Howard; Sowa, Yoshiyuki; Wingreen, Ned S
2009-10-01
The bacterial flagellar motor is a highly efficient rotary machine used by many bacteria to propel themselves. It has recently been shown that at low speeds its rotation proceeds in steps. Here we propose a simple physical model, based on the storage of energy in protein springs, that accounts for this stepping behavior as a random walk in a tilted corrugated potential that combines torque and contact forces. We argue that the absolute angular position of the rotor is crucial for understanding step properties and show this hypothesis to be consistent with the available data, in particular the observation that backward steps are smaller on average than forward steps. We also predict a sublinear speed versus torque relationship for fixed load at low torque, and a peak in rotor diffusion as a function of torque. Our model provides a comprehensive framework for understanding and analyzing stepping behavior in the bacterial flagellar motor and proposes novel, testable predictions. More broadly, the storage of energy in protein springs by the flagellar motor may provide useful general insights into the design of highly efficient molecular machines.
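A toy version of the proposed picture is sketched below: overdamped Langevin dynamics of the rotor coordinate in a tilted corrugated potential V(x) = −τx + A·cos(Nx) produces predominantly forward steps with occasional, rarer backward steps; all parameters are in arbitrary units chosen only to exhibit the behavior.

```python
import numpy as np

# Overdamped Langevin dynamics in a tilted corrugated potential
# V(x) = -torque*x + A*cos(N*x); wells are the motor's discrete steps.
rng = np.random.default_rng(10)
N, torque, A, kT, gamma = 26, 0.6, 1.5, 1.0, 1.0   # 26 periods per revolution
dt, nsteps = 1e-3, 200_000
x = 0.0; traj = np.empty(nsteps)

for i in range(nsteps):
    force = torque + A * N * np.sin(N * x)          # force = -dV/dx
    x += dt * force / gamma + np.sqrt(2 * kT * dt / gamma) * rng.normal()
    traj[i] = x

# count well-to-well transitions (each well is 2*pi/N wide)
wells = np.round(traj / (2 * np.pi / N)).astype(int)
jumps = np.diff(wells); jumps = jumps[jumps != 0]
print("forward steps:", (jumps > 0).sum(), "backward steps:", (jumps < 0).sum())
```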
Memari, Sahel; Le Bozec, Serge; Bouisset, Simon
2014-02-21
This research deals with the postural adjustments that occur after the end of voluntary movement ("consecutive postural adjustments": CPAs). The influence of a potentially slippery surface on CPA characteristics was considered, with the aim of exploring more deeply the postural component of the task-movement. Seven male adults were asked to perform a single step, as quickly as possible, to their own footprint marked on the ground. A force plate measured the resultant reaction forces along the antero-posterior axis (R(x)) and the centre of pressure (COP) displacements along the antero-posterior and lateral axes (Xp and Yp). The velocity of the centre of gravity (COG) along the antero-posterior axis and the corresponding impulse (∫R(x)dt) were calculated; the peak velocity (termed "progression velocity": V(xG)) was measured. The required coefficient of friction (RCOF) along the progression axis (pμ(x)) was determined. Two materials, differing by their COF, were laid at foot contact (FC), providing a rough foot contact (RoFC) and a smooth foot contact (SmFC) considered to be potentially slippery. Two step lengths were also performed: a short step (SS) and a long step (LS). Finally, the subjects completed four series of ten steps each, preceded by preliminary trials to allow them to adapt to the experimental conditions. The antero-posterior force time course presented a positive phase, which included APAs ("anticipatory postural adjustments") and step execution (STEP), followed by a negative phase corresponding to CPAs. The backward impulse (CPI) was equal to the forward one (BPI), independently of friction and progression velocity. Moreover, V(xG) did not differ according to friction, but was faster when the step length was greater. Lastly, CPA peak amplitudes (pCPA) were significantly greater and CPA durations (dCPA) shorter for RoFC, and conversely for SmFC, contrary to APAs. Overall, the results show a particular adaptation to the potentially slippery surface (SmFC). They suggest that adherence modulation at foot contact could be one of the rules for controlling COG displacement in single stepping. Consequently, the actual coefficient of friction value might be implemented in the motor programme at a higher level than the parameters specific to the voluntary movement.
Enriching step-based product information models to support product life-cycle activities
NASA Astrophysics Data System (ADS)
Sarigecili, Mehmet Ilteris
The representation and management of product information across the product life cycle requires standardized data exchange protocols. The Standard for the Exchange of Product Model Data (STEP) is such a standard and has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because the models are large and loosely organized. Data exchange specifications (DEXs) and templates provide reorganized information models required in the data exchange of specific activities for various businesses. DEXs show that STEP-based product models can be organized to support different engineering activities at various stages of the product life cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires fast, unambiguous, and complete product information exchange between the members of a supply chain. Tolerance analysis, on the other hand, is used to verify the functional requirements of an assembly considering the worst-case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly, which makes it difficult to interpret the semantics of data for different product life-cycle phases in various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. This thesis presents how the GD&T specifications in STEP can be interpreted for tolerance analysis by utilizing OntoSTEP.
The energy spectra of solar flare electrons
NASA Technical Reports Server (NTRS)
Evenson, P. A.; Hovestadt, D.; Meyer, P.; Moses, D.
1985-01-01
A survey of 50 electron energy spectra from 0.1 to 100 MeV originating from solar flares was made by combining data from two spectrometers onboard the International Sun Earth Explorer-3 spacecraft. The observed spectral shapes of flare events can be divided into two classes according to their fit to an acceleration model. This standard two-step acceleration model, which fits the spectral shape of the first class of flares, involves an impulsive step that accelerates particles up to 100 keV and a second step that further accelerates these particles up to 100 MeV by a single shock. This fit fails for the second class of flares, which can be characterized as having excessively hard spectra above 1 MeV relative to the predictions of the model. Correlations with soft X-ray and meter-wavelength radio observations imply that the acceleration of the high-energy particles in the second class of flares is dominated by the impulsive phase of the flares.
Multigrid solution of compressible turbulent flow on unstructured meshes using a two-equation model
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Martinelli, L.
1994-01-01
The steady-state solution of the system consisting of the full Navier-Stokes equations and two turbulence equations has been obtained using a multigrid strategy on unstructured meshes. The flow equations and turbulence equations are solved in a loosely coupled manner. The flow equations are advanced in time using a multistage Runge-Kutta time-stepping scheme with a stability-bound local time step, while the turbulence equations are advanced with a point-implicit scheme using a time step that guarantees stability and positivity. Low-Reynolds-number modifications to the original two-equation model are incorporated in a manner that results in well-behaved equations for arbitrarily small wall distances. A variety of aerodynamic flows are solved, initializing all quantities with uniform freestream values. Rapid and uniform convergence rates for the flow and turbulence equations are observed.
Phase transition solutions in geometrically constrained magnetic domain wall models
NASA Astrophysics Data System (ADS)
Chen, Shouxin; Yang, Yisong
2010-02-01
Recent work on magnetic phase transition in nanoscale systems indicates that new physical phenomena, in particular the Bloch wall width narrowing, arise as a consequence of the geometrical confinement of magnetization, leading to the introduction of geometrically constrained domain wall models. In this paper, we present a systematic mathematical analysis on the existence of the solutions of the basic governing equations in such domain wall models. We show that, when the cross section of the geometric constriction is a simple step function, the solutions may be obtained by minimizing the domain wall energy over the constriction and solving the Bogomol'nyi equation outside the constriction. When the cross section and potential density are both even, we establish the existence of an odd domain wall solution realizing the phase transition process between two adjacent domain phases. When the cross section satisfies a certain integrability condition, we prove that a domain wall solution always exists which links two arbitrarily designated domain phases.
Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N
2017-01-25
This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.
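A minimal sketch of the kinetic-deconvolution idea follows: the overall rate is written as a weighted sum of two single-step rate equations, each with its own Arrhenius parameters and conversion function. All numerical values are illustrative placeholders, not the fitted parameters from the study.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def step_rate(alpha, T, A, Ea, n):
    # One reaction step: Arrhenius factor times a reaction model f(alpha) = (1-alpha)^n
    return A * np.exp(-Ea / (R * T)) * max(0.0, 1.0 - alpha) ** n

# Kinetic-deconvolution sketch: the overall signal is modelled as the weighted
# sum of an exothermic decomposition step (contribution c1) and an endothermic
# evaporation/decomposition step (1 - c1), integrated under constant heating.
c1, beta, T0, dt = 0.6, 5.0 / 60.0, 400.0, 0.5   # weight, heating rate (K/s), K, s
a1 = a2 = 0.0
overall = []
for i in range(20000):
    T = T0 + beta * i * dt
    r1 = step_rate(a1, T, A=1e12, Ea=130e3, n=1.5)   # assumed parameters
    r2 = step_rate(a2, T, A=1e9, Ea=110e3, n=1.0)    # assumed parameters
    a1 = min(1.0, a1 + r1 * dt)
    a2 = min(1.0, a2 + r2 * dt)
    overall.append(c1 * r1 + (1 - c1) * r2)          # deconvolutable overall rate
```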
Photopolymerization Of Levitated Droplets
NASA Technical Reports Server (NTRS)
Rembaum, Alan; Rhim, Won-Kyu; Hyson, Michael T.; Chang, Manchium
1989-01-01
Experimental containerless process combines two established techniques to make variety of polymeric microspheres. In single step, electrostatically-levitated monomer droplets polymerized by ultraviolet light. Faster than multiple-step emulsion polymerization process used to make microspheres. Droplets suspended in cylindrical quadrupole electrostatic levitator. Alternating electrostatic field produces dynamic potential along axis. Process enables tailoring of microspheres for medical, scientific, and industrial applications.
Meeting Wise: Making the Most of Collaborative Time for Educators
ERIC Educational Resources Information Center
Boudett, Kathryn Parker; City, Elizabeth A.
2014-01-01
This book, by two editors of "Data Wise: A Step-by-Step Guide to Using Assessment Results to Improve Teaching and Learning," attempts to bring about a fundamental shift in how educators think about the meetings we attend. They make the case that these gatherings are potentially the most important venue where adult and organizational…
NASA Astrophysics Data System (ADS)
Leier, André; Marquez-Lago, Tatiana T.; Burrage, Kevin
2008-05-01
The delay stochastic simulation algorithm (DSSA) by Barrio et al. [PLoS Comput. Biol. 2, e117 (2006)] was developed to simulate delayed processes in cell biology in the presence of intrinsic noise, that is, when there are small-to-moderate numbers of certain key molecules present in a chemical reaction system. These delayed processes can faithfully represent complex interactions and mechanisms that imply a number of spatiotemporal processes often not explicitly modeled, such as transcription and translation, which are basic in the modeling of cell signaling pathways. However, for systems with widely varying reaction rate constants or large numbers of molecules, the simulation time steps of both the stochastic simulation algorithm (SSA) and the DSSA can become very small, causing considerable computational overhead. In order to overcome the limit of small step sizes, various τ-leap strategies have been suggested for improving the computational performance of the SSA. In this paper, we present a binomial τ-DSSA method that extends the τ-leap idea to the delay setting and avoids drawing insufficient numbers of reactions, a common shortcoming of existing binomial τ-leap methods that becomes evident when dealing with complex chemical interactions. The resulting inaccuracies are most evident in the delayed case, even when considering reaction products as potential reactants within the same time step in which they are produced. Moreover, we extend the framework to account for multicellular systems with different degrees of intercellular communication. We apply these ideas to two important genetic regulatory models, namely, the hes1 gene, implicated as a molecular clock, and a Her1/Her7 model for coupled oscillating cells.
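The following sketch illustrates the binomial leaping idea for a single delayed reaction, assuming a first-order channel X → (delayed) → Y; it is not the published τ-DSSA algorithm, which handles arbitrary reaction networks. Sampling the number of firings from a binomial distribution caps it at the number of reactant molecules actually available, the failure mode of naive Poisson leaping noted above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Binomial tau-leap sketch for one delayed reaction X -> (delay) -> Y with
# per-molecule rate c. All parameter values are illustrative.
X, Y = 1000, 0
c, tau, delay = 0.1, 0.5, 5.0
pending = []                                # completion times of delayed events
t = 0.0
while t < 50.0:
    p = 1.0 - np.exp(-c * tau)              # per-molecule firing probability in tau
    k = rng.binomial(X, p)                  # at most X reactions can fire
    X -= k
    pending.extend([t + delay] * k)
    # release products whose delay elapses within this leap
    Y += sum(1 for s in pending if s <= t + tau)
    pending = [s for s in pending if s > t + tau]
    t += tau
```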
Study of CdTe quantum dots grown using a two-step annealing method
NASA Astrophysics Data System (ADS)
Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.
2006-02-01
High size dispersion, large average quantum-dot radius, and low volume ratio have been major hurdles in the development of quantum-dot-based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra show that quantum dots grown using two-step annealing have a lower average radius, lower size dispersion, higher volume ratio, and a larger decrease in bulk free energy compared with quantum dots grown conventionally.
A computer model for predicting grapevine cold hardiness
USDA-ARS?s Scientific Manuscript database
We developed a robust computer model of grapevine bud cold hardiness that will aid in the anticipation of and response to potential injury from fluctuations in winter temperature and from extreme cold events. The model uses time steps of 1 day along with the measured daily mean air temperature to ca...
Modeling study on the cleavage step of the self-splicing reaction in group I introns
NASA Technical Reports Server (NTRS)
Setlik, R. F.; Garduno-Juarez, R.; Manchester, J. I.; Shibata, M.; Ornstein, R. L.; Rein, R.
1993-01-01
A three-dimensional model of the Tetrahymena thermophila group I intron is used to further explore the catalytic mechanism of the transphosphorylation reaction of the cleavage step. Based on the coordinates of the catalytic core model proposed by Michel and Westhof (Michel, F., Westhof, E. J. Mol. Biol. 216, 585-610 (1990)), we first converted their ligation step model into a model of the cleavage step by the substitution of several bases and the removal of helix P9. Next, an attempt to place a trigonal bipyramidal transition state model in the active site revealed that this modified model for the cleavage step could not accommodate the transition state due to insufficient space. A lowering of P1 helix relative to surrounding helices provided the additional space required. Simultaneously, it provided a better starting geometry to model the molecular contacts proposed by Pyle et al. (Pyle, A. M., Murphy, F. L., Cech, T. R. Nature 358, 123-128. (1992)), based on mutational studies involving the J8/7 segment. Two hydrated Mg2+ complexes were placed in the active site of the ribozyme model, using the crystal structure of the functionally similar Klenow fragment (Beese, L.S., Steitz, T.A. EMBO J. 10, 25-33 (1991)) as a guide. The presence of two metal ions in the active site of the intron differs from previous models, which incorporate one metal ion in the catalytic site to fulfill the postulated roles of Mg2+ in catalysis. The reaction profile is simulated based on a trigonal bipyramidal transition state, and the role of the hydrated Mg2+ complexes in catalysis is further explored using molecular orbital calculations.
Very large scale monoclonal antibody purification: the case for conventional unit operations.
Kelley, Brian
2007-01-01
Technology development initiatives targeted for monoclonal antibody purification may be motivated by manufacturing limitations and are often aimed at solving current and future process bottlenecks. A subject under debate in many biotechnology companies is whether conventional unit operations such as chromatography will eventually become limiting for the production of recombinant protein therapeutics. An evaluation of the potential limitations of process chromatography and filtration using today's commercially available resins and membranes was conducted for a conceptual process scaled to produce 10 tons of monoclonal antibody per year from a single manufacturing plant, a scale representing one of the world's largest single-plant capacities for cGMP protein production. The process employs a simple, efficient purification train using only two chromatographic and two ultrafiltration steps, modeled after a platform antibody purification train that has generated 10 kg batches in clinical production. Based on analyses of cost of goods and the production capacity of this very large scale purification process, it is unlikely that non-conventional downstream unit operations would be needed to replace conventional chromatographic and filtration separation steps, at least for recombinant antibodies.
Analysis of two-equation turbulence models for recirculating flows
NASA Technical Reports Server (NTRS)
Thangam, S.
1991-01-01
The two-equation kappa-epsilon model is used to analyze turbulent separated flow past a backward-facing step. It is shown that if the model constants are modified to be consistent with the accepted energy decay rate for isotropic turbulence, the dominant features of the flow field, namely the size of the separation bubble and the streamwise component of the mean velocity, can be accurately predicted. In addition, except in the vicinity of the step, very good predictions for the turbulent shear stress, the wall pressure, and the wall shear stress are obtained. The model is also shown to provide good predictions for the turbulence intensity in the region downstream of the reattachment point. Estimated long-time growth rates for the turbulent kinetic energy and dissipation rate of homogeneous shear flow are utilized to develop an optimal set of constants for the two-equation kappa-epsilon model. The physical implications of the model's performance are also discussed.
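The link between the model constants and the isotropic decay rate mentioned above can be made explicit. For decaying isotropic turbulence (no production), the two-equation model reduces to a pair of ODEs whose power-law solution ties the constant Cε2 to the measured decay exponent n; this is a standard textbook relation, sketched here to show the kind of constraint used when adjusting the constants:

```latex
% Decaying isotropic turbulence in the k-epsilon model (no production):
\begin{align}
  \frac{dk}{dt} &= -\varepsilon, &
  \frac{d\varepsilon}{dt} &= -C_{\varepsilon 2}\,\frac{\varepsilon^{2}}{k},\\
  k(t) &\propto t^{-n} \quad\Longrightarrow\quad
  C_{\varepsilon 2} = \frac{n+1}{n}.
\end{align}
```

With experimental decay exponents n ≈ 1.1 to 1.3, this constraint gives Cε2 ≈ 1.8 to 1.9, close to the commonly used value of 1.92.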
Peer Power. Book 2, Applying Peer Helper Skills. Second Edition.
ERIC Educational Resources Information Center
Tindall, Judith A.
A step-by-step model for training peer counselors forms the basis of the trainer's manual and accompanying exercises for trainees which are organized into two books for effective skill building. Designed for peer counseling trainees, this document presents the second of these two exercise books. The book begins with a brief introduction to…
Assessment of turbulent models for scramjet flowfields
NASA Technical Reports Server (NTRS)
Sindir, M. M.; Harsha, P. T.
1982-01-01
The behavior of several turbulence models applied to the prediction of scramjet combustor flows is described. These models include the basic two-equation model, the multiple dissipation length scale variant of the two-equation model, and the algebraic stress model (ASM). Predictions were made of planar backward-facing step flows and axisymmetric sudden expansion flows using each of these approaches. The formulation of each of these models is discussed, and the application of the different approaches to supersonic flows is described. A modified version of the ASM is found to provide the best prediction of the planar backward-facing step flow in the region near the recirculation zone, while the basic ASM provides the best results downstream of the recirculation. Aspects of the interaction of numerical modeling and turbulence modeling, as they affect the assessment of turbulence models, are discussed.
Dynamic Predictive Model for Growth of Bacillus cereus from Spores in Cooked Beans.
Juneja, Vijay K; Mishra, Abhinav; Pradhan, Abani K
2018-02-01
Kinetic growth data for Bacillus cereus grown from spores were collected in cooked beans under several isothermal conditions (10 to 49°C). Samples were inoculated with approximately 2 log CFU/g heat-shocked (80°C for 10 min) spores and stored at isothermal temperatures. B. cereus populations were determined at appropriate intervals by plating on mannitol-egg yolk-polymyxin agar and incubating at 30°C for 24 h. Data were fitted to the Baranyi, Huang, modified Gompertz, and three-phase linear primary growth models. All four models were fitted to the experimental growth data collected at 13 to 46°C. The performance of these models was evaluated based on accuracy and bias factors, the coefficient of determination (R²), and the root mean square error. Based on these criteria, the Baranyi model best described the growth data, followed by the Huang, modified Gompertz, and three-phase linear models. The maximum growth rates of each primary model were fitted as a function of temperature using the modified Ratkowsky model. The high R² values (0.95 to 0.98) indicate that the modified Ratkowsky model can be used to describe the effect of temperature on the growth rates for all four primary models. The acceptable prediction zone (APZ) approach was also used to validate the model against data collected during single-step and two-step dynamic cooling temperature protocols. When the predictions of the Baranyi model were compared with the observed data using the APZ analysis, all 24 observations for the exponential single-rate cooling were within the APZ, which was set between -0.5 and 1 log CFU/g; 26 of 28 predictions for the two-step cooling profiles were also within the APZ limits. The developed dynamic model can be used to predict potential B. cereus growth from spores in beans under various temperature conditions or during extended chilling of cooked beans.
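As an illustration of the secondary-model fit described above, the sketch below fits a full-range (modified) Ratkowsky curve to growth-rate data. The functional form is the standard one; the data points and starting values are invented for the example and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Modified (full-range) Ratkowsky secondary model:
#   sqrt(mu_max) = b*(T - Tmin)*(1 - exp(c*(T - Tmax)))
def ratkowsky(T, b, Tmin, c, Tmax):
    return (b * (T - Tmin) * (1.0 - np.exp(c * (T - Tmax)))) ** 2

T_obs  = np.array([13, 19, 25, 31, 37, 43, 46], dtype=float)    # deg C
mu_obs = np.array([0.05, 0.18, 0.42, 0.75, 1.05, 0.95, 0.40])   # 1/h, illustrative

popt, _ = curve_fit(ratkowsky, T_obs, mu_obs,
                    p0=[0.03, 5.0, 0.3, 49.0], maxfev=10000)
print("b, Tmin, c, Tmax =", popt)
```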
Satellite Power Systems (SPS). LSST systems and integration task for SPS flight test article
NASA Technical Reports Server (NTRS)
Greenberg, H. S.
1981-01-01
This research activity emphasizes the systems definition and resulting structural requirements for the primary structure of two potential SPS large space structure test articles. These test articles represent potential steps in the SPS research and technology development.
Towards numerical prediction of cavitation erosion.
Fivel, Marc; Franc, Jean-Pierre; Chandra Roy, Samir
2015-10-06
This paper is intended to provide a potential basis for a numerical prediction of cavitation erosion damage. The proposed method can be divided into two steps. The first step consists in determining the loading conditions due to cavitation bubble collapses. It is shown that individual pits observed on highly polished metallic samples exposed to cavitation for a relatively small time can be considered as the signature of bubble collapse. By combining pitting tests with an inverse finite-element modelling (FEM) of the material response to a representative impact load, loading conditions can be derived for each individual bubble collapse in terms of stress amplitude (in gigapascals) and radial extent (in micrometres). This step requires characterizing as accurately as possible the properties of the material exposed to cavitation. This characterization should include the effect of strain rate, which is known to be high in cavitation erosion (typically of the order of several thousand s⁻¹). Nanoindentation techniques as well as compressive tests at high strain rate using, for example, a split Hopkinson pressure bar test system may be used. The second step consists in developing an FEM approach to simulate the material response to the repetitive impact loads determined in step 1. This includes a detailed analysis of the hardening process (isotropic versus kinematic) in order to properly account for fatigue as well as the development of a suitable model of material damage and failure to account for mass loss. Although the whole method is not yet fully operational, promising results are presented that show that such a numerical method might be, in the long term, an alternative to correlative techniques used so far for cavitation erosion prediction.
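The second step lends itself to a deliberately crude illustration. The sketch below replaces the full elastoplastic FEM with a Miner-rule-style damage accumulation over a random sequence of impact amplitudes; the load distribution, strength values, and cycles-to-failure law are all assumed, and the point is only the structure of step 2 (repeated impacts driving fatigue damage toward mass loss):

```python
import numpy as np

rng = np.random.default_rng(5)

# Grossly simplified stand-in for step 2: impacts above yield each contribute
# a fatigue increment via a power-law life curve, summed with Miner's rule.
sigma_y, sigma_u = 0.4, 1.2                # yield / ultimate stress (GPa), assumed
loads = rng.weibull(2.0, 100000) * 0.3     # impact amplitudes (GPa) from "step 1"

damage, n_impacts = 0.0, 0
for s in loads:
    if s > sigma_y:
        Nf = 1e3 * ((sigma_u - sigma_y) / (s - sigma_y)) ** 2  # cycles to failure, assumed
        damage += 1.0 / Nf
    n_impacts += 1
    if damage >= 1.0:                      # onset of mass loss in this toy model
        break
print(f"accumulated fatigue damage after {n_impacts} impacts: {damage:.3f}")
```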
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holloway, L.J.; Andrae, R.W.
1981-09-01
This report describes the results of a parametric study of the impacts of a tornado-generated depressurization on airflow in the contaminated process cells within the presently inoperative Nuclear Fuel Services fuel reprocessing facility near West Valley, NY. The study involved the following tasks: (1) mathematical modeling of installed ventilation and abnormal exhaust pathways from the cells and prediction of tornado-induced airflows in these pathways; (2) mathematical modeling of individual cell flow characteristics and prediction of in-cell velocities induced by flows from step 1; and (3) evaluation of the results of steps 1 and 2 to determine whether any of the pathways investigated have the potential for releasing quantities of radioactively contaminated air from the main process cells. The study concluded that in the event of a tornado strike, certain pathways from the cells have the potential to release radioactive materials to the atmosphere. Determination of the quantities of radioactive material released from the cells through the pathways identified in step 3 is presented in Part II of this report.
NASA Astrophysics Data System (ADS)
Nevitt, Johanna M.; Pollard, David D.; Warren, Jessica M.
2014-03-01
Rock deformation often is investigated using kinematic and/or mechanical models. Here we provide a direct comparison of these modeling techniques in the context of a deformed dike within a meter-scale contractional fault step. The kinematic models consider two possible shear plane orientations and various modes of deformation (simple shear, transtension, transpression), while the mechanical model uses the finite element method and assumes elastoplastic constitutive behavior. The results for the kinematic and mechanical models are directly compared using the modeled maximum and minimum principal stretches. The kinematic analysis indicates that the contractional step may be classified as either transtensional or transpressional depending on the modeled shear plane orientation, suggesting that these terms may be inappropriate descriptors of step-related deformation. While the kinematic models do an acceptable job of depicting the change in dike shape and orientation, they are restricted to a prescribed homogeneous deformation. In contrast, the mechanical model allows for heterogeneous deformation within the step to accurately represent the deformation. The ability to characterize heterogeneous deformation and include fault slip - not as a prescription, but as a solution to the governing equations of motion - represents a significant advantage of the mechanical model over the kinematic models.
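For the kinematic side of such a comparison, the principal stretches follow directly from a prescribed deformation gradient. The sketch below builds a 2-D simple-shear gradient with an optional wall-normal component (transpression or transtension) and extracts the principal stretches as its singular values; the strain values are illustrative, not those of the studied fault step.

```python
import numpy as np

# Kinematic sketch: deformation gradient for simple shear plus an optional
# wall-normal strain (beta < 0: transpression, beta > 0: transtension).
gamma, beta = 1.5, -0.2                  # shear strain and normal strain, illustrative
F = np.array([[1.0, gamma],
              [0.0, 1.0 + beta]])
stretches = np.linalg.svd(F, compute_uv=False)   # principal stretches of F
print("max/min principal stretch:", stretches[0], stretches[1])
```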
Statistical models for detecting differential chromatin interactions mediated by a protein.
Niu, Liang; Li, Guoliang; Lin, Shili
2014-01-01
Chromatin interactions mediated by a protein of interest are of great scientific interest. Recent studies show that protein-mediated chromatin interactions can have different intensities in different types of cells or in different developmental stages of a cell. Such differences can be associated with a disease or with the development of a cell. Thus, it is of great importance to detect protein-mediated chromatin interactions with different intensities in different cells. A recent molecular technique, Chromatin Interaction Analysis by Paired-End Tag Sequencing (ChIA-PET), which uses formaldehyde cross-linking and paired-end sequencing, is able to detect genome-wide chromatin interactions mediated by a protein of interest. Here we proposed two models (One-Step Model and Two-Step Model) for two sample ChIA-PET count data (one biological replicate in each sample) to identify differential chromatin interactions mediated by a protein of interest. Both models incorporate the data dependency and the extent to which a fragment pair is related to a pair of DNA loci of interest to make accurate identifications. The One-Step Model makes use of the data more efficiently but is more computationally intensive. An extensive simulation study showed that the models can detect those differentially interacted chromatins and there is a good agreement between each classification result and the truth. Application of the method to a two-sample ChIA-PET data set illustrates its utility. The two models are implemented as an R package MDM (available at http://www.stat.osu.edu/~statgen/SOFTWARE/MDM).
DNA strand displacement system running logic programs.
Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr
2014-01-01
The paper presents a DNA-based computing model that is enzyme-free and autonomous, requiring no human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded by assigning a strand to each proposition p and its complementary strand to the proposition ¬p; clauses are encoded by combining different propositions in the same strand. The model can run logic programs composed of Horn clauses by cascading resolution steps. The potential of the model is also demonstrated by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula.
Anderson, Daniel M; Benson, James D; Kearsley, Anthony J
2014-12-01
Mathematical modeling plays an enormously important role in understanding the behavior of cells, tissues, and organs undergoing cryopreservation. Uses of these models range from the explanation of phenomena and the exploration of potential theories of damage or success to the development of equipment and the refinement of optimal cryopreservation/cryoablation strategies. Over the last half century there has been a considerable amount of work in bio-heat and mass transport, and these models and theories have been readily and repeatedly applied to cryobiology with much success. However, there are significant gaps between experimental and theoretical results that suggest missing links in the models. One potential source for these gaps is that cryobiology sits at the intersection of several very challenging aspects of transport theory: it couples multi-component, moving-boundary, multiphase solutions that interact through a semipermeable elastic membrane with multicomponent solutions in a second time-varying domain, during a two-hundred-kelvin temperature change with multi-molar concentration gradients and multi-atmosphere pressure changes. In order to better identify potential sources of error, and to point to future directions in modeling and experimental research, we present a three-part series that builds from first principles a theory of coupled heat and mass transport in cryobiological systems accounting for all of these effects. The hope of this series is that by presenting and justifying all steps, conclusions may be drawn about the importance of key assumptions, perhaps pointing to areas of future research or model development, but importantly, lending weight to standard simplification arguments that are often made in heat and mass transport. In this first part, we review concentration variable relationships, their impact on choices for Gibbs energy models, and their impact on chemical potentials.
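A sketch of the kind of relationships reviewed in this first part, with standard notation assumed (molality mᵢ, mole fraction xᵢ, activity coefficient γᵢ, and m_w the molality of pure water):

```latex
% Molality-to-mole-fraction conversion and the chemical potential of solute i
% in a non-ideal multicomponent solution (m_w = 55.508 mol/kg for water):
\begin{align}
  x_i &= \frac{m_i}{m_w + \sum_j m_j}, \\
  \mu_i &= \mu_i^{\circ}(T,p) + RT\,\ln\!\left(\gamma_i\, x_i\right).
\end{align}
```

Equality of the water chemical potential across the semipermeable membrane is what couples the two solution domains in the transport theory outlined above.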
Two-step photon up-conversion solar cells
Asahi, Shigeo; Teranishi, Haruyuki; Kusaki, Kazuki; Kaizu, Toshiyuki; Kita, Takashi
2017-01-01
Reducing the transmission loss for below-gap photons is a straightforward way to break the limit of the energy-conversion efficiency of solar cells (SCs). The up-conversion of below-gap photons is very promising for generating additional photocurrent. Here we propose a two-step photon up-conversion SC with a hetero-interface comprising different bandgaps of Al0.3Ga0.7As and GaAs. The below-gap photons for Al0.3Ga0.7As excite GaAs and generate electrons at the hetero-interface. The accumulated electrons at the hetero-interface are pumped upwards into the Al0.3Ga0.7As barrier by below-gap photons for GaAs. Efficient two-step photon up-conversion is achieved by introducing InAs quantum dots at the hetero-interface. We observe not only a dramatic increase in the additional photocurrent, which exceeds the reported values by approximately two orders of magnitude, but also an increase in the photovoltage. These results suggest that the two-step photon up-conversion SC has a high potential for implementation in the next-generation high-efficiency SCs.
Unmasking the masked Universe: the 2M++ catalogue through Bayesian eyes
NASA Astrophysics Data System (ADS)
Lavaux, Guilhem; Jasche, Jens
2016-01-01
This work describes a full Bayesian analysis of the Nearby Universe as traced by galaxies of the 2M++ survey. The analysis is run in two sequential steps. The first step self-consistently derives the luminosity-dependent galaxy biases, the power spectrum of matter fluctuations, and the matter density fields within a Gaussian statistics approximation. The second step performs a detailed analysis of the three-dimensional large-scale structures, assuming a fixed bias model and a fixed cosmology. This second step allows for the reconstruction of both the final density field and the initial conditions at z = 1000 assuming a fixed bias model. From these, we derive fields that self-consistently extrapolate the observed large-scale structures. We give two examples of these extrapolations and their utility for the detection of structures: the visibility of the Sloan Great Wall, and the detection and characterization of the Local Void using DIVA, a Lagrangian-based technique to classify structures.
NASA Astrophysics Data System (ADS)
Faure, Bastien
The neutronic calculation of a reactor core is usually done in two steps. After solving the neutron transport equation over an elementary domain of the core, a set of parameters, namely macroscopic cross sections and possibly diffusion coefficients, is defined in order to perform a full-core calculation. In the first step, the cell or assembly is calculated using the "fundamental mode theory", the pattern being inserted in an infinite lattice of periodic structures. This simple representation allows precise modeling of the geometry and the energy variable and can be treated within transport theory with minimal approximations. However, it supposes that the reactor core can be treated as a periodic lattice of elementary domains, which is itself a strong assumption, and it cannot, at first sight, account for neutron leakage between different zones and out of the core. Leakage models correct the transport equation with an additional leakage term in order to represent this phenomenon. For historical reasons, numerical methods for solving the transport equation being limited by computer hardware (processor speed and memory size), the leakage term is in most cases modeled by a homogeneous and isotropic probability within a "homogeneous leakage model". Driven by advances in computing, "heterogeneous leakage models" have been developed and implemented in several neutron transport codes. This work studies some of those models, including the TIBERE model from the DRAGON-3 code developed at Ecole Polytechnique de Montreal, as well as the heterogeneous model from the APOLLO-3 code developed at the Commissariat a l'Energie Atomique et aux energies alternatives. Studies based on sodium-cooled fast reactors and light water reactors demonstrate the benefit of those models compared with a homogeneous leakage model. In particular, it is shown that a heterogeneous model has a significant impact on the calculation of the out-of-core leakage rate, which permits a better estimation of the transport-equation eigenvalue Keff. Neutron streaming between two zones of different composition was also shown to be calculated more accurately.
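For orientation, the homogeneous leakage correction discussed above can be sketched in multigroup notation: the infinite-lattice balance is augmented by a uniform, isotropic leakage term DgB²φg, and the buckling B² is searched so that the corrected lattice is exactly critical (keff = 1). This is a schematic form with the usual notation assumed, not the specific equations of TIBERE or APOLLO-3:

```latex
% Fundamental-mode balance with a homogeneous, isotropic leakage term:
\begin{equation}
  \Sigma_{t,g}\,\phi_g + D_g B^2 \phi_g
  = \sum_{g'} \Sigma_{s,\,g' \to g}\,\phi_{g'}
  + \frac{\chi_g}{k_{\mathrm{eff}}} \sum_{g'} \nu\Sigma_{f,g'}\,\phi_{g'}.
\end{equation}
```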
NASA Technical Reports Server (NTRS)
Parkinson, J. B.; House, R. O.
1938-01-01
Tests were made in the NACA tank and in the NACA 7 by 10 foot wind tunnel on two models of transverse step floats and three models of pointed step floats considered to be suitable for use with single float seaplanes. The object of the program was the reduction of water resistance and spray of single float seaplanes without reducing the angle of dead rise believed to be necessary for the satisfactory absorption of the shock loads. The results indicated that all the models have less resistance and spray than the model of the Mark V float and that the pointed step floats are somewhat superior to the transverse step floats in these respects. Models 41-D, 61-A, and 73 were tested by the general method over a wide range of loads and speeds. The results are presented in the form of curves and charts for use in design calculations.
Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
2017-10-01
Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave number (k) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for 〈110〉 and 〈112〉 steps on the {111} surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ± 1) meV/Å for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge-mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
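The stiffness extraction rests on equipartition of the capillary modes, ⟨|A(k)|²⟩ = kBT/(L·β̃·k²). The sketch below inverts synthetic spectra standing in for MD snapshots; the system size, temperature, and mode statistics are illustrative assumptions.

```python
import numpy as np

# Equipartition of step capillary waves: <|A(k)|^2> = kB*T / (L * stiff * k^2),
# so the stiffness follows from the ensemble-averaged Fourier amplitudes.
kB, T, L, n = 8.617e-5, 1300.0, 200.0, 512      # eV/K, K, Angstrom, grid points
stiff_true = 0.037                               # eV/Angstrom (cf. 37 meV/A above)

rng = np.random.default_rng(2)
k = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)[1:]  # nonzero wave numbers (1/Angstrom)

# Synthetic |A(k)|^2 samples: exponential fluctuations about the equipartition
# mean, one row per "snapshot" (in real use, these come from MD step profiles).
snapshots = 200
A2 = kB * T / (L * stiff_true * k**2) * rng.exponential(size=(snapshots, k.size))
A2_mean = A2.mean(axis=0)

stiff_est = np.mean(kB * T / (L * k**2 * A2_mean))
print(f"estimated stiffness ~ {stiff_est*1e3:.0f} meV/Angstrom")
```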
Numerical simulation of the flow field and fuel sprays in an IC engine
NASA Technical Reports Server (NTRS)
Nguyen, H. L.; Schock, H. J.; Ramos, J. I.; Carpenter, M. H.; Stegeman, J. D.
1987-01-01
A two-dimensional model for axisymmetric piston-cylinder configurations is developed to study the flow field in two-stroke direct-injection Diesel engines under motored conditions. The model accounts for turbulence through a two-equation model for the turbulence kinetic energy and its rate of dissipation. A discrete droplet model is used to simulate the fuel spray, and the effects of the gas-phase turbulence on the droplets are considered. It is shown that a fluctuating velocity can be added to the mean droplet velocity every time step if the step is small enough. Good agreement with experimental data is found for a range of ambient pressures in Diesel-engine-type microenvironments. The effects of the intake swirl angle on spray penetration, vaporization, and mixing in a uniflow-scavenged two-stroke Diesel engine are analyzed. It is found that swirl increases the gas-phase turbulence levels and the rates of vaporization.
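A minimal sketch of the stochastic dispersion idea described above, assuming isotropic turbulence (fluctuation variance 2k/3 per velocity component), Stokes-drag droplet response, and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(6)

# Each small time step, the droplet sees the mean gas velocity plus a random
# turbulent fluctuation u' ~ N(0, sqrt(2k/3)), and relaxes toward it by drag.
k_turb = 4.0                             # turbulence kinetic energy (m^2/s^2)
u_mean, dt, tau_d = 10.0, 1e-5, 5e-4     # gas velocity (m/s), step (s), droplet response time (s)

v = 0.0                                  # droplet velocity (m/s)
for _ in range(2000):
    u_seen = u_mean + np.sqrt(2 * k_turb / 3) * rng.standard_normal()
    v += (u_seen - v) / tau_d * dt       # Stokes-drag relaxation toward the gas
```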
Chi, Felicia W; Sterling, Stacy; Campbell, Cynthia I; Weisner, Constance
2013-01-01
This study examines the associations between 12-step participation and outcomes over 7 years among 419 adolescent substance use patients with and without psychiatric comorbidities. Although level of participation decreased over time for both groups, comorbid adolescents participated in 12-step groups at comparable or higher levels across time points. Results from mixed-effects logistic regression models indicated that for both groups, 12-step participation was associated with both alcohol and drug abstinence at follow-ups, increasing the likelihood of either by at least 3 times. Findings highlight the potential benefits of 12-step participation in maintaining long-term recovery for adolescents with and without psychiatric disorders.
NASA Astrophysics Data System (ADS)
Cui, Z.; Welty, C.; Maxwell, R. M.
2011-12-01
Lagrangian, particle-tracking models are commonly used to simulate solute advection and dispersion in aquifers. They are computationally efficient and suffer from much less numerical dispersion than grid-based techniques, especially in heterogeneous and advectively-dominated systems. Although particle-tracking models are capable of simulating geochemical reactions, these reactions are often simplified to first-order decay and/or linear, first-order kinetics. Nitrogen transport and transformation in aquifers involves both biodegradation and higher-order geochemical reactions. In order to take advantage of the particle-tracking approach, we have enhanced an existing particle-tracking code SLIM-FAST, to simulate nitrogen transport and transformation in aquifers. The approach we are taking is a hybrid one: the reactive multispecies transport process is operator split into two steps: (1) the physical movement of the particles including the attachment/detachment to solid surfaces, which is modeled by a Lagrangian random-walk algorithm; and (2) multispecies reactions including biodegradation are modeled by coupling multiple Monod equations with other geochemical reactions. The coupled reaction system is solved by an ordinary differential equation solver. In order to solve the coupled system of equations, after step 1, the particles are converted to grid-based concentrations based on the mass and position of the particles, and after step 2 the newly calculated concentration values are mapped back to particles. The enhanced particle-tracking code is capable of simulating subsurface nitrogen transport and transformation in a three-dimensional domain with variably saturated conditions. Potential application of the enhanced code is to simulate subsurface nitrogen loading to the Chesapeake Bay and its tributaries. Implementation details, verification results of the enhanced code with one-dimensional analytical solutions and other existing numerical models will be presented in addition to a discussion of implementation challenges.
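A one-dimensional sketch of this operator-split scheme follows: step 1 moves particles by advection plus a random-walk dispersion increment, and step 2 integrates a Monod sink on the gridded concentrations with an ODE solver. Everything here (velocities, rate constants, single-species chemistry) is an illustrative simplification of the 3-D, multispecies code described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)

# Operator splitting: (1) Lagrangian transport of particles, (2) Monod
# reaction on grid concentrations, then map back. Parameters are illustrative.
v, D, dt, L, ncell = 1.0, 0.05, 0.1, 10.0, 50
x = rng.uniform(0, 1, size=5000)                 # particle positions (nitrate)

def step_particles(x):
    # advection plus random-walk dispersion increment
    return x + v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(x.size)

def monod_rate(t, c, mu=2.0, K=0.5):
    return -mu * c / (K + c)                     # single-Monod sink term

x = step_particles(x)                            # step 1: physical movement
conc, _ = np.histogram(x, bins=ncell, range=(0, L))
conc = conc.astype(float)
for i in range(ncell):                           # step 2: reactions per cell
    sol = solve_ivp(monod_rate, (0, dt), [conc[i]])
    conc[i] = sol.y[0, -1]
# the updated concentrations would then be mapped back onto particle masses
```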
One-step fabrication of multifunctional micromotors
NASA Astrophysics Data System (ADS)
Gao, Wenlong; Liu, Mei; Liu, Limei; Zhang, Hui; Dong, Bin; Li, Christopher Y.
2015-08-01
Although artificial micromotors have undergone tremendous progress in recent years, their fabrication normally requires complex steps or expensive equipment. In this paper, we report a facile one-step method based on an emulsion solvent evaporation process to fabricate multifunctional micromotors. By simultaneously incorporating various components into an oil-in-water droplet, upon emulsification and solidification, a sphere-shaped, asymmetric, and multifunctional micromotor is formed. Some of the attractive functions of this model micromotor include autonomous movement in high ionic strength solution, remote control, enzymatic disassembly, and sustained release. This one-step, versatile fabrication method can be easily scaled up and therefore may have great potential in mass production of multifunctional micromotors for a wide range of practical applications.
Effective robotic assistive pattern of treadmill training for spinal cord injury in a rat model
Zhao, Bo-Lun; Li, Wen-Tao; Zhou, Xiao-Hua; Wu, Su-Qian; Cao, Hong-Shi; Bao, Zhu-Ren; An, Li-Bin
2018-01-01
The purpose of the present study was to establish an effective robotic assistive stepping pattern for body-weight-supported treadmill training based on a rat spinal cord injury (SCI) model and to assess its effect by comparison with another frequently used assistive stepping pattern. The recorded stepping patterns of both hind limbs of trained intact rats were edited to establish a 30-s playback normal rat stepping pattern (NRSP). Step features (step length, step height, step number, and swing duration), BBB scores, latencies and amplitudes of the transcranial electrical motor-evoked potentials (tceMEPs), and neurofilament 200 (NF200) expression in the spinal cord lesion area during and after 3 weeks of body-weight-supported treadmill training (BWSTT) were compared between rats with spinal contusion receiving NRSP assistance (NRSPA) and those receiving manual assistance (MA). Hind limb stepping performance among rats receiving NRSPA during BWSTT was greater than that among rats receiving MA in terms of longer step length, taller step height, and longer swing duration; a higher BBB score was also observed. The rats in the NRSPA group achieved superior results in the tceMEP assessment and greater NF200 expression in the spinal cord lesion area compared with the rats in the MA group. These findings suggest that NRSPA is a more effective assistive pattern for treadmill training than MA in the rat SCI model, and this approach could serve as a new platform for animal experiments aimed at better understanding the mechanisms of SCI rehabilitation.
Soós, Reka; Whiteman, Andrew D; Wilson, David C; Briciu, Cosmin; Nürnberger, Sofia; Oelz, Barbara; Gunsilius, Ellen; Schwehn, Ekkehard
2017-08-01
This is the second of two papers reporting the results of a major study considering 'operator models' for municipal solid waste management (MSWM) in emerging and developing countries. Part A documents the evidence base, while Part B presents a four-step decision support system for selecting an appropriate operator model in a particular local situation. Step 1 focuses on understanding local problems and framework conditions; Step 2 on formulating and prioritising local objectives; and Step 3 on assessing capacities and conditions, and thus identifying strengths and weaknesses, which underpin selection of the operator model. Step 4A addresses three generic questions, including public versus private operation, inter-municipal co-operation and integration of services. For steps 1-4A, checklists have been developed as decision support tools. Step 4B helps choose locally appropriate models from an evidence-based set of 42 common operator models (coms); decision support tools here are a detailed catalogue of the coms, setting out advantages and disadvantages of each, and a decision-making flowchart. The decision-making process is iterative, repeating steps 2-4 as required. The advantages of a more formal process include avoiding pre-selection of a particular com known to and favoured by one decision maker, and also its assistance in identifying the possible weaknesses and aspects to consider in the selection and design of operator models. To make the best of whichever operator models are selected, key issues which need to be addressed include the capacity of the public authority as 'client', management in general and financial management in particular.
Farazdaghi, Hadi
2011-02-01
Photosynthesis is the origin of oxygenic life on the planet, and its models are the core of all models of plant biology, agriculture, environmental quality and global climate change. A theory is presented here, based on single-process biochemical reactions of Rubisco, recognizing that, in the light, Rubisco activase helps separate Rubisco from the stored ribulose-1,5-bisphosphate (RuBP), activates Rubisco with carbamylation and addition of Mg²⁺, and then produces two products, in two steps: (Step 1) reaction of Rubisco with RuBP produces a Rubisco-enediol complex, which is the carboxylase-oxygenase enzyme (Enco), and (Step 2) Enco captures CO₂ and/or O₂ and produces intermediate products leading to production and release of 3-phosphoglycerate (PGA) and Rubisco. PGA interactively controls (1) the carboxylation-oxygenation, (2) electron transport, and (3) the triose phosphate pathway of the Calvin-Benson cycle that leads to the release of glucose and regeneration of RuBP. Initially, the total enzyme participates in the two steps of the reaction transitionally, and its rate follows Michaelis-Menten kinetics. But for a continuous steady state, Rubisco must be divided into two concurrently active segments for the two steps, which causes the steady state to deviate from the transitional rate. Kinetic models are developed that integrate the transitional and steady-state reactions. They are tested and successfully validated against verifiable experimental data. The single-process theory is compared to the widely used two-process theory of Farquhar et al. (1980, Planta 149, 78-90), which assumes that the carboxylation rate is either Rubisco-limited at low CO₂ levels, such as at the CO₂ compensation point, or RuBP-regeneration-limited at high CO₂. Since the photosynthesis rate cannot increase beyond the two-process theory's Rubisco limit at the CO₂ compensation point, net photosynthesis cannot rise above zero in daylight; and since there is always respiration at night, this leads to progressively negative daily CO₂ fixation, with no possibility of oxygenic life on the planet. The Rubisco-limited theory at low CO₂ also contradicts all experimental evidence for low-substrate reactions, for all known enzymes, Rubisco included.
Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin
Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.
2006-01-01
The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph are simulated, consistently with measured values.
Accuracy of professional sports drafts in predicting career potential.
Koz, D; Fraser-Thomas, J; Baker, J
2012-08-01
The forecasting of talented players is a crucial aspect of building a successful sports franchise, and professional sports invest significant resources in making player choices in sport drafts. The current study examined the relationship between career performance (i.e., games played) and draft round for the National Football League, National Hockey League, National Basketball Association, and Major League Baseball for players drafted from 1980 to 1989 (n = 4874), against the assumption of a linear relationship between performance and draft round (i.e., that players with the most potential will be selected before players of lower potential). A two-step analysis revealed significant differences in games played across draft rounds (step 1) and a significant negative relationship between draft round and games played (step 2); however, the amount of variance accounted for was relatively low (less than 17%). Results highlight the challenges of accurately evaluating amateur talent.
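The two-step analysis can be paraphrased in a few lines of code: an ANOVA across draft rounds followed by a linear regression of games played on round. The data below are synthetic, constructed only so that the regression explains little variance, echoing the reported R² below 0.17.

```python
import numpy as np
from scipy.stats import f_oneway, linregress

rng = np.random.default_rng(4)

# Synthetic careers: weak negative trend across rounds plus large scatter.
rounds = np.repeat(np.arange(1, 11), 100)
games = np.maximum(0, 400 - 25 * rounds + rng.normal(0, 150, rounds.size))

groups = [games[rounds == r] for r in range(1, 11)]
F, p_anova = f_oneway(*groups)                  # step 1: differences across rounds
res = linregress(rounds, games)                 # step 2: round vs. games played
print(f"ANOVA p={p_anova:.3g}; slope={res.slope:.1f}, R^2={res.rvalue**2:.2f}")
```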
NASA Astrophysics Data System (ADS)
Ressel, Simon; Bill, Florian; Holtz, Lucas; Janshen, Niklas; Chica, Antonio; Flower, Thomas; Weidlich, Claudia; Struckmann, Thorsten
2018-02-01
The operation of vanadium redox flow batteries requires reliable in situ state of charge (SOC) monitoring. In this study, two SOC estimation approaches for the negative half cell are investigated. First, in situ open circuit potential measurements are combined with Coulomb counting in a one-step calibration of SOC and Nernst potential that does not need additional reference SOCs. In-sample and out-of-sample SOCs are estimated and analyzed; estimation errors ≤ 0.04 are obtained. In the second approach, temperature-corrected in situ electrolyte density measurements are used for the first time in vanadium redox flow batteries for SOC estimation. In-sample and out-of-sample SOC estimation errors ≤ 0.04 demonstrate the feasibility of this approach. Both methods allow recalibration during battery operation. The actual capacity obtained from SOC calibration can be used in a state of health model.
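As background to the first approach, a minimal sketch of how a Nernst-type open-circuit potential maps to SOC for the V(III)/V(II) negative half cell. The formal potential E0 below is a textbook-style placeholder; the paper calibrates it in situ together with Coulomb counting rather than assuming a value:

```python
import numpy as np

R, T, F = 8.314, 298.15, 96485.0

def soc_from_ocv(E, E0=-0.255):
    # Invert E = E0 + (RT/F) ln([V3+]/[V2+]) = E0 + (RT/F) ln((1 - SOC)/SOC),
    # with SOC defined as the V(II) fraction of total vanadium.
    x = (E - E0) * F / (R * T)
    return 1.0 / (1.0 + np.exp(x))

print(soc_from_ocv(np.array([-0.30, -0.255, -0.20])))  # ~ [0.85, 0.50, 0.11]
```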
NASA Astrophysics Data System (ADS)
Szalaiová, Eva; Rabbel, Wolfgang; Marquart, Gabriele; Vogt, Christian
2015-11-01
The area of the 9.1-km-deep Continental Deep Drillhole (KTB) in Germany is used as a case study for a geothermal reservoir situated in folded and faulted metamorphic crystalline crust. The presented approach is based on the analysis of 3-D seismic reflection data combined with borehole data and hydrothermal numerical modelling. The KTB location exemplifies the features that often make seismic prospecting in crystalline environments more difficult than in sedimentary units: complicated tectonics, fracturing, and low-coherence strata. In a first step, major rock units, including two known nearly parallel fault zones, are identified down to a depth of 12 km. These units form the basis of a gridded 3-D numerical model for investigating temperature and fluid flow. Conductive and advective heat transport takes place mainly in a metamorphic block composed of gneisses and metabasites that show considerable differences in thermal conductivity and heat production. Therefore, in a second step, the structure of this unit is investigated by seismic waveform modelling. The third step of interpretation consists of applying wavenumber filtering and log-Gabor filtering for locating fractures. Since fracture networks are the major fluid pathways in crystalline rock, we associate the fracture density distribution with distributions of relative porosity and permeability that can be calibrated by logging data and forward modelling of the temperature field. The resulting permeability distribution shows values between 10⁻¹⁶ and 10⁻¹⁹ m² and does not correlate with particular rock units. Once thermohydraulic rock properties are attributed to the numerical model, the differential equations for heat and fluid transport in porous media are solved numerically based on a finite difference approach. The hydraulic potential caused by topography and a heat flux of 54 mW m⁻² were applied as boundary conditions at the top and bottom of the model. Fluid flow is generally slow and occurs mainly within the two fault zones. Thus, our model confirms the previous finding that diffusive heat transport is the dominant process at the KTB site. Fitting the observed temperature-depth profile requires a correction for palaeoclimate of about 4 K at 1 km depth. Modelled and observed temperature data fit well within 0.2 °C bounds. Whereas thermal conditions are suitable for geothermal energy production, hydraulic conditions are unfavourable without engineered stimulation.
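A much-reduced sanity-check analogue of the conductive part of such a model (not the authors' 3-D code): a steady 1-D finite-difference geotherm with uniform conductivity k, heat production A, fixed surface temperature, and the paper's 54 mW m⁻² basal heat flux, compared against the analytic profile. All property values are illustrative assumptions.

```python
import numpy as np

n, L = 101, 10e3                        # 101 nodes over 10 km depth
z = np.linspace(0, L, n); dz = z[1] - z[0]
k, A, T0, qb = 3.0, 1e-6, 10.0, 54e-3   # W/m/K, W/m^3, degC, W/m^2

# Discretize k T'' = -A with T(0) = T0 and k T'(L) = q_b (basal flux).
M = np.zeros((n, n)); b = np.full(n, -A * dz**2 / k)
M[0, 0], b[0] = 1.0, T0
for i in range(1, n - 1):
    M[i, i - 1], M[i, i], M[i, i + 1] = 1.0, -2.0, 1.0
M[-1, -2], M[-1, -1], b[-1] = -1.0, 1.0, qb * dz / k

T = np.linalg.solve(M, b)
T_exact = T0 + (qb + A * L) * z / k - A * z**2 / (2 * k)
print(abs(T - T_exact).max())           # ~0.17 K: first-order flux BC error
print(T[-1])                            # ~207 degC at 10 km with these values
```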
Multiparous Ewe as a Model for Teaching Vaginal Hysterectomy Techniques.
Kerbage, Yohan; Cosson, Michel; Hubert, Thomas; Giraudet, Géraldine
2017-12-01
Despite being linked to improving patient outcomes and limiting costs, the use of vaginal hysterectomy is on the wane. Although a combination of reasons might explain this trend, one cause is a lack of practical training. An appropriate teaching model must therefore be devised. Currently, only low-fidelity simulators exist. Ewes provide an appropriate model for pelvic anatomy and are well-suited for testing vaginal mesh properties. This article sets out a vaginal hysterectomy procedure for use as an education and training model. A multiparous ewe was the model. Surgery was performed under general anesthesia. The ewe was in a lithotomy position resembling that assumed by women on the operating table. Two vaginal hysterectomies were performed on two ewes, following every step precisely as if the model were human. The surgical steps of vaginal hysterectomy performed on the ewe and on a woman were compared side by side and found to be closely similar. The main limitations of this model are costs ($500/procedure), logistic problems (housing large animals), and public opposition to animal training models. The ewe appears to be an appropriate model for teaching and training of vaginal hysterectomy.
Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V
2016-05-01
This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values.
Fabrication of porous anodic alumina using normal anodization and pulse anodization
NASA Astrophysics Data System (ADS)
Chin, I. K.; Yam, F. K.; Hassan, Z.
2015-05-01
This article reports on the fabrication of porous anodic alumina (PAA) by two-step anodization of low-purity commercial aluminum sheets at room temperature. Different variations of the second anodization step were conducted: normal anodization (NA) with a direct-current potential difference; pulse anodization (PA) alternating between potential differences of 10 V and 0 V; and hybrid pulse anodization (HPA) alternating between potential differences of 10 V and -2 V. The method influenced the film homogeneity of the PAA, and the most homogeneous structure was obtained via PA. The morphological properties are further elucidated using measured current-transient profiles. The absence of a current-rise profile in PA indicates that the anodization temperature and the dissolution of the PAA structure were greatly reduced by alternating the potential difference.
NASA Astrophysics Data System (ADS)
Kammoun, S.; Brassart, L.; Robert, G.; Doghri, I.; Delannay, L.
2011-05-01
A micromechanical damage modeling approach is presented to predict the overall elasto-plastic behavior and damage evolution in short-fiber-reinforced composite materials. The practical use of the approach is for injection-molded thermoplastic parts reinforced with short glass fibers. The modeling proceeds as follows. The representative volume element is decomposed into a set of pseudograins, the damage of which progressively affects the overall stiffness and strength up to total failure. Each pseudograin is a two-phase composite with aligned inclusions of the same aspect ratio. A two-step mean-field homogenization procedure is adopted. In the first step, the pseudograins are homogenized individually according to the Mori-Tanaka scheme. The second step consists in a self-consistent homogenization of the homogenized pseudograins. An isotropic damage model is applied at the pseudograin level. The model is implemented as a UMAT in the finite element code ABAQUS and is shown to reproduce the strength and the anisotropy (Lankford coefficient) during uniaxial tensile tests on samples cut along different directions relative to the injection flow direction.
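A hedged illustration of the first homogenization step only, in the simplest setting: Mori-Tanaka effective moduli of a two-phase composite with spherical inclusions (the paper's aligned short fibers require the full Eshelby tensor). The function name and material values are illustrative assumptions:

```python
def mori_tanaka_sphere(Km, Gm, Ki, Gi, ci):
    """Benveniste/Mori-Tanaka effective bulk and shear moduli for spherical
    inclusions (i) at volume fraction ci in a matrix (m)."""
    cm = 1.0 - ci
    K = Km + ci * (Ki - Km) / (1.0 + cm * (Ki - Km) / (Km + 4.0 * Gm / 3.0))
    fm = Gm * (9.0 * Km + 8.0 * Gm) / (6.0 * (Km + 2.0 * Gm))  # sphere factor
    G = Gm + ci * (Gi - Gm) / (1.0 + cm * (Gi - Gm) / (Gm + fm))
    return K, G

# Polypropylene-like matrix with glass-like inclusions at 20% volume fraction.
print(mori_tanaka_sphere(Km=3.0, Gm=0.6, Ki=43.0, Gi=30.0, ci=0.2))  # GPa
```

In the paper's second step, the individually homogenized pseudograins would then be fed into a self-consistent scheme rather than used directly.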
Physiological Response of Plants Grown on Porous Ceramic Tubes
NASA Technical Reports Server (NTRS)
Tsao, David; Okos, Martin
1997-01-01
This research involves the manipulation of the root-zone water potential for the purpose of discriminating the rate-limiting step in the inorganic nutrient uptake mechanism utilized by higher plants. This reaction sequence includes the pathways controlled by the root-zone conditions such as water tension and gradient concentrations. Furthermore, plant-based control mechanisms dictated by various protein productions are differentiated as well. For the nutrients limited by environmental availability, the kinetics were modeled using convection and diffusion equations. Alternatively, for the nutrients dependent upon enzyme manipulations, the uptakes are modeled using Michaelis-Menten kinetics. In order to differentiate between these various mechanistic steps, an experimental apparatus known as the Porous Ceramic Tube - Nutrient Delivery System (PCT-NDS) was used. Manipulation of the applied suction pressure circulating a nutrient solution through this system imposes a change in the matric component of the water potential. This compensates for the different osmotic components of water potential dictated by nutrient concentration. By maintaining this control over the root-zone conditions, the rate-limiting steps in the uptake of the essential nutrients into tomato plants (Lycopersicon esculentum cv. Cherry Elite) were differentiated. Results showed that the uptake of some nutrients was mass-transfer limited while others were limited by enzyme kinetics. Each of these was adequately modeled, with calculations and discussion of the parameter estimates provided.
Force transients and minimum cross-bridge models in muscular contraction
Kawai, Masataka; Halvorson, Herbert R.
2010-01-01
Two- and three-state cross-bridge models are considered and examined with respect to their ability to predict three distinct phases of the force transients that occur in response to a step change in muscle fiber length. Particular attention is paid to satisfying the Le Châtelier–Brown Principle. This analysis shows that the two-state model can account for phases 1 and 2 of a force transient, but is barely adequate to account for phase 3 (delayed force) unless a stretch results in a sudden increase in the number of cross-bridges in the detached state. The three-state model (A → B → C → A) makes it possible to account for all three phases if we assume that the A → B transition is fast (corresponding to phase 2), the B → C transition is of intermediate speed (corresponding to phase 3), and the C → A transition is slow; in such a scenario, states A and C can support or generate force (high force states) but state B cannot (detached, or low-force state). This model involves at least one ratchet mechanism. In this model, force can be generated by either of two transitions: B → A or B → C. To determine which of these is the major force-generating step that consumes ATP and transduces energy, we examine the effects of ATP, ADP, and phosphate (Pi) on force transients. In doing so, we demonstrate that the fast transition (phase 2) is associated with the nucleotide-binding step, and that the intermediate-speed transition (phase 3) is associated with the Pi-release step. To account for all the effects of ligands, it is necessary to expand the three-state model into a six-state model that includes three ligand-bound states. The slowest phase of a force transient (phase 4) cannot be explained by any of the models described unless an additional mechanism is introduced. Here we suggest a role of series compliance to account for this phase, and propose a model that correlates the slowest step of the cross-bridge cycle (transition C → A) to: phase 4 of step analysis, the rate constant ktr of the quick-release and restretch experiment, and the rate constant kact for the force development time course following Ca²⁺ activation. PMID:18425593
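A hedged numerical illustration of why a three-state cycle yields distinct exponential phases: the nonzero eigenvalues of the rate matrix set the relaxation speeds. The rate constants below are arbitrary, chosen only to be well separated (fast, intermediate, slow), not fitted values from the paper:

```python
import numpy as np

kab, kbc, kca = 100.0, 10.0, 1.0            # A->B fast, B->C intermediate, C->A slow
Q = np.array([[-kab,  0.0,  kca],
              [ kab, -kbc,  0.0],
              [ 0.0,  kbc, -kca]])           # d p/dt = Q p; columns sum to zero
evals = np.linalg.eigvals(Q)
print(np.sort(evals.real))                   # ~[-99.9, -11.1, 0]: two relaxation
                                             # rates plus the steady state
```

With well-separated rate constants, the two negative eigenvalues track the fast (phase 2-like) and intermediate (phase 3-like) transitions, matching the qualitative argument in the abstract.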
Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel
2017-01-01
Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of a few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method, where lengths between pairs of targets measured from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face/back-face target measurements. The clear advantage of these self-calibration methods is that neither a reference instrument nor calibrated artifacts are required, thus significantly lowering the cost involved in the calibration process. PMID:28890607
Debono, Deborah; Taylor, Natalie; Lipworth, Wendy; Greenfield, David; Travaglia, Joanne; Black, Deborah; Braithwaite, Jeffrey
2017-03-27
Medication errors harm hospitalised patients and increase health care costs. Electronic Medication Management Systems (EMMS) have been shown to reduce medication errors. However, nurses do not always use EMMS as intended, largely because implementation of such patient safety strategies requires clinicians to change their existing practices, routines and behaviour. This study uses the Theoretical Domains Framework (TDF) to identify barriers and targeted interventions to enhance nurses' appropriate use of EMMS in two Australian hospitals. This qualitative study draws on in-depth interviews with 19 acute care nurses who used EMMS. A convenience sampling approach was used. Nurses working on the study units (N = 6) in two hospitals were invited to participate if available during the data collection period. Interviews inductively explored nurses' experiences of using EMMS (step 1). Data were analysed using the TDF to identify theory-derived barriers to nurses' appropriate use of EMMS (step 2). Relevant behaviour change techniques (BCTs) were identified to overcome key barriers to using EMMS (step 3), followed by the identification of potential literature-informed targeted intervention strategies to operationalise the identified BCTs (step 4). Barriers to nurses' use of EMMS in acute care were represented by nine domains of the TDF. Two closely linked domains emerged as major barriers to EMMS use: Environmental Context and Resources (availability and properties of computers on wheels (COWs); technology characteristics; specific contexts; competing demands and time pressure) and Social/Professional Role and Identity (conflict between using EMMS appropriately and executing behaviours critical to nurses' professional role and identity). The study identified three potential BCTs to address the Environmental Context and Resources domain barrier: adding objects to the environment; restructuring the physical environment; and prompts and cues. Seven BCTs to address Social/Professional Role and Identity were identified: social process of encouragement, pressure or support; information about others' approval; incompatible beliefs; identification of self as role model; framing/reframing; social comparison; and demonstration of behaviour. The study proposes several targeted interventions to deliver these BCTs. The TDF provides a useful approach to identify barriers to nurses' prescribed use of EMMS, and can inform the design of targeted theory-based interventions to improve EMMS implementation.
A model for predicting Xanthomonas arboricola pv. pruni growth as a function of temperature
Llorente, Isidre; Montesinos, Emilio; Moragrega, Concepció
2017-01-01
A two-step modeling approach was used for predicting the effect of temperature on the growth of Xanthomonas arboricola pv. pruni, causal agent of bacterial spot disease of stone fruit. The in vitro growth of seven strains was monitored at temperatures from 5 to 35°C with a Bioscreen C system, and a calibrating equation was generated for converting optical densities to viable counts. In primary modeling, the Baranyi, Buchanan, and modified Gompertz equations were fitted to viable count growth curves over the entire temperature range. The modified Gompertz model showed the best fit to the data, and it was selected to estimate the bacterial growth parameters at each temperature. Secondary modeling of maximum specific growth rate as a function of temperature was performed by using the Ratkowsky model and its variations. The modified Ratkowsky model showed the best goodness of fit to maximum specific growth rate estimates, and it was validated successfully for the seven strains at four additional temperatures. The model generated in this work will be used for predicting temperature-based Xanthomonas arboricola pv. pruni growth rate and derived potential daily doublings, and will be included as the inoculum potential component of a bacterial spot of stone fruit disease forecaster. PMID:28493954
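A hedged sketch of the two-step chain with standard textbook forms: Zwietering's modified Gompertz primary model for log-counts and a full Ratkowsky-type secondary model for the temperature dependence of the maximum specific growth rate. All parameter values are illustrative, not the paper's fitted estimates:

```python
import numpy as np

def gompertz_mod(t, A, mu_max, lam):
    """Zwietering's modified Gompertz: y(t) = ln(N/N0)."""
    return A * np.exp(-np.exp(mu_max * np.e * (lam - t) / A + 1.0))

def ratkowsky_full(T, b, Tmin, c, Tmax):
    """Full Ratkowsky model: sqrt(mu_max) = b (T-Tmin) [1 - exp(c (T-Tmax))]."""
    root = b * (T - Tmin) * (1.0 - np.exp(c * (T - Tmax)))
    return np.clip(root, 0.0, None) ** 2      # zero outside the biokinetic range

t = np.linspace(0, 48, 5)
print(gompertz_mod(t, A=8.0, mu_max=0.4, lam=6.0))
print(ratkowsky_full(np.array([5.0, 20.0, 28.0, 35.0]),
                     b=0.04, Tmin=3.0, c=0.3, Tmax=36.0))
```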
Yang, James J; Williams, L Keoki; Buu, Anne
2017-08-24
A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
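A hedged sketch of the second step's combination idea: Fisher's statistic over per-phenotype p-values, with a Brown-style scaled chi-square null as a simple stand-in for correlated phenotypes (the paper instead estimates the null from linear-mixed-model residual correlations):

```python
import numpy as np
from scipy import stats

def fisher_combined(pvals):
    return -2.0 * np.log(pvals).sum()

def brown_pvalue(X, k, cov_sum):
    """Scaled chi-square null for X = -2*sum(log p_i) over k tests;
    cov_sum is the summed pairwise covariance of the -2*log p_i terms
    (cov_sum = 0 recovers plain Fisher with 2k degrees of freedom)."""
    mean = 2.0 * k
    var = 4.0 * k + 2.0 * cov_sum
    c, df = var / (2.0 * mean), 2.0 * mean**2 / var
    return stats.chi2.sf(X / c, df)

pvals = np.array([0.01, 0.04, 0.20, 0.03])    # one p-value per phenotype
X = fisher_combined(pvals)
print(X, brown_pvalue(X, k=4, cov_sum=0.0))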
Luo, Jing; Tian, Lingling; Luo, Lei; Yi, Hong; Wang, Fahui
2017-01-01
A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity of spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time from the nearest hospital foremost and then considered quality of care including less waiting time as a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed "two-step optimization for spatial accessibility improvement (2SO4SAI)." The first step is to find the best locations to site new facilities by emphasizing accessibility as proximity to the nearest facilities with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China.
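For context on the accessibility measure the second step equalizes, a minimal two-step floating catchment area (2SFCA) computation with a Gaussian distance decay; the demand, supply, and travel-time values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
d = rng.uniform(0, 30, size=(5, 3))       # travel time: 5 demand sites x 3 facilities
demand = np.array([500.0, 800.0, 300.0, 650.0, 420.0])
supply = np.array([20.0, 35.0, 15.0])     # e.g. hospital beds

W = np.exp(-(d / 15.0) ** 2)              # distance-decay weights
# Step 1: supply-to-demand ratio within each facility's catchment.
R = supply / (W * demand[:, None]).sum(axis=0)
# Step 2: sum the reachable facility ratios at each demand site.
A = (W * R).sum(axis=1)
print(A)                                  # accessibility score per demand site
```

Adjusting the `supply` vector to reduce the spread of `A` is, in spirit, what the paper's capacity-adjustment step does subject to its optimization constraints.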
INTEGRATING REPRESENTATION AND VULNERABILITY: TWO APPROACHES FOR PRIORITIZING AREAS FOR CONSERVATION
One fundamental step in conservation planning involves determining where to concentrate efforts to protect conservation targets. Here we demonstrate two approaches to prioritizing areas based on both species composition and potential threats facing the species. The first approa...
Refined BCF-type boundary conditions for mesoscale surface step dynamics
Zhao, Renjie; Ackerman, David M.; Evans, James W.
2015-06-24
Deposition on a vicinal surface with alternating rough and smooth steps is described by a solid-on-solid model with anisotropic interactions. Kinetic Monte Carlo (KMC) simulations of the model reveal step pairing in the absence of any additional step attachment barriers. We explore the description of this behavior within an analytic Burton-Cabrera-Frank (BCF)-type step dynamics treatment. Without attachment barriers, conventional kinetic coefficients for the rough and smooth steps are identical, as are the predicted step velocities for a vicinal surface with equal terrace widths. However, we determine refined kinetic coefficients from a two-dimensional discrete deposition-diffusion equation formalism which accounts for step structure. These coefficients are generally higher for rough steps than for smooth steps, reflecting a higher propensity for capture of diffusing terrace adatoms due to a higher kink density. Such refined coefficients also depend on the local environment of the step and can even become negative (corresponding to net detachment despite an excess adatom density) for a smooth step in close proximity to a rough step. Incorporation of these refined kinetic coefficients into a BCF-type step dynamics treatment recovers quantitatively the mesoscale step-pairing behavior observed in the KMC simulations.
A Novel Protocol for Model Calibration in Biological Wastewater Treatment
Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen
2015-01-01
Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application by using conventional approaches. Here, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global sensitivity analysis for factor fixing; ii) pseudo-global parameter correlation analysis to detect non-identifiable factors; and iii) formation of a parameter subset estimated using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be useful for automatic calibration of ASMs and potentially applicable to other ordinary-differential-equation models. PMID:25682959
The SMM Model as a Boundary Value Problem Using the Discrete Diffusion Equation
NASA Technical Reports Server (NTRS)
Campbell, Joel
2007-01-01
A generalized single-step stepwise mutation model (SMM) is developed that takes into account an arbitrary initial condition for a certain partial difference equation. This is solved in both the approximate continuum limit and the more exact discrete form. A time evolution model is developed for Y DNA or mtDNA that takes into account the reflective boundary modeling minimum microsatellite length and the original difference equation. A comparison is made between the more widely known continuum Gaussian model and a discrete model, which is based on modified Bessel functions of the first kind. A correction is made to the SMM model for the probability that two individuals are related that takes into account a reflecting boundary modeling minimum microsatellite length. This method is generalized to take into account the general n-step model and exact solutions are found. A new model is proposed for the step distribution.
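A hedged comparison of the two solution forms mentioned here, without any boundary correction: for a continuous-time symmetric single-step mutation process with unit total step rate, the displacement distribution is P_n(t) = e^(-t) I_n(t) (modified Bessel of the first kind), which the Gaussian continuum limit approximates at large t:

```python
import numpy as np
from scipy.special import ive          # ive(n, t) = exp(-t) * I_n(t), stable form
from scipy.stats import norm

t = 20.0                               # elapsed time in units of the step rate
n = np.arange(-8, 9)                   # net repeat-length displacement
p_discrete = ive(n, t)                 # exact discrete (Bessel) distribution
p_gauss = norm.pdf(n, loc=0.0, scale=np.sqrt(t))
print(np.round(p_discrete, 4))
print(np.round(p_gauss, 4))            # close to the Bessel values for t >> 1
```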
Revisiting the Rossby-Haurwitz wave test case with contour advection
NASA Astrophysics Data System (ADS)
Smith, Robert K.; Dritschel, David G.
2006-09-01
This paper re-examines a basic test case used for spherical shallow-water numerical models, and underscores the need for accurate, high-resolution models of atmospheric and ocean dynamics. The Rossby-Haurwitz test case, first proposed by Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow-water equations on the sphere, J. Comput. Phys. 102 (1992) 211-224], has been examined using a wide variety of shallow-water models in previous papers. Here, two contour-advective semi-Lagrangian (CASL) models are considered, and results are compared with previous test results. We go further by modifying this test case in a simple way to initiate a rapid breakdown of the basic wave state. This breakdown is accompanied by the formation of sharp potential vorticity gradients (fronts), placing far greater demands on the numerics than the original test case does. We also examine other dynamical fields besides the height and potential vorticity, to assess how well the models deal with gravity waves. Such waves are sensitive to the presence or absence of sharp potential vorticity gradients, as well as to numerical parameter settings. In particular, large time steps (convenient for semi-Lagrangian schemes) can seriously affect gravity waves but can also have an adverse impact on the primary fields of height and velocity. These problems are exacerbated by a poor resolution of potential vorticity gradients.
Updated Panel-Method Computer Program
NASA Technical Reports Server (NTRS)
Ashby, Dale L.
1995-01-01
Panel code PMARC_12 (Panel Method Ames Research Center, version 12) computes potential-flow fields around complex three-dimensional bodies such as complete aircraft models. It contains several advanced features, including internal mathematical modeling of flow, a time-stepping wake model for simulating either steady or unsteady motions, capability for Trefftz-plane computation of induced drag, capability for computation of off-body and on-body streamlines, and capability for computation of boundary-layer parameters by use of a two-dimensional integral boundary-layer method along surface streamlines. Investigators interested in visual representations of phenomena may want to consider obtaining program GVS (ARC-13361), General Visualization System. GVS is a Silicon Graphics IRIS program created to support the scientific-visualization needs of PMARC_12. GVS is available separately from COSMIC. PMARC_12 is written in standard FORTRAN 77, with the exception of the NAMELIST extension used for input.
Multigrid solution of compressible turbulent flow on unstructured meshes using a two-equation model
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Martinelli, L.
1991-01-01
The system of equations consisting of the full Navier-Stokes equations and two turbulence equations was solved for the steady state using a multigrid strategy on unstructured meshes. The flow equations and turbulence equations are solved in a loosely coupled manner. The flow equations are advanced in time using a multistage Runge-Kutta time-stepping scheme with a stability-bounded local time step, while the turbulence equations are advanced in a point-implicit scheme with a time step which guarantees stability and positivity. Low-Reynolds-number modifications to the original two-equation model are incorporated in a manner which results in well-behaved equations for arbitrarily small wall distances. A variety of aerodynamic flows are solved for, initializing all quantities with uniform freestream values and resulting in rapid and uniform convergence rates for the flow and turbulence equations.
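A hedged scalar illustration of multistage Runge-Kutta pseudo-time stepping toward a steady state, u' = -R(u); the five stage coefficients are a common Jameson-style choice, and the "residual" is a toy stand-in for a discretized flow residual:

```python
import numpy as np

alphas = [0.25, 1.0 / 6.0, 0.375, 0.5, 1.0]   # five-stage smoothing coefficients

def residual(u):
    return u - 1.0                             # toy residual; steady state at u = 1

u, dt = 10.0, 0.8                              # local time step within stability bound
for _ in range(50):
    u0 = u
    for a in alphas:                           # each stage restarts from u0
        u = u0 - a * dt * residual(u)
print(u)                                       # converges to the steady state 1.0
```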
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Zaitsev, Alexandr V.; Voloshin, Victor M.
2001-03-01
The historical background and foundations of an algebra-logical apparatus, 'equivalental algebra', for describing neural-network paradigms and algorithms are considered; the apparatus unifies neural network (NN) theory, linear algebra, and generalized neurobiology, extended to the matrix case. A survey is given of 'equivalental models' of neural networks and associative memory, and new, modified matrix-tensor neurological equivalental models (MTNLEMs) are offered with double adaptive-equivalental weighting (DAEW) for spatially non-invariant recognition (SNIR) and space-invariant recognition (SIR) of 2D images (patterns). It is shown that MTNLEMs with DAEW are the most general: they can describe the processes in NNs both within the frames of known paradigms and within a new 'equivalental' paradigm of the non-interaction type, and the computing process in NNs under the offered MTNLEMs reduces to two-step and multi-step algorithms with step-by-step matrix-tensor procedures (for SNIR) and to procedures for determining space-dependent equivalental functions from two images (for SIR).
Fine structure of the entanglement entropy in the O(2) model.
Yang, Li-Ping; Liu, Yuzhi; Zou, Haiyuan; Xie, Z Y; Meurice, Y
2016-01-01
We compare two calculations of the particle density in the superfluid phase of the O(2) model with a chemical potential μ in 1+1 dimensions. The first relies on exact blocking formulas from the Tensor Renormalization Group (TRG) formulation of the transfer matrix. The second is a worm algorithm. We show that the particle number distributions obtained with the two methods agree well. We use the TRG method to calculate the thermal entropy and the entanglement entropy. We describe the particle density, the two entropies and the topology of the world lines as we increase μ to go across the superfluid phase between the first two Mott insulating phases. For a sufficiently large temporal size, this process reveals an interesting fine structure: the average particle number and the winding number of most of the world lines in the Euclidean time direction increase by one unit at a time. At each step, the thermal entropy develops a peak and the entanglement entropy increases until we reach half-filling and then decreases in a way that approximately mirrors the ascent. This suggests an approximate fermionic picture.
Asadi, Sakine; Nojavan, Saeed
2016-06-07
In the present work, acidic and basic drugs were simultaneously extracted by a novel, highly efficient method herein referred to as two-step voltage dual electromembrane extraction (TSV-DEME). After optimization of effective parameters such as the composition of the organic liquid membrane, the pH values of the donor and acceptor solutions, and the voltage and duration of each step, the figures of merit of the method were investigated in pure water, human plasma, wastewater, and breast milk samples. Simultaneous extraction of acidic and basic drugs was done by applying potentials of 150 V and 400 V for 6 min and 19 min as the first and second steps, respectively. The model compounds were extracted from 4 mL of sample solution (pH = 6) into 20 μL of each acceptor solution (32 mM NaOH for acidic drugs and 32 mM HCl for basic drugs). 1-Octanol was immobilized within the pores of a porous polypropylene hollow fiber as the supported liquid membrane (SLM) for acidic drugs, and 2-ethylhexanol as the SLM for basic drugs. The proposed TSV-DEME technique provided good linearity, with correlation coefficients ranging from 0.993 to 0.998 over a concentration range of 1-1000 ng mL⁻¹. The limits of detection of the drugs ranged from 0.3 to 1.5 ng mL⁻¹, while the corresponding repeatability ranged from 7.7 to 15.5% (n = 4). The proposed method was further compared to simple dual electromembrane extraction (DEME), indicating significantly higher recoveries for the TSV-DEME procedure (38.1-68%) compared to those of the simple DEME procedure (17.7-46%). Finally, the optimized TSV-DEME was applied to extract and quantify the model compounds in breast milk, wastewater, and plasma samples. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riyadi, Eko H., E-mail: e.riyadi@bapeten.go.id
2014-09-30
An initiating event is defined as any event, either internal or external to the nuclear power plant (NPP), that perturbs the steady-state operation of the plant, if operating, thereby initiating an abnormal event such as a transient or a loss-of-coolant accident (LOCA) within the NPP. These initiating events trigger sequences of events that challenge plant control and safety systems, whose failure could potentially lead to core damage or a large early release. Selection of initiating events consists of two steps: first, definition of possible events, e.g., by a comprehensive engineering evaluation and by constructing a top-level logic model; second, grouping of the identified initiating events by the safety function to be performed or by combinations of system responses. The purpose of this paper is therefore to discuss initiating-event identification in the event-tree development process and to review other probabilistic safety assessments (PSAs). The identification of initiating events also draws on past operating experience, review of other PSAs, failure mode and effects analysis (FMEA), feedback from system modeling, and master logic diagrams (a special type of fault tree). By applying the traditional US PSA categorization in detail, the important initiating events can be obtained and categorized into LOCAs, transients, and external events.
Johnson, Victoria A; Ronan, Kevin R; Johnston, David M; Peace, Robin
2016-11-01
A main weakness in the evaluation of disaster education programs for children is evaluators' propensity to judge program effectiveness based on changes in children's knowledge. Few studies have articulated an explicit program theory of how children's education would achieve desired outcomes and impacts related to disaster risk reduction in households and communities. This article describes the advantages of constructing program theory models for the purpose of evaluating disaster education programs for children. Following a review of some potential frameworks for program theory development, including the logic model, the program theory matrix, and the stage step model, the article provides working examples of these frameworks. The first example is the development of a program theory matrix used in an evaluation of ShakeOut, an earthquake drill practiced in two Washington State school districts. The model illustrates a theory of action; specifically, the effectiveness of school earthquake drills in preventing injuries and deaths during disasters. The second example is the development of a stage step model used for a process evaluation of What's the Plan Stan?, a voluntary teaching resource distributed to all New Zealand primary schools for curricular integration of disaster education. The model illustrates a theory of use; specifically, expanding the reach of disaster education for children through increased promotion of the resource. The process of developing the program theory models for the purpose of evaluation planning is discussed, as well as the advantages and shortcomings of the theory-based approaches. © 2015 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Bellentani, Laura; Beggi, Andrea; Bordone, Paolo; Bertoni, Andrea
2018-05-01
We present a numerical study of a multichannel electronic Mach-Zehnder interferometer, based on magnetically driven noninteracting edge states. The electron path is defined by a full-scale potential landscape on the two-dimensional electron gas at filling factor 2, assuming initially that only the first Landau level is filled. We tailor the two beamsplitters with 50% interchannel mixing and measure Aharonov-Bohm oscillations in the transmission probability of the second channel. We perform time-dependent simulations by solving the electron Schrödinger equation through a parallel implementation of the split-step Fourier method, and we describe the charge-carrier wave function as a Gaussian wave packet of edge states. We finally develop a simplified theoretical model to explain the features observed in the transmission probability, and we propose possible strategies to optimize gate performance.
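A much-reduced 1-D analogue of the propagation scheme (the paper solves the full 2-D problem with a magnetic field): split-step Fourier integration of the Schrödinger equation for a Gaussian wave packet in an illustrative potential, using Strang splitting with half-steps in the potential:

```python
import numpy as np

hbar, m = 1.0, 1.0
n, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V = 0.5 * 0.01 * x**2                                # illustrative confinement

psi = np.exp(-(x + 30.0) ** 2 / 20.0 + 1j * x)       # Gaussian packet, k0 = 1
psi /= np.sqrt((np.abs(psi) ** 2).sum() * dx)

dt = 0.05
expV = np.exp(-1j * V * dt / (2.0 * hbar))           # half-step in the potential
expK = np.exp(-1j * hbar * k**2 * dt / (2.0 * m))    # full kinetic step in k-space
for _ in range(400):                                 # Strang-split time evolution
    psi = expV * np.fft.ifft(expK * np.fft.fft(expV * psi))
print((np.abs(psi) ** 2).sum() * dx)                 # norm stays ~1 (unitary)
```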
A novel framework for feature extraction in multi-sensor action potential sorting.
Wu, Shun-Chi; Swindlehurst, A Lee; Nenadic, Zoran
2015-09-30
Extracellular recordings of multi-unit neural activity have become indispensable in neuroscience research. The analysis of the recordings begins with the detection of the action potentials (APs), followed by a classification step where each AP is associated with a given neural source. A feature extraction step is required prior to classification in order to reduce the dimensionality of the data and the impact of noise, allowing source clustering algorithms to work more efficiently. In this paper, we propose a novel framework for multi-sensor AP feature extraction based on the so-called Matched Subspace Detector (MSD), which is shown to be a natural generalization of standard single-sensor algorithms. Clustering using both simulated data and real AP recordings taken in the locust antennal lobe demonstrates that the proposed approach yields features that are discriminatory and lead to promising results. Unlike existing methods, the proposed algorithm finds joint spatio-temporal feature vectors that match the dominant subspace observed in the two-dimensional data without the need for a forward propagation model or AP templates. The proposed MSD approach provides more discriminatory features for unsupervised AP sorting applications. Copyright © 2015 Elsevier B.V. All rights reserved.
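A hedged, deliberately simplified analogue of the dominant-subspace idea (an SVD projection rather than the full matched subspace detector): flatten each multi-channel snippet into a joint spatio-temporal vector and project onto the top singular directions of the data. All dimensions and signals are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n_spikes, n_ch, n_t = 200, 4, 32
X = rng.normal(size=(n_spikes, n_ch * n_t))            # noise snippets, ch x time
X += np.outer(rng.choice([1.0, 2.0], n_spikes),        # two "units" sharing a
              rng.normal(size=n_ch * n_t))             # spatio-temporal waveform

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:3].T                               # top-3 subspace coordinates
print(features.shape)                                  # (200, 3), ready to cluster
```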
Cascading Failures as Continuous Phase-Space Transitions
Yang, Yang; Motter, Adilson E.
2017-12-14
In network systems, a local perturbation can amplify as it propagates, potentially leading to a large-scale cascading failure. We derive a continuous model to advance our understanding of cascading failures in power-grid networks. The model accounts for both the failure of transmission lines and the desynchronization of power generators and incorporates the transient dynamics between successive steps of the cascade. In this framework, we show that a cascade event is a phase-space transition from an equilibrium state with high energy to an equilibrium state with lower energy, which can be suitably described in a closed form using a global Hamiltonian-like function. From this function, we show that a perturbed system cannot always reach the equilibrium state predicted by quasi-steady-state cascade models, which would correspond to a reduced number of failures, and may instead undergo a larger cascade. We also show that, in the presence of two or more perturbations, the outcome depends strongly on the order and timing of the individual perturbations. These results offer new insights into the current understanding of cascading dynamics, with potential implications for control interventions.
Tracking problem solving by multivariate pattern analysis and Hidden Markov Model algorithms.
Anderson, John R
2012-03-01
Multivariate pattern analysis can be combined with Hidden Markov Model algorithms to track the second-by-second thinking as people solve complex problems. Two applications of this methodology are illustrated with a data set taken from children as they interacted with an intelligent tutoring system for algebra. The first "mind reading" application involves using fMRI activity to track what students are doing as they solve a sequence of algebra problems. The methodology achieves considerable accuracy at determining both what problem-solving step the students are taking and whether they are performing that step correctly. The second "model discovery" application involves using statistical model evaluation to determine how many substates are involved in performing a step of algebraic problem solving. This research indicates that different steps involve different numbers of substates and these substates are associated with different fluency in algebra problem solving. Copyright © 2011 Elsevier Ltd. All rights reserved.
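A hedged sketch of the "model discovery" step on synthetic features: fit Gaussian hidden Markov models with varying numbers of hidden substates and compare their fit (the paper works with fMRI multivariate patterns and formal model evaluation; this assumes the third-party `hmmlearn` package is installed):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)
# Synthetic feature time series with two underlying regimes.
X = np.concatenate([rng.normal(0, 1, (100, 5)), rng.normal(2, 1, (100, 5))])

for k in (1, 2, 3):
    hmm = GaussianHMM(n_components=k, covariance_type="diag",
                      n_iter=100, random_state=0).fit(X)
    print(k, hmm.score(X))   # log-likelihood; penalize (e.g. BIC) before choosing k
```

Raw likelihood always improves with more substates, so a penalized criterion is what actually decides how many substates a problem-solving step warrants.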
Wright, Michael T; Parker, David R; Amrhein, Christopher
2003-10-15
Sequential extraction procedures (SEPs) have been widely used to characterize the mobility, bioavailability, and potential toxicity of trace elements in soils and sediments. Although oft-criticized, these methods may perform best with redox-labile elements (As, Hg, Se) for which more discrete biogeochemical phases may arise from variations in oxidation number. We critically evaluated two published SEPs for Se for their specificity and precision by applying them to four discrete components in an inert silica matrix: soluble Se(VI) (selenate), Se(IV) (selenite) adsorbed onto goethite, elemental Se, and a metal selenide (FeSe; achavalite). These were extracted both individually and in a mixed model sediment. The more selective of the two procedures was modified to further improve its selectivity (SEP 2M). Both SEP 1 and SEP 2M quantitatively recovered soluble selenate but yielded incomplete recoveries of adsorbed selenite (64% and 81%, respectively). SEP 1 utilizes 0.1 M K2S2O8 to target "organically associated" Se, but this extractant also solubilized most of the elemental (64%) and iron selenide (91%) components of the model sediment. In SEP 2M, the Na2SO3 used in step III is effective in extracting elemental Se but also extracted 17% of the Se from the iron selenide, such that the elemental fraction would be overestimated should both forms coexist. Application of SEP 2M to eight wetland sediments further suggested that the Na2SO3 in step III extracts some organically associated Se, so a NaOH extraction was inserted beforehand to yield a further modification, SEP 2OH. Results using this five-step procedure suggested that the four-step SEP 2M could overestimate elemental Se by as much as 43% due to solubilization of organic Se. Although still imperfect in its selectivity, SEP 2OH may be the most suitable procedure for routine, accurate fractionation of Se in soils and sediments. However, the strong oxidant (NaOCl) used in the final step cannot distinguish between refractory organic forms of Se and pyritic Se that might form under sulfur-reducing conditions.
Interacting steps with finite-range interactions: Analytical approximation and numerical results
NASA Astrophysics Data System (ADS)
Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.
2013-05-01
We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
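For orientation, terrace-width distributions in this literature are commonly benchmarked against the generalized Wigner surmise (a standard reference form for nearest-neighbor-interacting steps, not the finite-range result derived in the paper):

```latex
P(s) \;=\; a\, s^{\varrho} \exp\!\left(-b\, s^{2}\right), \qquad \langle s \rangle = 1,
```

where s is the terrace width normalized to its mean, the exponent ϱ encodes the step-step repulsion strength, and a and b are fixed by normalization and the unit-mean condition.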
Accident models for two-lane rural roads : segments and intersections
DOT National Transportation Integrated Search
1998-10-01
This report is a direct step for the implementation of the Accident Analysis Module in the Interactive Highway Safety Design Model (IHSDM). The Accident Analysis Module is expected to estimate the safety of two-lane rural highway characteristics for ...
Schechner, Vered; Carmeli, Yehuda; Leshno, Moshe
2017-01-01
Clostridium difficile infection (CDI) is a common and potentially fatal healthcare-associated infection. Improving diagnostic tests and infection control measures may prevent transmission. We aimed to determine, in resource-limited settings, whether it is more effective and cost-effective to allocate resources to isolation or to diagnostics. We constructed a mathematical model of CDI transmission based on hospital data (9 medical wards, 350 beds) between March 2010 and February 2013. The model consisted of three compartments: susceptible patients, asymptomatic carriers and CDI patients. We used our model results to perform a cost-effectiveness analysis, comparing four strategies that were different combinations of 2 test methods (the two-step test and uniform PCR) and 2 infection control measures (contact isolation in multiple-bed rooms or single-bed rooms/cohorting). For each strategy, we calculated the annual cost (of CDI diagnosis and isolation) for a decrease of 1 in the average daily number of CDI patients; the strategy of the two-step test and contact isolation in multiple-bed rooms was the reference strategy. Our model showed that the average number of CDI patients increased exponentially as the transmission rate increased. Improving diagnosis by adopting uniform PCR assay reduced the average number of CDI cases per day per 350 beds from 9.4 to 8.5, while improving isolation by using single-bed rooms reduced the number to about 1; the latter was cost saving. CDI can be decreased by better isolation and more sensitive laboratory methods. From the hospital perspective, improving isolation is more cost-effective than improving diagnostics.
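A hedged sketch of a three-compartment ward model in the spirit of the paper: susceptible patients (S), asymptomatic carriers (C), and CDI patients (I), with transmission rate beta and patient turnover. All parameter values are invented for demonstration, not taken from the hospital data:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 350                                     # ward beds
beta, sigma, gamma, mu = 0.08, 0.05, 0.1, 1.0 / 7.0   # per-day rates (assumed)

def rhs(t, y):
    S, C, I = y
    infect = beta * S * (C + I) / N         # colonization pressure
    return [mu * N - infect - mu * S,       # admissions balance discharges
            infect - sigma * C - mu * C,    # carriers acquired, may progress...
            sigma * C - gamma * I - mu * I] # ...to symptomatic CDI, then resolve

sol = solve_ivp(rhs, (0, 365), [N - 10, 10, 0])
print(sol.y[:, -1])                         # S, C, I after one simulated year
```

Sweeping `beta` in such a model reproduces the qualitative finding that the CDI census grows steeply with the transmission rate, which is why isolation (lowering beta) outperforms faster diagnosis alone.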
Winters, Karl E.; Baldys, Stanley
2011-01-01
In cooperation with the City of Wichita Falls, the U.S. Geological Survey assessed channel changes on the Wichita River at Wichita Falls, Texas, and modeled historical floods to investigate possible causes and potential mitigation alternatives to higher flood stages in recent (2007 and 2008) floods. Extreme flooding occurred on the Wichita River on June 30, 2007, inundating 167 homes in Wichita Falls. Although a record flood stage was reached in June 2007, the peak discharge was much less than some historical floods at Wichita Falls. Streamflow and stage data from two gages on the Wichita River and one on Holliday Creek were used to assess the interaction of the two streams. Changes in the Wichita River channel were evaluated using historical aerial and ground photography, comparison of recent and historical cross sections, and comparison of channel roughness coefficients with those from earlier studies. The floods of 2007 and 2008 were modeled using a one-dimensional step-backwater model. Calibrated channel roughness was larger for the 2007 flood compared to the 2008 flood, and the 2007 flood peaked about 4 feet higher than the 2008 flood. Calibration of the 1941 flood yielded a channel roughness coefficient (Manning's n) of 0.030, which represents a fairly clean natural channel. The step-backwater model was also used to evaluate the following potential mitigation alternatives: (1) increasing the capacity of the bypass channel near River Road in Wichita Falls, Texas; (2) removal of obstructions near the Scott Avenue and Martin Luther King Junior Boulevard bridges in Wichita Falls, Texas; (3) widening of aggraded channel banks in the reach between Martin Luther King Junior Boulevard and River Road; and (4) reducing channel bank and overbank roughness. Reductions in water-surface elevations ranged from 0.1 foot to as much as 3.0 feet for the different mitigation alternatives. The effects of implementing a combination of different flood-mitigation alternatives were not investigated.
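As background, a one-dimensional step-backwater model advances the water-surface elevation from one cross section to the next by balancing energy with a Manning friction loss. The sketch below assumes a wide rectangular channel, subcritical flow computed in the upstream direction, and U.S. customary units; it is a generic illustration, not the calibrated model of this study.

import numpy as np

G = 32.2  # gravitational acceleration, ft/s^2

def friction_slope(n, q_unit, y):
    # Manning's equation for a wide rectangular channel (R ~ y), US units
    v = q_unit / y
    return (n * v / (1.49 * y ** (2.0 / 3.0))) ** 2

def step_backwater(ws_down, z_down, z_up, q_unit, L, n, iters=50):
    """Standard-step solve for the upstream water surface given the
    downstream one; q_unit is discharge per unit width (ft^2/s)."""
    y1 = ws_down - z_down
    v1 = q_unit / y1
    sf1 = friction_slope(n, q_unit, y1)
    y2 = y1  # initial guess for upstream depth
    for _ in range(iters):
        v2 = q_unit / y2
        hf = 0.5 * (sf1 + friction_slope(n, q_unit, y2)) * L
        # energy balance: z_up + y2 + v2^2/2g = z_down + y1 + v1^2/2g + hf
        y2_new = z_down + y1 + v1**2 / (2 * G) + hf - z_up - v2**2 / (2 * G)
        if abs(y2_new - y2) < 1e-6:
            break
        y2 = y2_new
    return z_up + y2

print(step_backwater(ws_down=100.0, z_down=90.0, z_up=90.5,
                     q_unit=50.0, L=500.0, n=0.030))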
Long-wave model for strongly anisotropic growth of a crystal step.
Khenner, Mikhail
2013-08-01
A continuum model for the dynamics of a single step with the strongly anisotropic line energy is formulated and analyzed. The step grows by attachment of adatoms from the lower terrace, onto which atoms adsorb from a vapor phase or from a molecular beam, and the desorption is nonnegligible (the "one-sided" model). Via a multiscale expansion, we derived a long-wave, strongly nonlinear, and strongly anisotropic evolution PDE for the step profile. Written in terms of the step slope, the PDE can be represented in a form similar to a convective Cahn-Hilliard equation. We performed the linear stability analysis and computed the nonlinear dynamics. Linear stability depends on whether the stiffness is minimum or maximum in the direction of the step growth. It also depends nontrivially on the combination of the anisotropy strength parameter and the atomic flux from the terrace to the step. Computations show formation and coarsening of a hill-and-valley structure superimposed onto a long-wavelength profile, which independently coarsens. Coarsening laws for the hill-and-valley structure are computed for two principal orientations of a maximum step stiffness, the increasing anisotropy strength, and the varying atomic flux.
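For orientation, a commonly studied form of the convective Cahn-Hilliard equation for a slope-like order parameter u(x, t) is (generic coefficients, not the ones derived in the paper):

\[ u_t + D\,u\,u_x = -\left(u - u^3 + \varepsilon^2 u_{xx}\right)_{xx}, \]

where D measures the strength of the driving force and \varepsilon the interface width.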
NASA Astrophysics Data System (ADS)
Safarzade, Zohre; Fathi, Reza; Shojaei Akbarabadi, Farideh; Bolorizadeh, Mohammad A.
2018-04-01
The scattering of a completely bare ion by atoms larger than hydrogen is at least a four-body interaction, and the charge transfer channel involves a two-step process. Amongst the two-step interactions of the high-velocity single charge transfer in an ion-atom collision, there is one whose amplitude demonstrates a peak in the angular distribution of the cross sections. This peak, the so-called Thomas peak, was predicted by Thomas in a two-step interaction, classically, which could also be described through three-body quantum mechanical models. This work discusses a four-body quantum treatment of the charge transfer in ion-atom collisions, where two-step interactions illustrating a Thomas peak are emphasized. In addition, the Pauli exclusion principle is taken into account for the initial and final states as well as the operators. It will be demonstrated that there is a momentum condition for each two-step interaction to occur in a single charge transfer channel, where new classical interactions lead to the Thomas mechanism.
NASA Astrophysics Data System (ADS)
Saletti, M.; Molnar, P.; Hassan, M. A.
2017-12-01
Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and that those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, where sediment is organized in patches and entrainment, transport and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with 2 grain sizes: fine grains, which can be mobilized both by large and moderate flows, and coarse grains, mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned before are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to systems more frequently perturbed by floods, compared to systems with a lower flood frequency. Our results highlight the important interactions between external hydrological forcing and internal geomorphic adjustment (e.g. jamming) on the response of step-pool streams, showing the potential of reduced-complexity models in fluvial geomorphology.
Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction
NASA Astrophysics Data System (ADS)
Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.
1996-11-01
Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been hindered by the difficulty of modeling the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with the DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: "slabs" and "blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the model's prediction for the mean values of chemical species?
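For reference, the LMSE/IEM closure named above has a standard form in which each scalar relaxes linearly toward its mean at a rate set by a single mixing timescale, which is precisely why questions (2) and (3) arise:

\[ \frac{d\phi}{dt} = -\frac{C_\phi}{2\,\tau_\phi}\left(\phi - \langle\phi\rangle\right), \]

where \tau_\phi is the scalar mixing timescale and C_\phi a model constant (the normalization varies between authors).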
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR (CMLR); and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process reproduces precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, the LWP input showed the least streamflow error in the Alapaha basin and the CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
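A minimal sketch of the two-step occurrence/amount scheme described above, using scikit-learn (the synthetic predictors and the rule for combining the two steps are illustrative assumptions, not the authors' exact formulation):

import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
# X: predictors at a station/grid point (e.g., elevation, latitude, longitude)
X = rng.normal(size=(500, 3))
y_amt = np.maximum(0.0, X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=500))
occ = (y_amt > 0).astype(int)          # wet/dry indicator

# Step 1: logistic regression for precipitation occurrence
occ_model = LogisticRegression().fit(X, occ)

# Step 2: amount model fitted on wet days only
wet = occ == 1
amt_model = LinearRegression().fit(X[wet], y_amt[wet])

# Combine: predict an amount only where a wet day is more likely than not
p_wet = occ_model.predict_proba(X)[:, 1]
estimate = np.where(p_wet > 0.5, amt_model.predict(X), 0.0)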
Molecular mechanism of H+ conduction in the single-file water chain of the gramicidin channel.
Pomès, Régis; Roux, Benoît
2002-05-01
The conduction of protons in the hydrogen-bonded chain of water molecules (or "proton wire") embedded in the lumen of gramicidin A is studied with molecular dynamics free energy simulations. The process may be described as a "hop-and-turn" or Grotthuss mechanism involving the chemical exchange (hop) of hydrogen nuclei between hydrogen-bonded water molecules arranged in single file in the lumen of the pore, and the subsequent reorganization (turn) of the hydrogen-bonded network. Accordingly, the conduction cycle is modeled by two complementary steps corresponding respectively to the translocation 1) of an ionic defect (H+) and 2) of a bonding defect along the hydrogen-bonded chain of water molecules in the pore interior. The molecular mechanism and the potential of mean force are analyzed for each of these two translocation steps. It is found that the mobility of protons in gramicidin A is essentially determined by the fine structure and the dynamic fluctuations of the hydrogen-bonded network. The translocation of H+ is mediated by spontaneous (thermal) fluctuations in the relative positions of oxygen atoms in the wire. In this diffusive mechanism, a shallow free-energy well slightly favors the presence of the excess proton near the middle of the channel. In the absence of H+, the water chain adopts either one of two polarized configurations, each of which corresponds to an oriented donor-acceptor hydrogen-bond pattern along the channel axis. Interconversion between these two conformations is an activated process that occurs through the sequential and directional reorientation of water molecules of the wire. The effect of hydrogen-bonding interactions between channel and water on proton translocation is analyzed from a comparison to the results obtained previously in a study of model nonpolar channels, in which such interactions were missing. Hydrogen-bond donation from water to the backbone carbonyl oxygen atoms lining the pore interior has a dual effect: it provides a coordination of water molecules well suited both to proton hydration and to high proton mobility, and it facilitates the slower reorientation or turn step of the Grotthuss mechanism by stabilizing intermediate configurations of the hydrogen-bonded network in which water molecules are in the process of flipping between their two preferred, polarized states. This mechanism offers a detailed molecular model for the rapid transport of protons in channels, in energy-transducing membrane proteins, and in enzymes.
Global phenomena from local rules: Peer-to-peer networks and crystal steps
NASA Astrophysics Data System (ADS)
Finkbiner, Amy
Even simple, deterministic rules can generate interesting behavior in dynamical systems. This dissertation examines some real-world systems for which fairly simple, locally defined rules yield useful or interesting properties in the system as a whole. In particular, we study routing in peer-to-peer networks and the motion of crystal steps. Peers can vary by three orders of magnitude in their capacities to process network traffic. This heterogeneity inspires our use of "proportionate load balancing," where each peer provides resources in proportion to its individual capacity. We provide an implementation that employs small, local adjustments to bring the entire network into a global balance. Analytically and through simulations, we demonstrate the effectiveness of proportionate load balancing on two routing methods for de Bruijn graphs, introducing a new "reversed" routing method which performs better than standard forward routing in some cases. The prevalence of peer-to-peer applications prompts companies to locate the hosts participating in these networks. We explore the use of supervised machine learning to identify peer-to-peer hosts without using application-specific information. We introduce a model for "triples," which exploits information about nearly contemporaneous flows to give a statistical picture of a host's activities. We find that triples, together with measurements of inbound vs. outbound traffic, can capture most of the behavior of peer-to-peer hosts. An understanding of crystal surface evolution is important for the development of modern nanoscale electronic devices. The most commonly studied surface features are steps, which form at low temperatures when the crystal is cut close to a plane of symmetry. Step bunching, when steps arrange into widely separated clusters of tightly packed steps, is one important step phenomenon. We analyze a discrete model for crystal steps, in which the motion of each step depends on the two steps on either side of it. We find a time-dependence term in the motion that does not appear in continuum models, and we determine an explicit dependence on step number.
Fogedby, Hans C; Metzler, Ralf; Svane, Axel
2004-08-01
We investigate by analytical means the stochastic equations of motion of a linear molecular motor model based on the concept of protein friction. Solving the coupled Langevin equations originally proposed by Mogilner et al. [Phys. Lett. A 237, 297 (1998)], and averaging over both the two-step internal conformational fluctuations and the thermal noise, we present explicit, analytical expressions for the average motion and the velocity-force relationship. Our results allow for a direct interpretation of details of this motor model which are not readily accessible from numerical solutions. In particular, we find that the model is able to predict physiologically reasonable values for the load-free motor velocity and the motor mobility.
Gradient Dynamics and Entropy Production Maximization
NASA Astrophysics Data System (ADS)
Janečka, Adam; Pavelka, Michal
2018-01-01
We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of a dissipation potential and entropy; it automatically satisfies Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of free energy (or another thermodynamic potential) and entropy production. It also leads to the linear Onsager reciprocal relations and it has proven successful in thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium; we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. In addition, a commonly used but not often mentioned step in entropy production maximization is pinpointed, and the condition of incompressibility is incorporated into gradient dynamics.
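In symbols, gradient dynamics in the sense used above evolves a state variable x through a dissipation potential \Xi evaluated at the conjugate variable supplied by the entropy S (a generic single-variable sketch; the paper treats richer state spaces):

\[ \dot{x} = \left.\frac{\partial \Xi(x, x^*)}{\partial x^*}\right|_{x^* = \frac{\partial S}{\partial x}}, \]

and the Onsager-type reciprocity mentioned above follows from the symmetry of the second derivatives of \Xi in x^*.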
Hu, Hua; Vervaeke, Koen; Graham, Lyle J; Storm, Johan F
2009-11-18
Synaptic input to a neuron may undergo various filtering steps, both locally and during transmission to the soma. Using simultaneous whole-cell recordings from soma and apical dendrites of rat CA1 hippocampal pyramidal cells, and biophysically detailed modeling, we found two complementary resonance (bandpass) filters of subthreshold voltage signals. Both filters favor signals in the theta (3-12 Hz) frequency range, but have opposite location, direction, and voltage dependencies: (1) dendritic H-resonance, caused by h/HCN-channels, filters signals propagating from soma to dendrite when the membrane potential is close to rest; and (2) somatic M-resonance, caused by M/Kv7/KCNQ and persistent Na(+) (NaP) channels, filters signals propagating from dendrite to soma when the membrane potential approaches spike threshold. Hippocampal pyramidal cells participate in theta network oscillations during behavior, and we suggest that these dual, polarized theta resonance mechanisms may convey voltage-dependent tuning of theta-mediated neural coding in the entorhinal/hippocampal system during locomotion, spatial navigation, memory, and sleep.
NASA Astrophysics Data System (ADS)
Bachmann, F.; de Oliveira, R.; Sigg, A.; Schnyder, V.; Delpero, T.; Jaehne, R.; Bergamini, A.; Michaud, V.; Ermanni, P.
2012-07-01
Emission reduction from civil aviation has been intensively addressed in the scientific community in recent years. The combined use of novel aircraft engine architectures such as open rotor engines and lightweight materials offers the potential for fuel savings, which could contribute significantly to reaching gas-emission targets, but suffers from vibration and noise issues. We investigated the potential improvement of the mechanical damping of open rotor composite fan blades by comparing two integrated passive damping systems: shape memory alloy wires and piezoelectric shunt circuits. Passive damping concepts were first validated on carbon fibre reinforced epoxy composite plates and then implemented in a 1:5 model of an open rotor blade manufactured by resin transfer moulding (RTM). A two-step process was proposed for the structural integration of the damping devices into a full composite fan blade. Forced vibration measurements of the plates and blade prototypes quantified the efficiency of both approaches and their related weight penalty.
Imaging simulation of active EO-camera
NASA Astrophysics Data System (ADS)
Pérez, José; Repasi, Endre
2018-04-01
A modeling scheme for active imaging through atmospheric turbulence is presented. The model consists of two parts: In the first part, the illumination laser beam is propagated to a target that is described by its reflectance properties, using the well-known split-step Fourier method for wave propagation. In the second part, the reflected intensity distribution imaged on a camera is computed using an empirical model developed for passive imaging through atmospheric turbulence. The split-step Fourier method requires carefully chosen simulation parameters. These simulation requirements together with the need to produce dynamic scenes with a large number of frames led us to implement the model on GPU. Validation of this implementation is shown for two different metrics. This model is well suited for Gated-Viewing applications. Examples of imaging simulation results are presented here.
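A minimal sketch of the split-step Fourier propagation underlying the first part of the model (grid size, wavelength, step length, and the zero phase screens are placeholder assumptions; real screens would carry the chosen turbulence statistics):

import numpy as np

N, dx = 512, 5e-3            # grid points and sample spacing [m]
wav = 1.55e-6                # wavelength [m]
dz = 100.0                   # length of one propagation step [m]

fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
# paraxial (Fresnel) transfer function for one vacuum step of length dz
H = np.exp(-1j * np.pi * wav * dz * (FX**2 + FY**2))

def split_step(field, phase_screens):
    # alternate spectral-domain vacuum propagation and spatial-domain phase screens
    for screen in phase_screens:
        field = np.fft.ifft2(np.fft.fft2(field) * H)
        field = field * np.exp(1j * screen)
    return field

i = np.arange(N) - N / 2
beam = np.exp(-(i[:, None]**2 + i[None, :]**2) * (dx / 0.05)**2)  # 5 cm Gaussian
screens = [np.zeros((N, N)) for _ in range(5)]                    # placeholders
out = split_step(beam, screens)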
Context-Based Urban Terrain Reconstruction from Uav-Videos for Geoinformation Applications
NASA Astrophysics Data System (ADS)
Bulatov, D.; Solbrig, P.; Gross, H.; Wernerus, P.; Repasi, E.; Heipke, C.
2011-09-01
Urban terrain reconstruction has many applications in areas of civil engineering, urban planning, surveillance and defense research. Therefore the needs of covering ad-hoc demand and performing close-range urban terrain reconstruction with miniaturized and relatively inexpensive sensor platforms are constantly growing. Using (miniaturized) unmanned aerial vehicles, (M)UAVs, represents one of the most attractive alternatives to conventional large-scale aerial imagery. We cover in this paper a four-step procedure for obtaining georeferenced 3D urban models from video sequences. The four steps of the procedure - orientation, dense reconstruction, urban terrain modeling and geo-referencing - are robust, straightforward, and nearly fully automatic. The last two steps - namely, urban terrain modeling from almost-nadir videos and co-registration of models - represent the main contribution of this work and will therefore be covered in more detail. The essential substeps of the third step include digital terrain model (DTM) extraction, segregation of buildings from vegetation, and instantiation of building and tree models. The last step is subdivided into quasi-intrasensorial registration of Euclidean reconstructions and intersensorial registration with a geo-referenced orthophoto. Finally, we present reconstruction results from a real data set and outline ideas for future work.
Panzacchi, Manuela; Van Moorter, Bram; Strand, Olav; Saerens, Marco; Kivimäki, Ilkka; St Clair, Colleen C; Herfindal, Ivar; Boitani, Luigi
2016-01-01
The loss, fragmentation and degradation of habitat everywhere on Earth prompt increasing attention to identifying landscape features that support animal movement (corridors) or impede it (barriers). Most algorithms used to predict corridors assume that animals move through preferred habitat either optimally (e.g. least cost path, LCP) or as random walkers (e.g. current models), but neither extreme is realistic. We propose that corridors and barriers are two sides of the same coin and that animals experience landscapes as spatiotemporally dynamic corridor-barrier continua connecting (separating) functional areas where individuals fulfil specific ecological processes. Based on this conceptual framework, we propose a novel methodological approach that uses high-resolution individual-based movement data to predict corridor-barrier continua with increased realism. Our approach consists of two innovations. First, we use step selection functions (SSF) to predict friction maps quantifying corridor-barrier continua for tactical steps between consecutive locations. Secondly, we introduce to movement ecology the randomized shortest path algorithm (RSP), which operates on friction maps to predict the corridor-barrier continuum for strategic movements between functional areas. By modulating the parameter θ, which controls the trade-off between exploration and optimal exploitation of the environment, RSP bridges the gap between algorithms assuming optimal movements (as θ approaches infinity, RSP is equivalent to LCP) or random walk (as θ → 0, RSP approaches current models). Using this approach, we identify migration corridors for GPS-monitored wild reindeer (Rangifer t. tarandus) in Norway. We demonstrate that reindeer movement is best predicted by an intermediate value of θ, indicative of a movement trade-off between optimization and exploration. Model calibration allows identification of a corridor-barrier continuum that closely fits empirical data and demonstrates that RSP outperforms models that assume either optimality or random walk. The proposed approach models the multiscale cognitive maps by which animals likely navigate real landscapes and generalizes the most common algorithms for identifying corridors. Because suboptimal, but non-random, movement strategies are likely widespread, our approach has the potential to predict more realistic corridor-barrier continua for a wide range of species. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
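For concreteness, the RSP construction can be sketched in a few lines of numpy: the reference random walk is damped elementwise by the movement costs, and the fundamental matrix Z then yields expected path statistics. The toy matrices below are illustrative; this follows the published RSP formalism rather than the authors' code.

import numpy as np

def rsp_fundamental_matrix(P_ref, C, theta):
    """Randomized shortest paths: W = P_ref o exp(-theta * C) (elementwise),
    Z = (I - W)^-1. Large theta approaches least-cost paths; theta -> 0
    recovers the unbiased random walk."""
    W = P_ref * np.exp(-theta * C)
    return np.linalg.inv(np.eye(W.shape[0]) - W)

# toy 3-node landscape: uniform reference walk, unit movement costs
P_ref = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])
C = np.ones((3, 3)) - np.eye(3)
Z = rsp_fundamental_matrix(P_ref, C, theta=1.0)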
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-07
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
Simplified jet fuel reaction mechanism for lean burn combustion application
NASA Technical Reports Server (NTRS)
Lee, Chi-Ming; Kundu, Krishna; Ghorashi, Bahman
1993-01-01
Successful modeling of combustion and emissions in gas turbine engine combustors requires an adequate description of the reaction mechanism. Detailed mechanisms contain a large number of chemical species participating simultaneously in many elementary kinetic steps. Current computational fluid dynamic models must include fuel vaporization, fuel-air mixing, chemical reactions, and complicated boundary geometries. A five-step Jet-A fuel mechanism which involves pyrolysis and subsequent oxidation of paraffin and aromatic compounds is presented. This mechanism is verified by comparing with Jet-A fuel ignition delay time experimental data, and species concentrations obtained from flametube experiments. This five-step mechanism appears to be better than the current one- and two-step mechanisms.
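Each step in such a reduced mechanism is typically assigned a modified Arrhenius rate, and the five step rates feed the species source terms in the CFD model (the published Jet-A coefficients are not reproduced here):

\[ k_i = A_i\, T^{b_i} \exp\!\left(-\frac{E_{a,i}}{R\,T}\right), \qquad i = 1, \dots, 5. \]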
Danker, Jared F; Anderson, John R
2007-04-15
In naturalistic algebra problem solving, the cognitive processes of representation and retrieval are typically confounded, in that transformations of the equations typically require retrieval of mathematical facts. Previous work using cognitive modeling has associated activity in the prefrontal cortex with the retrieval demands of algebra problems and activity in the posterior parietal cortex with the transformational demands of algebra problems, but these regions tend to behave similarly in response to task manipulations (Anderson, J.R., Qin, Y., Sohn, M.-H., Stenger, V.A., Carter, C.S., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261; Qin, Y., Carter, C.S., Silk, E.M., Stenger, A., Fissell, K., Goode, A., Anderson, J.R., 2004. The change of brain activation patterns as children learn algebra equation solving. Proc. Natl. Acad. Sci. 101, 5686-5691). With this study we attempt to isolate activity in these two regions by using a multi-step algebra task in which transformation (parietal) is manipulated in the first step and retrieval (prefrontal) is manipulated in the second step. Counter to our initial predictions, both brain regions were differentially active during both steps. We designed two cognitive models, one encompassing our initial assumptions and one in which both processes were engaged during both steps. The first model provided a poor fit to the behavioral and neural data, while the second model fit both well. This simultaneously emphasizes the strong relationship between retrieval and representation in mathematical reasoning and demonstrates that cognitive modeling can serve as a useful tool for understanding task manipulations in neuroimaging experiments.
NASA Astrophysics Data System (ADS)
Truong, Nguyen Hoang Long; Huan Giang, Ngoc; Binh Duong, Trong
2018-03-01
This paper aims at finding practical strategies for designing sustainable high-rise apartment buildings in Ho Chi Minh City in response to varied municipal issues. The study proceeds in two steps. Step 1 identifies the critical issues of Ho Chi Minh City that are associated with high-rise apartment building projects. Step 2 identifies potential, applicable strategies that address the critical issues from Step 1, with reference to seven selected assessment methods. The study finds a set of 58 strategies applicable to designing sustainable high-rise apartment buildings in Ho Chi Minh City.
Farias, Manuel J S; Cheuquepán, William; Tanaka, Auro A; Feliu, Juan M
2018-03-15
This work deals with the identification of preferential site-specific activation at a model Pt surface during a multiproduct reaction. The (110)-type steps of a Pt(332) surface were selectively marked by attaching isotope-labeled 13CO molecules to them, and ethanol oxidation was probed by in situ Fourier transform infrared spectroscopy in order to precisely determine the specific sites at which CO2, acetic acid, and acetaldehyde were preferentially formed. The (110) steps were active for splitting the C-C bond, but unexpectedly, we provide evidence that the pathway of CO2 formation was preferentially activated at (111) terraces, rather than at (110) steps. Acetaldehyde was formed at (111) terraces at potentials comparable to those for CO2 formation also at (111) terraces, while the acetic acid formation pathway became active only when the (110) steps were released by the oxidation of adsorbed 13CO, at potentials higher than for the formation of CO2 at (111) terraces of the stepped surface.
Dutta, Amit K.; Tran, Travis; Napadensky, Boris; Teella, Achyuta; Brookhart, Gary; Ropp, Philip A.; Zhang, Ada W.; Tustian, Andrew D.; Zydney, Andrew L.; Shinkazh, Oleg
2015-01-01
Recent studies using simple model systems have demonstrated that Continuous Countercurrent Tangential Chromatography (CCTC) has the potential to overcome many of the limitations of conventional Protein A chromatography using packed columns. The objective of this work was to optimize and implement a CCTC system for monoclonal antibody purification from clarified Chinese Hamster Ovary (CHO) cell culture fluid using a commercial Protein A resin. Several improvements were introduced to the previous CCTC system including the use of retentate pumps to maintain stable resin concentrations in the flowing slurry, the elimination of a slurry holding tank to improve productivity, and the introduction of an "after binder" to the binding step to increase antibody recovery. A kinetic binding model was developed to estimate the required residence times in the multi-stage binding step to optimize yield and productivity. Data were obtained by purifying two commercial antibodies from two different manufacturers, one with low titer (~0.67 g/L) and one with high titer (~6.9 g/L), demonstrating the versatility of the CCTC system. Host cell protein removal, antibody yields and purities were similar to those obtained with conventional column chromatography; however, the CCTC system showed much higher productivity. These results clearly demonstrate the capabilities of continuous countercurrent tangential chromatography for the commercial purification of monoclonal antibody products. PMID:25747172
Hodges, Susan; Stewart, Sandra Bitonti; Hotelling, Barbara; Romano, Amy
2007-01-01
A consumer advocate, two childbirth educators, and a certified nurse-midwife each provide commentary on the effectiveness of and potential uses for the Evidence Basis for the Ten Steps of Mother-Friendly Care. PMID:18523676
Khan, Md Abdul Shafeeuulla; Ganguly, Bishwajit
2012-05-01
Oximate anions are used as potential reactivating agents for OP-inhibited AChE because they possess enhanced nucleophilic reactivity due to the α-effect. We have demonstrated the process of reactivating the VX-AChE adduct with formoximate and hydroxylamine anions by applying the DFT approach at the B3LYP/6-311G(d,p) level of theory. The calculated results suggest that the hydroxylamine anion is more efficient than the formoximate anion at reactivating VX-inhibited AChE. The reaction of the formoximate anion and the VX-AChE adduct is a three-step process, while the reaction of the hydroxylamine anion with the VX-AChE adduct appears to be a two-step process. The rate-determining step in the process is the initial attack on the VX of the VX-AChE adduct by the nucleophile. The subsequent steps are exergonic in nature. The potential energy surface (PES) for the reaction of the VX-AChE adduct with the hydroxylamine anion reveals that the reactivation process is facilitated by a free energy of activation that is lower (by 1.7 kcal mol(-1)) than that of the formoximate anion at the B3LYP/6-311G(d,p) level of theory. The higher free energy of activation for the reverse reactivation reaction between the hydroxylamine anion and the VX-serine adduct further suggests that the hydroxylamine anion is a very good antidote agent for the reactivation process. The activation barriers calculated in solvent using the polarizable continuum model (PCM) for the reactivation of the VX-AChE adduct with the hydroxylamine anion were also found to be low. The calculated results suggest that V-series compounds can be more toxic than G-series compounds, which is in accord with earlier experimental observations.
Braverman, Ami; Berger, Andrea; Meiran, Nachshon
2014-07-01
According to "hierarchical" multi-step theories, response selection is preceded by a decision regarding which task rule should be executed. Other theories assume a "flat" single-step architecture in which task information and stimulus information are simultaneously considered. Using task switching, the authors independently manipulated two kinds of conflict: task conflict (with information that potentially triggers the relevant or the competing task rule/identity) and response conflict (with information that potentially triggers the relevant or the competing response code/motor response). Event related potentials indicated that the task conflict effect began before the response conflict effect and carried on in parallel with it. These results are more in line with the hierarchical view. Copyright © 2014 Elsevier Inc. All rights reserved.
Zhao, Wenle; Pauls, Keith
2015-01-01
Background Centralized outcome adjudication has been used widely in multi-center clinical trials in order to prevent potential biases and to reduce variations in important safety and efficacy outcome assessments. Adjudication procedures could vary significantly among different studies. In practice, the coordination of outcome adjudication procedures in many multicenter clinical trials remains as a manual process with low efficiency and high risk of delay. Motivated by the demands from two large clinical trial networks, a generic outcome adjudication module has been developed by the network’s data management center within a homegrown clinical trial management system. In this paper, the system design strategy and database structure are presented. Methods A generic database model was created to transfer different adjudication procedures into a unified set of sequential adjudication steps. Each adjudication step was defined by one activate condition, one lock condition, one to five categorical data items to capture adjudication results, and one free text field for general comments. Based on this model, a generic outcome adjudication user interface and a generic data processing program were developed within a homegrown clinical trial management system to provide automated coordination of outcome adjudication. Results By the end of 2014, this generic outcome adjudication module had been implemented in 10 multicenter trials. A total of 29 adjudication procedures were defined with the number of adjudication steps varying from 1 to 7. The implementation of a new adjudication procedure in this generic module took an experienced programmer one or two days. A total of 7,336 outcome events had been adjudicated and 16,235 adjudication step activities had been recorded. In a multicenter trial, 1144 safety outcome event submissions went through a three-step adjudication procedure and reported a median of 3.95 days from safety event case report form submission to adjudication completion. In another trial, 277 clinical outcome events were adjudicated by a six-step procedure and took a median of 23.84 days from outcome event case report form submission to adjudication procedure completion. Conclusions A generic outcome adjudication module integrated in the clinical trial management system made the automated coordination of efficacy and safety outcome adjudication a reality. PMID:26464429
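A minimal sketch of the generic step definition described above, written as Python data structures (names and the example procedure are illustrative assumptions, not the authors' schema):

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AdjudicationStep:
    step_order: int                 # position in the 1..7-step sequence
    activate_condition: str         # one activate condition
    lock_condition: str             # one lock condition
    categorical_items: List[str]    # one to five categorical result fields
    comment: Optional[str] = None   # one free-text general comment

@dataclass
class AdjudicationProcedure:
    name: str
    steps: List[AdjudicationStep] = field(default_factory=list)

# a hypothetical three-step safety-outcome procedure, analogous to the one reported above
safety = AdjudicationProcedure("safety_outcome", [
    AdjudicationStep(1, "CRF submitted", "reviewer assigned", ["event_type"]),
    AdjudicationStep(2, "step 1 locked", "review complete", ["severity", "relatedness"]),
    AdjudicationStep(3, "step 2 locked", "sign-off recorded", ["final_classification"]),
])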
Smejkal, Benjamin; Agrawal, Neeraj J; Helk, Bernhard; Schulz, Henk; Giffard, Marion; Mechelke, Matthias; Ortner, Franziska; Heckmeier, Philipp; Trout, Bernhardt L; Hekmat, Dariusch
2013-09-01
The potential of process crystallization for purification of a therapeutic monoclonal IgG1 antibody was studied. The purified antibody was crystallized in non-agitated micro-batch experiments for the first time. A direct crystallization from clarified CHO cell culture harvest was inhibited by high salt concentrations. The salt concentration of the harvest was reduced by a simple pretreatment step. The crystallization process from pretreated harvest was successfully transferred to stirred tanks and scaled-up from the mL-scale to the 1 L-scale for the first time. The crystallization yield after 24 h was 88-90%. A high purity of 98.5% was reached after a single recrystallization step. A 17-fold host cell protein reduction was achieved and DNA content was reduced below the detection limit. High biological activity of the therapeutic antibody was maintained during the crystallization, dissolving, and recrystallization steps. Crystallization was also performed with impure solutions from intermediate steps of a standard monoclonal antibody purification process. It was shown that process crystallization has a strong potential to replace Protein A chromatography. Fast dissolution of the crystals was possible. Furthermore, it was shown that crystallization can be used as a concentrating step and can replace several ultra-/diafiltration steps. Molecular modeling suggested that a negative electrostatic region with interspersed exposed hydrophobic residues on the Fv domain of this antibody is responsible for the high crystallization propensity. As a result, process crystallization, following the identification of highly crystallizable antibodies using molecular modeling tools, can be recognized as an efficient, scalable, fast, and inexpensive alternative to key steps of a standard purification process for therapeutic antibodies. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Ayala, Conxi; Izquierdo-Llavall, Esther; Pueyo, Emilio Luis; Rubio, Félix; Rodríguez-Pintó, Adriana; María Casas, Antonio; Oliva-Urcía, Belén; Rey-Moral, Carmen
2015-04-01
Obtaining an accurate 3D image of the geometry and physical properties of geological structures at depth is a challenge regardless of the scale and the aim of the investigation. In this framework, assessing the origin of the uncertainties and reducing them is a key issue when building a 3D reconstruction of a target area. Usually, this process involves an interdisciplinary approach and also the use of different software whose inputs and outputs have to be interoperable. We have designed a new workflow for 2.5D and 3D geological and potential field modelling, especially useful in areas where no seismic data are available. The final aim is to obtain a 3D geological model, at a regional or local scale, with the smallest possible uncertainty. Once the study area and the working scale are decided, the first step is to compile all preexisting data and to determine their uncertainties. If necessary, a survey is carried out to acquire additional data (e.g., gravity, magnetic or petrophysical data) to have an appropriate coverage of information and rock samples. A thorough study of the petrophysical properties is made to determine the density, magnetic susceptibility and remanence that will be assigned to each lithology, together with its corresponding uncertainty. Finally, the modelling process is started, and it includes a feedback between geology and potential fields in order to progressively refine the model until it fits all the existing data. The procedure starts with the construction of balanced geological cross sections from field work, available geological maps, and data from stratigraphic columns, boreholes, etc. These geological cross sections are exported and imported into GMSYS software to carry out the 2.5D potential field modelling. The model improves and its uncertainty is reduced through the feedback between the geologists and the geophysicists. Once the potential field anomalies are well adjusted, the cross sections are exported into 3DMove (Midland Valley) to construct a preliminary balanced 3D model. Inversion of the potential field data in GeoModeller is the final step to obtain a 3D model consistent with the input data and with the minimum possible uncertainty. Our case study is a 3D model of the Linking Zone between the Iberian Range and the Catalonian Coastal Ranges (NE Spain, an area of 11,325 km2). No seismic data were available, so we carried out several surveys to acquire new gravity data and rock samples to complete the data from IGME petrophysical databases. A total of 1470 samples have been used to define the physical properties for the modelled lithologies. The gravity data consist of 2902 stations. The initial model is based on the surface geology, eleven boreholes and 8 balanced geological cross sections built in the frame of this research. The final model, resulting from gravimetric inversion, has allowed us to define the geometry of the top of the basement as well as to identify two structures (anticlines) as potential CO2 reservoirs.
Katsounaros, Ioannis; Chen, Ting; Gewirth, Andrew A.; ...
2016-01-12
The two traditional mechanisms of the electrochemical ammonia oxidation consider only concerted proton-electron transfer elementary steps and thus they predict that the rate–potential relationship is independent of the pH on the pH-corrected RHE potential scale. In this letter we show that this is not the case: the increase of the solution pH shifts the onset of the NH3-to-N2 oxidation on Pt(100) to lower potentials and also leads to a higher surface concentration of formed NOad before the latter is oxidized to nitrite. Therefore, we present a new mechanism for the ammonia oxidation which incorporates a deprotonation step occurring prior to the electron transfer. The deprotonation step yields a negatively charged surface-adsorbed species which is discharged in a subsequent electron transfer step before the N–N bond formation. The negatively charged species is thus a precursor for the formation of N2 and NO. The new mechanism should be a future guide for computational studies aiming at the identification of intermediates and corresponding activation barriers for the elementary steps. As a result, ammonia oxidation is a new example of a bond-forming reaction on (100) terraces which involves decoupled proton-electron transfer.
A correlation between extensional displacement and architecture of ionic polymer transducers
NASA Astrophysics Data System (ADS)
Akle, Barbar J.; Duncan, Andrew; Leo, Donald J.
2008-03-01
Ionic polymer transducers (IPT), sometimes referred to as artificial muscles, are known to generate a large bending strain and a moderate stress at low applied voltages (<5 V). Bending actuators have limited engineering applications due to their low forcing capabilities and the need for complicated external devices to convert the bending action into the rotating or linear motion desired in most devices. Recently, Akle and Leo reported extensional actuation in ionic polymer transducers. In this study, extensional IPTs are characterized as a function of transducer architecture: two actuators are built and their extensional displacement responses are characterized. The transducers have similar electrodes, while the middle membrane is a Nafion/ionic liquid composite in the first and an aluminum oxide/ionic liquid composite in the second. The first transducer is characterized for constant current input, voltage step input, and sweep voltage input. The model prediction agrees in both shape and magnitude with the constant current experiment. The values of α and β used are within the range of values reported in Akle and Leo. Both experiments and model demonstrate that there is a preferred direction of applying the potential so that the transducer exhibits large deformations. In the step response, the model predicted the negative-potential response and the early part of the step at positive potential well, but failed to predict the displacement after approximately 180 s had elapsed. The model predicted the sweep response well, and the observed first harmonic in the displacement further confirmed the existence of a quadratic term in the charge response. Finally, the aluminum oxide based transducer is characterized for a step response and compared to the Nafion-based transducer; it exhibited a faster extensional electromechanical response and is expected to provide larger forces and hence a larger energy density.
Deformed shape invariance symmetry and potentials in curved space with two known eigenstates
NASA Astrophysics Data System (ADS)
Quesne, C.
2018-04-01
We consider two families of extensions of the oscillator in a d-dimensional constant-curvature space and analyze them in a deformed supersymmetric framework, wherein the starting oscillator is known to exhibit a deformed shape invariance property. We show that the first two members of each extension family are also endowed with such a property, provided some constraint conditions relating the potential parameters are satisfied, in other words they are conditionally deformed shape invariant. Since, in the second step of the construction of a partner potential hierarchy, the constraint conditions change, we impose compatibility conditions between the two sets to build potentials with known ground and first excited states. To extend such results to any members of the two families, we devise a general method wherein the first two superpotentials, the first two partner potentials, and the first two eigenstates of the starting potential are built from some generating function W+(r) [and its accompanying function W-(r)].
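For orientation, in the undeformed supersymmetric setting the hierarchy referred to above is generated from a superpotential W(r) by the standard relations (units with 2m = \hbar = 1; the deformed construction modifies the derivative term):

\[ V_\pm(r) = W^2(r) \pm W'(r), \qquad \psi_0^{(-)}(r) \propto \exp\!\left(-\int^r W(r')\,dr'\right), \]

where \psi_0^{(-)} is the nodeless ground state of V_-.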
NASA Astrophysics Data System (ADS)
Gómez, José J. Arroyo; Zubieta, Carolina; Ferullo, Ricardo M.; García, Silvana G.
2016-02-01
The electrochemical formation of Au nanoparticles on a highly ordered pyrolytic graphite (HOPG) substrate using conventional electrochemical techniques and ex-situ AFM is reported. From the potentiostatic current-transient studies, the Au electrodeposition process on HOPG surfaces was described, within the potential range considered, by a model involving instantaneous nucleation and diffusion-controlled 3D growth, which was corroborated by the microscopic analysis. Initially, three-dimensional (3D) hemispherical nanoparticles distributed on surface defects (step edges) of the substrate were observed, with increasing particle size at more negative potentials. The double potential pulse technique allowed the formation of rounded deposits at low deposition potentials, which tend to form lines of nuclei aligned in defined directions leading to 3D ordered structures. By choosing suitable nucleation and growth pulses, one-dimensional (1D) deposits were possible, preferentially located on step edges of the HOPG substrate. Quantum-mechanical calculations confirmed the tendency of Au atoms to join selectively at surface defects, such as the HOPG step edges, in the early stages of Au electrodeposition.
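The instantaneous-nucleation, diffusion-controlled growth regime invoked above is conventionally identified by comparing the measured transient, scaled by its current maximum (I_m, t_m), against the classical Scharifker-Hills curve (assuming that standard diagnostic is the one applied here):

\[ \left(\frac{I}{I_m}\right)^2 = \frac{1.9542}{t/t_m}\left\{1 - \exp\!\left[-1.2564\,\frac{t}{t_m}\right]\right\}^2. \]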
Individual tree-diameter growth model for the Northeastern United States
Richard M. Teck; Donald E. Hilt
1991-01-01
Describes a distance-independent individual-tree diameter growth model for the Northeastern United States. Diameter growth is predicted in two steps using a two parameter, sigmoidal growth function modified by a one parameter exponential decay function with species-specific coefficients. Coefficients are presented for 28 species groups. The model accounts for...
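One functional shape consistent with that description (illustrative only; the species-specific form and coefficients are in the report, not this summary) multiplies a two-parameter growth term by a one-parameter decay in diameter:

\[ \Delta D = b_1\, D^{b_2}\, e^{-b_3 D}, \]

which rises, peaks, and declines with diameter D, producing sigmoidal diameter growth over time.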
On contact modelling in isogeometric analysis
NASA Astrophysics Data System (ADS)
Cardoso, R. P. R.; Adetoro, O. B.
2017-11-01
IsoGeometric Analysis (IGA) has proved to be a reliable numerical tool for the simulation of structural behaviour and fluid mechanics. The main reasons for this popularity are: (i) the possibility of using higher order polynomials for the basis functions; (ii) the high convergence rates that can be achieved; (iii) the possibility to operate directly on CAD geometry without the need to resort to a mesh of elements. The major drawback of IGA is the non-interpolatory character of the basis functions, which adds a difficulty in handling essential boundary conditions and makes contact analysis particularly challenging. In this work, IGA is expanded to include frictionless contact procedures for sheet metal forming analyses. Non-Uniform Rational B-Splines (NURBS) are used for the modelling of the rigid tools as well as for the modelling of the deformable blank sheet. The contact methods developed are based on a two-step contact search scheme: in the first step, a global search algorithm allocates contact knots to potential contact faces; in the second (local) step, point inversion techniques are used to calculate the contact penetration gap. For completeness, elastoplastic procedures are also included for a proper description of the entire IGA of sheet metal forming processes.
Regional demand forecasting and simulation model: user's manual. Task 4, final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parhizgari, A M
1978-09-25
The Department of Energy's Regional Demand Forecasting Model (RDFOR) is an econometric and simulation system designed to estimate annual fuel-sector-region specific consumption of energy for the US. Its purposes are to (1) provide the demand side of the Project Independence Evaluation System (PIES), (2) enhance our empirical insights into the structure of US energy demand, and (3) assist policymakers in their decisions on and formulations of various energy policies and/or scenarios. This report provides a self-contained user's manual for interpreting, utilizing, and implementing RDFOR simulation software packages. Chapters I and II present the theoretical structure and the simulation of RDFOR, respectively. Chapter III describes several potential scenarios which are (or have been) utilized in the RDFOR simulations. Chapter IV presents an overview of the complete software package utilized in simulation. Chapter V provides the detailed explanation and documentation of this package. The last chapter describes step-by-step implementation of the simulation package using the two scenarios detailed in Chapter III. The RDFOR model contains 14 fuels: gasoline, electricity, natural gas, distillate and residual fuels, liquid gases, jet fuel, coal, oil, petroleum products, asphalt, petroleum coke, metallurgical coal, and total fuels, spread over residential, commercial, industrial, and transportation sectors.
González-Madroño, A; Mancha, A; Rodríguez, F J; Culebras, J; de Ulibarri, J I
2012-01-01
To ratify previous validations of the CONUT nutritional screening tool by developing two probabilistic models using the parameters included in the CONUT, and to see whether the CONUT's effectiveness could be improved. This is a two-step prospective study. In Step 1, 101 patients were randomly selected, and the SGA and CONUT were performed. With the data obtained, an unconditional logistic regression model was developed, and two variants of CONUT were constructed: Model 1 was built by logistic regression; Model 2 was built by dividing the probabilities of undernutrition obtained in Model 1 into seven regular intervals. In Step 2, 60 patients were selected and underwent the SGA, the original CONUT, and the new models. The diagnostic efficacy of the original CONUT and the new models was tested by means of ROC curves. Samples 1 and 2 were then pooled to measure the degree of agreement between the original CONUT and the SGA, and diagnostic efficacy parameters were calculated. No statistically significant differences were found between samples 1 and 2 regarding age, sex, or medical/surgical distribution, and undernutrition rates were similar (over 40%). The AUCs for the ROC curves were 0.862 for the original CONUT, and 0.839 and 0.874 for Models 1 and 2, respectively. The kappa index for the CONUT and SGA was 0.680. The CONUT, with the original scores assigned by the authors, is as good as the mathematical models and thus is a valuable tool, highly useful and efficient for clinical undernutrition screening.
Kinematic Structural Modelling in Bayesian Networks
NASA Astrophysics Data System (ADS)
Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.
2017-04-01
We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modeling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified physical laws into the model, at the cost of a substantial increase in the number of uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework. In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.
Experimental study on the stability and failure of individual step-pool
NASA Astrophysics Data System (ADS)
Zhang, Chendi; Xu, Mengzhen; Hassan, Marwan A.; Chartrand, Shawn M.; Wang, Zhaoyin
2018-06-01
Step-pools are one of the most common bedforms in mountain streams, and their stability and failure play a significant role in riverbed stability and fluvial processes. Given this importance, flume experiments were performed with a manually constructed step-pool model. The experiments were carried out with a constant flow rate to study features of step-pool stability as well as failure mechanisms. The results demonstrate that motion of the keystone grain (KS) caused 90% of the total failure events. Either the pool reached its maximum depth and exhibited relative stability for a period before step failure (the stable phase), or it collapsed before its full development. The critical scour depth for the pool increased linearly with discharge until the trend was interrupted by step failure. Variability of the stable phase duration ranged over one order of magnitude, whereas variability of pool scour depth was constrained within 50%. Step adjustment was detected in almost all of the runs with step-pool failure and was one or two orders of magnitude smaller than the diameter of the step stones. Two discharge regimes for step-pool failure were revealed: one regime captures threshold conditions and frames possible step-pool failure, whereas the second regime captures step-pool failure conditions and is the discharge of an exceptional event. In the transitional stage between the two discharge regimes, pool and step adjustment magnitudes displayed relatively large variabilities, which resulted in feedbacks that extended the duration of step-pool stability. Step adjustment, which was a type of structural deformation, increased significantly before step failure. As a result, we consider step deformation, rather than pool scour (which displayed relative stability during step deformations in our experiments), to be the direct explanation for step-pool failure.
Uncertainty quantification methodologies development for stress corrosion cracking of canister welds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dingreville, Remi Philippe Michel; Bryan, Charles R.
2016-09-30
This letter report presents a probabilistic performance assessment model to evaluate the probability of canister failure (through-wall penetration) by SCC. The model first assesses whether environmental conditions for SCC – the presence of an aqueous film – are present at canister weld locations (where tensile stresses are likely to occur) on the canister surface. Geometry-specific storage system thermal models and weather data sets representative of U.S. spent nuclear fuel (SNF) storage sites are implemented to evaluate location-specific canister surface temperature and relative humidity (RH). As the canister cools and aqueous conditions become possible, the occurrence of corrosion is evaluated. Corrosion is modeled as a two-step process: first, pitting is initiated, and the extent and depth of pitting is a function of the chloride surface load and the environmental conditions (temperature and RH). Second, as corrosion penetration increases, the pit eventually transitions to a SCC crack, with crack initiation becoming more likely with increasing pit depth. Once pits convert to cracks, a crack growth model is implemented. The SCC growth model includes rate dependencies on both temperature and crack tip stress intensity factor, and crack growth only occurs in time steps when aqueous conditions are predicted. The model suggests that SCC is likely to occur over potential SNF interim storage intervals; however, this result is based on many modeling assumptions. Sensitivity analyses provide information on the model assumptions and parameter values that have the greatest impact on predicted storage canister performance, and provide guidance for further research to reduce uncertainties.
Effectiveness of en masse versus two-step retraction: a systematic review and meta-analysis.
Rizk, Mumen Z; Mohammed, Hisham; Ismael, Omar; Bearn, David R
2018-01-05
This review aims to compare the effectiveness of en masse and two-step retraction methods during orthodontic space closure regarding anchorage preservation and anterior segment retraction, and to assess their effect on the duration of treatment and root resorption. An electronic search for potentially eligible randomized controlled trials and prospective controlled trials was performed in five electronic databases up to July 2017. The process of study selection, data extraction, and quality assessment was performed by two reviewers independently. A narrative review is presented in addition to a quantitative synthesis of the pooled results where possible. The Cochrane risk of bias tool and the Newcastle-Ottawa Scale were used for the methodological quality assessment of the included studies. Eight studies were included in the qualitative synthesis in this review. Four studies were included in the quantitative synthesis. The en masse/miniscrew combination showed a statistically significant standardized mean difference regarding anchorage preservation, -2.55 mm (95% CI -2.99 to -2.11), and the amount of upper incisor retraction, -0.38 mm (95% CI -0.70 to -0.06), when compared to a two-step/conventional anchorage combination. Qualitative synthesis suggested that en masse retraction requires less time than two-step retraction with no difference in the amount of root resorption. Both en masse and two-step retraction methods are effective during the space closure phase. The en masse/miniscrew combination is superior to the two-step/conventional anchorage combination with regard to anchorage preservation and amount of retraction. Limited evidence suggests that anchorage reinforcement with a headgear produces similar results with both retraction methods. Limited evidence also suggests that en masse retraction may require less time and that no significant differences exist in the amount of root resorption between the two methods.
Modeling the stepping mechanism in negative lightning leaders
NASA Astrophysics Data System (ADS)
Iudin, Dmitry; Syssoev, Artem; Davydenko, Stanislav; Rakov, Vladimir
2017-04-01
It is well known that negative leaders develop in a stepped manner via the mechanism of so-called space leaders, in contrast to positive leaders, which propagate continuously. Although this fact has been known for about a hundred years, no plausible model explaining this asymmetry had been developed until now. In this study we suggest a model of the stepped development of the negative lightning leader which for the first time allows numerical simulation of its evolution. The model is based on a probabilistic approach and a description of the temporal evolution of the discharge channels. One of the key features of our model is that it accounts for the presence of so-called space streamers/leaders, which play a fundamental role in the formation of the negative leader's steps. Their appearance becomes possible because the model accounts for the potential influence of the space charge injected into the discharge gap by the streamer corona. The model takes into account the asymmetry of the properties of negative and positive streamers, based on the fact, well known from numerous laboratory measurements, that positive streamers need an electric field about half as strong as negative ones to appear and propagate. Extinction of the conducting channel as a possible path of its evolution is also taken into account, which allows us to describe the formation of the leader channel's sheath. To verify the morphology and characteristics of the model discharge, we use the results of high-speed video observations of natural negative stepped leaders. We conclude that the key properties of the model and natural negative leaders are very similar.
GEM: a dynamic tracking model for mesoscale eddies in the ocean
NASA Astrophysics Data System (ADS)
Li, Qiu-Yang; Sun, Liang; Lin, Sheng-Fu
2016-12-01
The Genealogical Evolution Model (GEM) presented here is an efficient logical model used to track dynamic evolution of mesoscale eddies in the ocean. It can distinguish between different dynamic processes (e.g., merging and splitting) within a dynamic evolution pattern, which is difficult to accomplish using other tracking methods. To this end, the GEM first uses a two-dimensional (2-D) similarity vector (i.e., a pair of ratios of overlap area between two eddies to the area of each eddy) rather than a scalar to measure the similarity between eddies, which effectively solves the "missing eddy" problem (temporarily lost eddy in tracking). Second, for tracking when an eddy splits, the GEM uses both "parent" (the original eddy) and "child" (eddy split from parent) and the dynamic processes are described as the birth and death of different generations. Additionally, a new look-ahead approach with selection rules effectively simplifies computation and recording. All of the computational steps are linear and do not include iteration. Given the pixel number of the target region L, the maximum number of eddies M, the number N of look-ahead time steps, and the total number of time steps T, the total computer time is O(LM(N + 1)T). The tracking of each eddy is very smooth because we require that the snapshots of each eddy on adjacent days overlap one another. Although eddy splitting or merging is ubiquitous in the ocean, they have different geographic distributions in the North Pacific Ocean. Both the merging and splitting rates of the eddies are high, especially at the western boundary, in currents and in "eddy deserts". The GEM is useful not only for satellite-based observational data, but also for numerical simulation outputs. It is potentially useful for studying dynamic processes in other related fields, e.g., the dynamics of cyclones in meteorology.
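The 2-D similarity vector at the heart of the GEM is simple to state in code. Below is a minimal sketch, assuming eddies are available as boolean pixel masks on a common grid; the function name and toy masks are illustrative, not from the paper.

```python
import numpy as np

def similarity_vector(mask_a: np.ndarray, mask_b: np.ndarray) -> tuple:
    """Pair of ratios of the overlap area to the area of each eddy."""
    overlap = np.logical_and(mask_a, mask_b).sum()
    return overlap / mask_a.sum(), overlap / mask_b.sum()

# Two 5x5 toy eddies that partially overlap (4 shared pixels of 9 each).
a = np.zeros((5, 5), dtype=bool); a[1:4, 1:4] = True
b = np.zeros((5, 5), dtype=bool); b[2:5, 2:5] = True
print(similarity_vector(a, b))  # (0.444..., 0.444...)
```

Because both components of the vector are compared against thresholds, rather than being collapsed into one scalar, a small eddy absorbed by a large one can be distinguished from a near-identical pair, which is what mitigates the "missing eddy" problem.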
A model for evaluating stream temperature response to climate change scenarios in Wisconsin
Westenbroek, Stephen M.; Stewart, Jana S.; Buchwald, Cheryl A.; Mitro, Matthew G.; Lyons, John D.; Greb, Steven
2010-01-01
Global climate change is expected to alter temperature and flow regimes for streams in Wisconsin over the coming decades. Stream temperature will be influenced not only by the predicted increases in average air temperature, but also by changes in baseflow due to changes in precipitation patterns and amounts. In order to evaluate future stream temperature and flow regimes in Wisconsin, we have integrated two existing models to generate a water temperature time series at a regional scale for thousands of stream reaches where site-specific temperature observations do not exist. The approach uses the US Geological Survey (USGS) Soil-Water-Balance (SWB) model, along with a recalibrated version of an existing artificial neural network (ANN) stream temperature model. The ANN model simulates stream temperatures on the basis of landscape variables such as land use and soil type, and also includes climate variables such as air temperature and precipitation amounts. The existing ANN model includes a landscape variable called DARCY designed to reflect the potential for groundwater recharge in the contributing area for a stream segment. SWB tracks soil moisture and potential recharge at a daily time step, providing a way to link changing climate patterns and precipitation amounts over time to baseflow volumes, and presumably to stream temperatures. The recalibrated ANN incorporates SWB-derived estimates of potential recharge to supplement the static estimates of groundwater flow potential derived from a topographically based model (DARCY). SWB and the recalibrated ANN will be supplied with climate drivers from a suite of general circulation models and emissions scenarios, enabling resource managers to evaluate possible changes in stream temperature regimes for Wisconsin.
Study of chromatic adaptation using memory color matches, Part II: colored illuminants.
Smet, Kevin A G; Zhai, Qiyan; Luo, Ming R; Hanselaer, Peter
2017-04-03
In a previous paper, 12 corresponding color data sets were derived for 4 neutral illuminants using the long-term memory colors of five familiar objects. The data were used to test several linear (one-step and two-step von Kries, RLAB) and nonlinear (Hunt and Nayatani) chromatic adaptation transforms (CAT). This paper extends that study to a total of 156 corresponding color sets by including 9 more colored illuminants: 2 with low and 2 with high correlated color temperatures, as well as 5 representing high-chroma adaptive conditions. As in the previous study, a two-step von Kries transform whereby the degree of adaptation D is optimized to minimize the ΔEu'v' prediction errors outperformed all other tested models for both the memory color and literature corresponding color sets, with prediction errors lower for the memory color set. Most of the transforms tested, except the two- and one-step von Kries models with optimized D, showed large errors for corresponding color subsets that contained non-neutral adaptive conditions, as all of them tended to overestimate the effective degree of adaptation in this study. The sensor space primaries in which the adaptation is performed were found to have little impact compared to the choice of model. Finally, the effective degree of adaptation for the 13 illumination conditions (4 neutral + 9 colored) was successfully modelled using a bivariate Gaussian in a MacLeod-Boynton-like chromaticity diagram.
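For readers unfamiliar with the transforms being compared, a minimal sketch of a von Kries adaptation with an explicit degree of adaptation D follows; the LMS white points, function names, and the choice of reference illuminant in the two-step variant are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def von_kries(lms, white_src, white_dst, D=1.0):
    """One-step von Kries: per-cone gains, blended by degree of adaptation D."""
    gain = D * (np.asarray(white_dst) / np.asarray(white_src)) + (1.0 - D)
    return gain * np.asarray(lms)

def von_kries_two_step(lms, white_src, white_ref, white_dst, D_src=1.0, D_dst=1.0):
    """Two-step variant: adapt source -> reference, then reference -> destination."""
    return von_kries(von_kries(lms, white_src, white_ref, D_src),
                     white_ref, white_dst, D_dst)

stim = [0.7, 0.6, 0.3]  # cone excitations under the source illuminant
print(von_kries_two_step(stim, [0.9, 1.0, 0.8], [1.0, 1.0, 1.0],
                         [0.95, 1.0, 1.1], D_src=0.8, D_dst=0.8))
```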
The decoupling of the glass transitions in the two-component p-spin spherical model
NASA Astrophysics Data System (ADS)
Ikeda, Harukuni; Ikeda, Atsushi
2016-07-01
Binary mixtures of large and small particles with a disparate size ratio exhibit a rich phenomenology at their glass transition points. In order to gain insights on such systems, we introduce and study a two-component version of the p-spin spherical spin glass model. We employ the replica method to calculate the free energy and the phase diagram. We show that when the strengths of the interactions of each component are not widely separated, the model has only one glass phase characterized by the conventional one-step replica symmetry breaking. However when the strengths of the interactions are well separated, the model has three glass phases depending on the temperature and component ratio. One is the ‘single’ glass phase in which only the spins of one component are frozen while the spins of the other component remain mobile. This phase is characterized by the one-step replica symmetry breaking. The second is the ‘double’ glass phase obtained by cooling the single glass phase further, in which the spins of the remaining mobile component are also frozen. This phase is characterized by the two-step replica symmetry breaking. The third is also the ‘double’ glass phase, which, however, is formed by the simultaneous freezing of the spins of both components at the same temperatures and is characterized by the one-step replica symmetry breaking. We discuss the implications of these results for the glass transitions of binary mixtures.
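For orientation, the standard (one-component) p-spin spherical model that the paper generalizes is defined by a random p-body Hamiltonian on spins subject to a spherical constraint; in the two-component version the coupling strength depends on which species the p spins belong to. A sketch of the one-component definition, in standard notation rather than the paper's:

```latex
H = -\!\!\sum_{1 \le i_1 < \cdots < i_p \le N}\!\! J_{i_1 \cdots i_p}\, s_{i_1} \cdots s_{i_p},
\qquad
\overline{J_{i_1 \cdots i_p}^{\,2}} = \frac{p!\, J^2}{2 N^{p-1}},
\qquad
\sum_{i=1}^{N} s_i^2 = N .
```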
Jena, N R; Mishra, P C
2005-07-28
Mechanisms of formation of the mutagenic product 8-oxoguanine (8OG) due to reactions of guanine with two separate OH* radicals and with H2O2 were investigated at the B3LYP/6-31G, B3LYP/6-311++G, and B3LYP/AUG-cc-pVDZ levels of theory. Single-point energy calculations were carried out with the MP2/AUG-cc-pVDZ method employing the geometries optimized at the B3LYP/AUG-cc-pVDZ level. Solvent effects were treated using the PCM and IEF-PCM models. Reactions of two separate OH* radicals and of H2O2 with the C2 position of 5-methylimidazole (5MI) were investigated, taking 5MI as a model for reactions at the C8 position of guanine. The addition reaction of an OH* radical at the C8 position of guanine is found to be nearly barrierless, while the corresponding adduct is quite stable. The reaction of a second OH* radical at the C8 position of guanine leading to the formation of 8OG complexed with a water molecule can take place according to two different mechanisms, involving two steps each. According to one mechanism, at the first step, 8-hydroxyguanine (8OHG) complexed with a water molecule is formed, while at the second step, 8OHG is tautomerized to 8OG. In the other mechanism, at the first step, an intermediate complex (IC) with a water molecule is formed, the five-membered ring of which is open, while at the second step, the five-membered ring is closed and a hydrogen-bonded complex of 8OG with a water molecule is formed. The reaction of H2O2 with guanine leading to the formation of 8OG complexed with a water molecule can also take place according to two different mechanisms having two steps each. At the first step of one mechanism, H2O2 is dissociated into two OH* groups that react with guanine to form the same IC as that formed in the reaction with two separate OH* radicals, and the subsequent step of this mechanism is also the same as that of the reaction of guanine with two separate OH* radicals. At the first step of the other mechanism of the reaction of guanine with H2O2, the latter molecule is dissociated into a hydrogen atom and an OOH* group, which become bonded to the N7 and C8 atoms of guanine, respectively. At the second step of this mechanism, the OOH* group is dissociated into an oxygen atom and an OH* group; the former becomes bonded to the C8 atom of guanine while the latter abstracts the H8 atom bonded to C8, thus producing 8OG complexed with a water molecule. Solvent effects of the aqueous medium on certain reaction barriers and released energies are appreciable. 5MI works as a satisfactory model for a qualitative study of the reactions of two separate OH* radicals or H2O2 occurring at the C8 position of guanine.
Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.
2018-01-09
Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate "shadow" potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.
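Schematically, and in the zero-auxiliary-mass limit commonly used in the XLBOMD literature (my transcription, not the paper's exact equations), the extended Lagrangian and the resulting auxiliary dynamics take the form:

```latex
\mathcal{L} = \frac{1}{2}\sum_{I} M_I \dot{\mathbf{R}}_I^2 - \mathcal{U}(\mathbf{R}, n)
 + \frac{\mu}{2}\,\dot{n}^2 - \frac{\mu\,\omega^2}{2}\,\bigl(q[n] - n\bigr)^2,
\qquad
\ddot{n} = \omega^2 \bigl(q[n] - n\bigr),
```

where $\mathcal{U}(\mathbf{R}, n)$ is the shadow potential, $n$ the auxiliary electronic degrees of freedom, and $q[n]$ the approximate ground state they oscillate about; the paper's contribution is a systematic construction of $\mathcal{U}$ that matches the reference potential to any order, plus a dissipation scheme for $n$.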
Santarelli, M; Barra, S; Sagnelli, F; Zitella, P
2012-11-01
The paper deals with the energy analysis and optimization of a complete biomass-to-electricity energy pathway, starting from raw biomass and proceeding to the production of renewable electricity. The first step (biomass-to-biogas) is based on a real pilot plant located in Environment Park S.p.A. (Torino, Italy) with three main stages ((1) impregnation; (2) steam explosion; (3) enzymatic hydrolysis), completed by a two-step anaerobic fermentation. For the second step (biogas-to-electricity), the paper considers two technologies: internal combustion engines (ICE) and a stack of solid oxide fuel cells (SOFC). First, the complete pathway was modeled and validated against experimental data. The model was then used for an analysis and optimization of the complete thermo-chemical and biological process, with the objective of maximizing the energy balance at minimum consumption. The comparison between ICE and SOFC shows the better performance of the integrated plants based on SOFC.
Boulanger, Eliot; Thiel, Walter
2012-11-13
Accurate quantum mechanical/molecular mechanical (QM/MM) treatments should account for MM polarization and properly include long-range electrostatic interactions. We report on a development that covers both these aspects. Our approach combines the classical Drude oscillator (DO) model for the electronic polarizability of the MM atoms with the generalized solvent boundary potential (GSBP) and the solvated macromolecule boundary potential (SMBP). These boundary potentials (BP) are designed to capture the long-range effects of the outer region of a large system on its interior. They employ a finite difference approximation to the Poisson-Boltzmann equation for computing electrostatic interactions and take into account outer-region bulk solvent through a polarizable dielectric continuum (PDC). This approach thus leads to fully polarizable three-layer QM/MM-DO/BP methods. As the mutual responses of each of the subsystems have to be taken into account, we propose efficient schemes to converge the polarization of each layer simultaneously. For molecular dynamics (MD) simulations using GSBP, this is achieved by considering the MM polarizable model as a dynamical degree of freedom, so that contributions from the boundary potential can be evaluated for a frozen state of polarization at every time step. For geometry optimizations using SMBP, we propose a dual self-consistent field approach for relaxing the Drude oscillators to their ideal positions and converging the QM wave function with the proper boundary potential. The chosen coupling schemes are evaluated with a test system consisting of a glycine molecule in a water ball. Both boundary potentials are capable of properly reproducing the gradients at the inner-region atoms and the Drude oscillators. We show that the effect of the Drude oscillators must be included in all terms of the boundary potentials to obtain accurate results and that the use of a high dielectric constant for the PDC does not lead to a polarization catastrophe of the DO models. Optimum values for some key parameters are discussed. We also address the efficiency of these approaches compared to standard QM/MM-DO calculations without BP. In the SMBP case, computation times can be reduced by around 40% for each step of a geometry optimization, with some variation depending on the chosen QM method. In the GSBP case, the computational advantages of using the boundary potential increase with system size and with the number of MD steps.
NASA Astrophysics Data System (ADS)
Rao, Mandava Mohana
2017-10-01
The ground resistance of high voltage substations must be as low as possible for safe grounding of their equipment during both normal and fault conditions. However, in gas insulated substations (GIS), even though the resistance is low, this alone does not ensure that the step and touch potentials of the grounding system remain within permissible levels. In the present study, an analytical model has been developed to calculate the ground resistance and the step and touch potentials of a grounding system used for GIS. Different models have been proposed for evaluating the number of grounding rods to be inserted into the ground. The effect of concrete foundations on the above performance parameters has been analyzed by considering various fault currents, soil/earth resistivities, and numbers of grounding rods. Finally, a design optimization of the GIS grounding system has been reported for fault currents on the order of 63 kA in soils of resistivity 100 Ω·m and above.
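The permissible levels referred to here are conventionally the IEEE Std 80 tolerable voltages; as a point of reference (my transcription, for the 50 kg body-weight case), they are commonly written as:

```latex
E_{\mathrm{touch},50} = \bigl(1000 + 1.5\,C_s\,\rho_s\bigr)\,\frac{0.116}{\sqrt{t_s}},
\qquad
E_{\mathrm{step},50} = \bigl(1000 + 6\,C_s\,\rho_s\bigr)\,\frac{0.116}{\sqrt{t_s}},
```

where $\rho_s$ is the surface-layer resistivity in $\Omega\cdot$m, $C_s$ the surface-layer derating factor, and $t_s$ the fault duration in seconds; the computed step and touch potentials of the grid must stay below these limits.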
A one-way shooting algorithm for transition path sampling of asymmetric barriers
NASA Astrophysics Data System (ADS)
Brotzakis, Z. Faidon; Bolhuis, Peter G.
2016-10-01
We present a novel transition path sampling shooting algorithm for the efficient sampling of complex (biomolecular) activated processes with asymmetric free energy barriers. The method employs a fictitious potential that biases the shooting point toward the transition state. The method is similar in spirit to the aimless shooting technique by Peters and Trout [J. Chem. Phys. 125, 054108 (2006)], but is targeted for use with the one-way shooting approach, which has been shown to be more effective than two-way shooting algorithms in systems dominated by diffusive dynamics. We illustrate the method on a 2D Langevin toy model, the association of two peptides and the initial step in dissociation of a β-lactoglobulin dimer. In all cases we show a significant increase in efficiency.
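To make the central idea concrete, here is a minimal sketch of picking a shooting point biased toward the transition state with a fictitious potential; the harmonic form of the bias, the collective variable, and all names are illustrative assumptions, and the actual acceptance rule must reweight for the bias to keep the path ensemble correct.

```python
import numpy as np

rng = np.random.default_rng(0)

def pick_shooting_frame(cv_along_path, cv_barrier_top, kappa):
    """Sample a frame index with probability ~ exp(-V_bias(frame)).

    A fictitious harmonic potential centred on the (estimated) barrier
    top makes frames near the transition state much more likely to be
    chosen as shooting points for the next one-way shooting move.
    """
    v_bias = 0.5 * kappa * (np.asarray(cv_along_path) - cv_barrier_top) ** 2
    w = np.exp(-v_bias)
    return rng.choice(len(w), p=w / w.sum())

# Toy path whose collective variable crosses a barrier located at cv = 0.
cv = np.linspace(-1.0, 1.0, 201)
print(pick_shooting_frame(cv, cv_barrier_top=0.0, kappa=30.0))
```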
Modeling synchronous voltage source converters in transmission system planning studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kosterev, D.N.
1997-04-01
A Voltage Source Converter (VSC) can be beneficial to power utilities in many ways. To evaluate the VSC performance in potential applications, the device has to be represented appropriately in planning studies. This paper addresses VSC modeling for EMTP, powerflow, and transient stability studies. First, the VSC operating principles are overviewed, and the device model for EMTP studies is presented. The ratings of VSC components are discussed, and the device operating characteristics are derived based on these ratings. A powerflow model is presented and various control modes are proposed. A detailed stability model is developed, and its step-by-step initialization procedure is described. A simplified stability model is also derived under stated assumptions. Finally, validation studies are performed to demonstrate performance of the developed stability models and to compare it with EMTP simulations.
Zuo, Peng; Li, XiuJun; Dominguez, Delfina C; Ye, Bang-Ce
2013-10-07
Infectious pathogens often cause serious public health concerns throughout the world. There is an increasing demand for simple, rapid and sensitive approaches for multiplexed pathogen detection. In this paper we have developed a polydimethylsiloxane (PDMS)/paper/glass hybrid microfluidic system integrated with aptamer-functionalized graphene oxide (GO) nano-biosensors for simple, one-step, multiplexed pathogen detection. The paper substrate used in this hybrid microfluidic system facilitated the integration of aptamer biosensors on the microfluidic biochip, and avoided the complicated surface treatment and aptamer probe immobilization required in a PDMS- or glass-only microfluidic system. Lactobacillus acidophilus was used as a model bacterium to develop the microfluidic platform, with a detection limit of 11.0 cfu mL(-1). We have also successfully extended this method to the simultaneous detection of two infectious pathogens - Staphylococcus aureus and Salmonella enterica. This method is simple and fast: the one-step 'turn on' pathogen assay in a ready-to-use microfluidic device takes only ~10 min to complete on the biochip. Furthermore, this microfluidic device has great potential for the rapid detection of a wide variety of other bacterial and viral pathogens.
Learning Strategy Instruction: Exploring the Potential of Metacognition.
ERIC Educational Resources Information Center
Mayo, Karen E.
1993-01-01
Focuses on one cognitive strategy, metacognition, and describes the success of this strategy with students of varying ages and abilities. Provides a six-step model for implementing strategy instruction in the classroom. (RS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wemhoff, A P; Burnham, A K; Nichols III, A L
The reduction of the number of reactions in kinetic models for both the HMX beta-delta phase transition and thermal cookoff provides an attractive alternative to traditional multi-stage kinetic models due to the reduced calibration effort required. In this study, we use the LLNL code ALE3D to provide calibrated kinetic parameters for a two-reaction bidirectional beta-delta HMX phase transition model based on Sandia Instrumented Thermal Ignition (SITI) and Scaled Thermal Explosion (STEX) temperature history curves, and a Prout-Tompkins cookoff model based on One-Dimensional Time to Explosion (ODTX) data. Results show that the two-reaction bidirectional beta-delta transition model presented here agrees as well with STEX and SITI temperature history curves as a reversible four-reaction Arrhenius model, yet requires an order of magnitude less computational effort. In addition, a single-reaction Prout-Tompkins model calibrated to ODTX data provides better agreement with ODTX data than a traditional multi-step Arrhenius model, and can require up to 90% fewer chemistry-limited time steps for low-temperature ODTX simulations. Manual calibration methods for the Prout-Tompkins kinetics provide much better agreement with ODTX experimental data than parameters derived from Differential Scanning Calorimetry (DSC) measurements at atmospheric pressure. The predicted surface temperature at explosion for STEX cookoff simulations is a weak function of the cookoff model used, and a reduction of up to 15% in chemistry-limited time steps can be achieved by neglecting the beta-delta transition for this type of simulation. Finally, the inclusion of the beta-delta transition model in the overall kinetics model can affect the predicted time to explosion by 1% for the traditional multi-step Arrhenius approach, and by up to 11% using a Prout-Tompkins cookoff model.
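For reference, the classic Prout-Tompkins rate law is autocatalytic in the reaction progress $\alpha$; practical cookoff fits typically use extended forms with reaction orders and a small initiation seed $q$ so the reaction can start from $\alpha = 0$ (the exact extended form used in ALE3D is not reproduced here; this is the generic shape):

```latex
\frac{d\alpha}{dt} = k\,\alpha\,(1-\alpha),
\qquad
\frac{d\alpha}{dt} = k\,(1-\alpha)^{n}\,\bigl(\alpha^{m} + q\bigr),
\qquad
k = A\,e^{-E_a/RT}.
```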
Rejoining and misrejoining of radiation-induced chromatin breaks. II. Biophysical Model
NASA Technical Reports Server (NTRS)
Wu, H.; Durante, M.; George, K.; Goodwin, E. H.; Yang, T. C.
1996-01-01
A biophysical model for the kinetics of the formation of radiation-induced chromosome aberrations is developed to account for recent experimental results obtained with a combination of the premature chromosome condensation (PCC) and fluorescence in situ hybridization (FISH) techniques. In this model, we consider the broken ends of DNA double-strand breaks (DSBs) to be reactants and make use of the interaction distance hypothesis. The repair/misrepair process between broken ends is suggested to consist of two steps: the first step represents the two break ends approaching each other, and the second step represents the enzymatic processes leading to DNA end-to-end rejoining. Only the second step is reflected in the kinetics observed in experiments using PCC. The model appears to be able to fit existing data for human cells. It is shown that the kinetics of the formation of chromosome aberrations can be explained by a single rate that characterizes both rejoining and misrejoining of DSBs, suggesting that repair and misrepair share the same mechanism. Fast repair (completed in minutes) in a subset of DSBs is suggested as an explanation for the complete exchanges observed with PCC in human lymphocytes immediately after irradiation. The fast repair component seems to be absent in human fibroblasts.
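The two-step picture maps onto consecutive first-order kinetics; a minimal sketch in my notation (not the authors'), with $B$ a free broken end, $P$ a paired intermediate, and $R$ the rejoined or misrejoined product:

```latex
B \xrightarrow{k_1} P \xrightarrow{k_2} R,
\qquad
\frac{d[B]}{dt} = -k_1 [B],
\qquad
\frac{d[P]}{dt} = k_1 [B] - k_2 [P],
\qquad
\frac{d[R]}{dt} = k_2 [P],
```

with only the second step ($k_2$) visible in PCC-based kinetics.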
Tsintou, Magdalini; Dalamagkas, Kyriakos; Makris, Nikos
2016-01-01
Regeneration and repair is the ultimate goal of therapeutics in trauma of the central nervous system (CNS). Stroke and spinal cord injury (SCI) are two highly prevalent CNS disorders that remain incurable, despite numerous research studies and the clinical need for effective treatments. Neural engineering is a diverse biomedical field that addresses these diseases using new approaches. Research in the field involves principally rodent models and biologically active, biodegradable hydrogels. Promising results have been reported in preclinical studies of CNS repair, demonstrating the great potential for the development of new treatments for brain, spinal cord, and peripheral nerve injury. Several obstacles stand in the way of clinical translation of neuroregeneration research. A key gap in translating research from rodent models to human applications is the scarcity of non-human primate studies, which constitute a critical bridging step. Applying injectable therapeutics and multimodal neuroimaging to stroke lesions in experimental rhesus monkey models is an avenue that a few research groups have begun to pursue. Understanding and assessing the changes that the injured brain or spinal cord undergoes after an intervention with biodegradable hydrogels in non-human primates represent critical preclinical research steps. Existing innovative models in non-human primates allow us to evaluate the potential of neural engineering and injectable hydrogels. The results of these preliminary studies will pave the way for translating this research into much-needed clinical therapeutic approaches. Cutting-edge imaging technology using Connectome scanners represents a tremendous advancement, enabling the in vivo, detailed, high-resolution evaluation of these therapeutic interventions in experimental animals. Most importantly, it also allows quantifiable and clinically meaningful correlations with humans, increasing the translatability of these innovations to the bedside.
NASA Astrophysics Data System (ADS)
Tshipa, M.; Winkoun, D. P.; Nijegorodov, N.; Masale, M.
2018-04-01
Theoretical investigations are carried out of the binding energies of a donor charge assumed to be located exactly at the center of symmetry of two concentric cylindrical quantum wires. The intrinsic confinement potential in the region of the inner cylinder is modeled by any one of three profiles: simple parabolic, shifted parabolic, or the polynomial potential. The potential inside the shell is taken to be a potential step or a potential barrier of finite height. Additional confinement of the charge carriers is due to the vector potential of the axially applied magnetic field. It is found that the binding energies attain maxima in their variations with the radius of the inner cylinder, irrespective of the particular intrinsic confinement of the inner cylinder. As the radius of the inner cylinder is increased further, the binding energies corresponding to either the parabolic or the polynomial potential attain minima at some critical core radius. Finally, as anticipated, the binding energies increase with increasing parallel applied magnetic field. This behaviour of the binding energies holds irrespective of the particular electric potential of the nanostructure or its specific dimensions.
Heat Transfer on a Flat Plate with Uniform and Step Temperature Distributions
NASA Technical Reports Server (NTRS)
Bahrami, Parviz A.
2005-01-01
Heat transfer associated with turbulent flow on a step-heated or cooled section of a flat plate at zero angle of attack with an insulated starting section was computationally modeled using the GASP Navier-Stokes code. The algebraic eddy viscosity model of Baldwin-Lomax and two turbulent two-equation models, the k-omega model and the Shear Stress Transport (SST) model, were employed. The variations from uniformity of the imposed experimental temperature profile were incorporated in the computations. The computations yielded satisfactory agreement with the experimental results for all three models. The Baldwin-Lomax model showed the closest agreement in heat transfer, whereas the SST model was higher and the k-omega model higher still than the experiments. In addition to the step temperature distribution case, computations were also carried out for a uniformly heated or cooled plate. The SST model showed the closest agreement with the von Karman analogy, whereas the k-omega model was higher and the Baldwin-Lomax model was lower.
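The von Karman analogy used as the benchmark relates the Stanton number to the skin-friction coefficient; one common statement of it (my transcription, with $Pr$ the Prandtl number) is:

```latex
St = \frac{C_f/2}{1 + 5\sqrt{C_f/2}\;\Bigl[(Pr - 1) + \ln\!\Bigl(\dfrac{5\,Pr + 1}{6}\Bigr)\Bigr]} .
```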
Theoretical study of gas hydrate decomposition kinetics--model development.
Windmeier, Christoph; Oellrich, Lothar R
2013-10-10
In order to provide an estimate of the order of magnitude of intrinsic gas hydrate dissolution and dissociation kinetics, the "Consecutive Desorption and Melting Model" (CDM) is developed by applying only theoretical considerations. The process of gas hydrate decomposition is assumed to comprise two consecutive and repetitive quasi-chemical reaction steps: desorption of the guest molecule followed by local solid-body melting. The individual kinetic steps are modeled according to the "Statistical Rate Theory of Interfacial Transport" and the Wilson-Frenkel approach. All required model parameters that would otherwise be missing are linked directly to geometric considerations and a thermodynamic gas hydrate equilibrium model.
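For orientation, the Wilson-Frenkel approach expresses the interface velocity as a kinetic prefactor times a driving-force factor; a generic statement (my notation, not the paper's calibrated form) is:

```latex
v = K(T)\,\Bigl[\,1 - \exp\!\Bigl(-\frac{\Delta\mu}{k_B T}\Bigr)\Bigr],
```

where $K(T)$ is a thermally activated kinetic coefficient and $\Delta\mu$ the chemical-potential difference driving the local melting; the desorption step is treated analogously via statistical rate theory.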
Renukaradhya, Gourapura J; Narasimhan, Balaji; Mallapragada, Surya K
2015-12-10
Vaccine development has had a huge impact on human health. However, there is a significant need to develop efficacious vaccines for several existing as well as emerging respiratory infectious diseases. Several challenges need to be overcome to develop efficacious vaccines with translational potential. This review focuses on two aspects of overcoming these barriers: (1) the development of nanoparticle-based vaccines, and (2) the choice of suitable animal models for respiratory infectious diseases that will allow for translation. Nanoparticle-based vaccines, including subunit vaccines involving synthetic and/or natural polymeric adjuvants and carriers, as well as those based on virus-like particles, offer several key advantages that help overcome the barriers to effective vaccine development. These include the ability to deliver combinations of antigens, target the vaccine formulation to specific immune cells, enable cross-protection against divergent strains, act as adjuvants or immunomodulators, allow for sustained release of antigen, enable single-dose delivery, and potentially obviate the cold chain. While mouse models have provided several important insights into the mechanisms of infectious diseases, they are often a limiting step in the translation of new vaccines to the clinic. An overview of the different animal models involved in vaccine research for respiratory infections, with the advantages and disadvantages of each model, is discussed. Taken together, advances in nanotechnology, combined with the right animal models for evaluating vaccine efficacy, have the potential to revolutionize vaccine development for respiratory infections.
One-step fabrication of multifunctional micromotors.
Gao, Wenlong; Liu, Mei; Liu, Limei; Zhang, Hui; Dong, Bin; Li, Christopher Y
2015-09-07
Although artificial micromotors have undergone tremendous progress in recent years, their fabrication normally requires complex steps or expensive equipment. In this paper, we report a facile one-step method based on an emulsion solvent evaporation process to fabricate multifunctional micromotors. By simultaneously incorporating various components into an oil-in-water droplet, upon emulsification and solidification, a sphere-shaped, asymmetric, and multifunctional micromotor is formed. Some of the attractive functions of this model micromotor include autonomous movement in high ionic strength solution, remote control, enzymatic disassembly and sustained release. This one-step, versatile fabrication method can be easily scaled up and therefore may have great potential in mass production of multifunctional micromotors for a wide range of practical applications.
Accuracy of Multiple Pour Cast from Various Elastomer Impression Methods
Saad Toman, Majed; Ali Al-Shahrani, Abdullah; Ali Al-Qarni, Abdullah
2016-01-01
An accurate duplicate cast obtained from a single impression reduces the professional's clinical time, patient inconvenience, and extra material cost. A stainless steel working cast model assembly consisting of two abutments and one pontic area was fabricated. Two sets of six custom aluminum trays each were fabricated, one with a five mm spacer and the other with a two mm spacer. The impression methods evaluated during the study were additional silicone putty reline (two steps), heavy-light body (one step), monophase (one step), and polyether (one step). Type IV gypsum casts were poured at intervals of one hour, 12 hours, 24 hours, and 48 hours. The resultant casts were measured with a traveling microscope for comparative dimensional accuracy. The data obtained were subjected to an Analysis of Variance test at a significance level of <0.05. The dies obtained from the two-step putty reline impression technique had a percentage variation in height of −0.36 to −0.97%, while the diameter increased by 0.40–0.90%. The corresponding values for the one-step heavy-light body impression dies, additional silicone monophase impressions, and polyether were −0.73 to −1.21%, −1.34%, and −1.46% for the height and 0.50–0.80%, 1.20%, and −1.30% for the width, respectively.
Robust model predictive control for multi-step short range spacecraft rendezvous
NASA Astrophysics Data System (ADS)
Zhu, Shuyi; Sun, Ran; Wang, Jiaolong; Wang, Jihe; Shao, Xiaowei
2018-07-01
This work presents a robust model predictive control (MPC) approach for the multi-step short range spacecraft rendezvous problem. During the short range phase concerned, the chaser is assumed to be initially outside the line-of-sight (LOS) cone. The rendezvous process therefore naturally includes two steps: the first step is to transfer the chaser into the LOS cone, and the second step is to transfer the chaser into the aimed region with its motion confined within the LOS cone. A novel MPC framework named Mixed MPC (M-MPC) is proposed, which combines the Variable-Horizon MPC (VH-MPC) framework and the Fixed-Instant MPC (FI-MPC) framework. The M-MPC framework enables the optimization for the two steps to be implemented jointly rather than separated artificially, and its computation workload is acceptable for the usually low-power processors onboard spacecraft. Then, considering that disturbances including modeling error, sensor noise, and thrust uncertainty may induce undesired constraint violations, a robust technique is developed and attached to the above M-MPC framework to form a robust M-MPC approach. The robust technique is based on the chance-constrained idea, which ensures that constraints are satisfied with a prescribed probability. It improves on the robust technique proposed by Gavilan et al. because it eliminates unnecessary conservativeness by explicitly incorporating known statistical properties of the navigation uncertainty. The efficacy of the robust M-MPC approach is shown in a simulation study.
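For Gaussian disturbances, the chance-constrained idea reduces to a deterministic tightening of each linear constraint. A minimal sketch follows (a standard result, not the paper's full formulation; all names are mine):

```python
import numpy as np
from scipy.stats import norm

def tightened_rhs(a, b, Sigma, delta):
    """Right-hand side of the deterministic surrogate of a chance constraint.

    For disturbance w ~ N(0, Sigma), requiring P(a @ (x + w) <= b) >= 1 - delta
    is equivalent to a @ x <= b - z * sqrt(a @ Sigma @ a), z = Phi^{-1}(1 - delta).
    """
    a = np.asarray(a, dtype=float)
    z = norm.ppf(1.0 - delta)
    return b - z * np.sqrt(a @ Sigma @ a)

# Keep a LOS-cone-style constraint a @ x <= 1 with 99% confidence.
a = np.array([1.0, 0.5])
Sigma = 0.01 * np.eye(2)
print(tightened_rhs(a, 1.0, Sigma, delta=0.01))
```

Using the actual navigation covariance here, rather than a worst-case bound, is what removes the unnecessary conservativeness mentioned above.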
Frazier, Zachary
2012-01-01
Particle-based Brownian dynamics simulations offer the opportunity to simulate not only the diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for an accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling.
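A toy sketch of pre-displacement collision detection in a Brownian dynamics step is given below; it flags overlaps arising from the proposed free-diffusion move so that a reaction can be attempted instead. It is illustrative only and omits the paper's detailed-balance machinery; all names are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def bd_step_with_precheck(pos, radii, D, dt):
    """Propose a free-diffusion move and detect collisions before accepting it."""
    trial = pos + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=pos.shape)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(trial[i] - trial[j]) < radii[i] + radii[j]:
                # Potential collision: a reaction would be attempted here;
                # in this sketch the displacement is simply rejected.
                return pos, (i, j)
    return trial, None

pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
pos, hit = bd_step_with_precheck(pos, radii=np.array([0.5, 0.5]), D=1.0, dt=0.01)
print(hit)
```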
A hybrid skull-stripping algorithm based on adaptive balloon snake models
NASA Astrophysics Data System (ADS)
Liu, Hung-Ting; Sheu, Tony W. H.; Chang, Herng-Hua
2013-02-01
Skull-stripping is one of the most important preprocessing steps in neuroimage analysis. We proposed a hybrid algorithm based on an adaptive balloon snake model to handle this challenging task. The proposed framework consists of two stages: first, the fuzzy possibilistic c-means (FPCM) is used for voxel clustering, which provides a labeled image for the snake contour initialization. In the second stage, the contour is initialized outside the brain surface based on the FPCM result and evolves under the guidance of the balloon snake model, which drives the contour with an adaptive inward normal force to capture the boundary of the brain. The similarity indices indicate that our method outperformed the BSE and BET methods in skull-stripping the MR image volumes in the IBSR data set. Experimental results show the effectiveness of this new scheme and potential applications in a wide variety of skull-stripping applications.
3D Data Acquisition Based on OpenCV for Close-Range Photogrammetry Applications
NASA Astrophysics Data System (ADS)
Jurjević, L.; Gašparović, M.
2017-05-01
Development of technology in the areas of cameras, computers, and algorithms for the 3D reconstruction of objects from images has resulted in the increased popularity of photogrammetry. Algorithms for 3D model reconstruction are so advanced that almost anyone can make a 3D model of a photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for the purposes of close-range photogrammetry applications, based on open-source technologies. All steps of obtaining a 3D point cloud are covered in this paper. Special attention is given to camera calibration, for which a two-step calibration process is used. Both the presented algorithm and the accuracy of the point cloud are tested by calculating the spatial difference between reference and produced point clouds. During algorithm testing, the robustness and swiftness of obtaining 3D data were noted, and usage of this and similar algorithms certainly has a lot of potential in real-time applications. That is the reason why this research can find application in architecture, spatial planning, protection of cultural heritage, forensics, mechanical engineering, traffic management, medicine and other fields.
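As an illustration of the open-source pipeline the paper builds on, here is a minimal OpenCV chessboard calibration sketch in Python; the file names and the 9x6 board size are hypothetical placeholders, and the paper's specific two-step calibration procedure is not reproduced here.

```python
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in ["calib_01.jpg", "calib_02.jpg"]:     # hypothetical images
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms, "\nCamera matrix:\n", K)
```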
Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai
2016-01-01
Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which input and output binary digital two-dimensional (2D) images are transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Owing to computing capabilities originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power-load forecasting dataset from the Global Energy Forecasting Competition 2012. PMID:27281032
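A minimal sketch of the kind of decimal-to-binary image encoding the abstract describes; the bit width, scaling factor, and row-per-value layout are assumptions for illustration, since the paper's exact encoding is not given here:

```python
import numpy as np

def encode_to_binary_image(values, bits=16, scale=1.0):
    """Encode a sequence of decimal values as a binary 2D image:
    one row per value, one column per bit (most significant first)."""
    ints = np.round(np.asarray(values) * scale).astype(np.int64)
    rows = [[(v >> (bits - 1 - b)) & 1 for b in range(bits)] for v in ints]
    return np.array(rows, dtype=np.uint8)

# Hypothetical hourly loads in MW, scaled to keep one decimal place.
loads = [512.3, 498.7, 530.1, 615.9]
img = encode_to_binary_image(loads, bits=16, scale=10.0)
print(img.shape)  # (4, 16): a binary image ready for a CNN input layer
```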
Inexact hardware for modelling weather & climate
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, Tim
2014-05-01
The use of stochastic processing hardware and low-precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low-precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low-precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that neither approach to inexact calculation substantially affects the quality of the model simulations, provided the inexactness is restricted to act only on the smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
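Low-precision arithmetic of the general kind studied here can be emulated in software by discarding mantissa bits; a minimal sketch, where the choice of 10 retained bits is arbitrary:

```python
import numpy as np

def truncate_mantissa(x, keep_bits):
    """Emulate reduced-precision arithmetic by zeroing the low-order
    mantissa bits of float64 values (a crude software analogue of the
    hardware trade-off discussed in the abstract)."""
    as_int = np.asarray(x, dtype=np.float64).view(np.int64)
    mask = ~np.int64((1 << (52 - keep_bits)) - 1)  # float64 has 52 mantissa bits
    return (as_int & mask).view(np.float64)

x = np.linspace(0.0, 1.0, 5)
print(truncate_mantissa(np.sin(x), keep_bits=10))  # ~10-bit significand
```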
NASA Astrophysics Data System (ADS)
Janardhanan, Vinod M.; Deutschmann, Olaf
Direct internal reforming in a solid oxide fuel cell (SOFC) increases the overall efficiency of the system. The present study focuses on the chemical and electrochemical processes in an internally reforming anode-supported SOFC button cell running on humidified CH4 (3% H2O). The computational approach employs a detailed multi-step model for heterogeneous chemistry in the anode, a modified Butler-Volmer formalism for the electrochemistry and the Dusty Gas Model (DGM) for the porous media transport. Two-dimensional elliptic model equations are solved for a button cell configuration. The electrochemical model assumes hydrogen as the only electrochemically active species. The predicted cell performances are compared with experimental reports. The results show that model predictions are in good agreement with experimental observations except for the open-circuit potentials. Furthermore, the steam content in the anode feed stream is found to have a marked effect on the resulting overpotential losses and on the surface coverages of various species at the three-phase boundary.
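For orientation, the standard Butler-Volmer relation between activation overpotential and current density can be sketched as follows; the exchange current density, transfer coefficients, and temperature are generic illustrative values, and the paper's modified formalism differs in detail:

```python
import numpy as np

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def butler_volmer(eta, i0, alpha_a=0.5, alpha_c=0.5, T=1073.0):
    """Current density (A/m^2) from activation overpotential eta (V).
    i0, alpha_a, alpha_c and T are illustrative values, not the
    fitted parameters of the cited model."""
    return i0 * (np.exp(alpha_a * F * eta / (R * T))
                 - np.exp(-alpha_c * F * eta / (R * T)))

eta = np.linspace(-0.3, 0.3, 7)
print(butler_volmer(eta, i0=2000.0))
```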
Dynamical System Approach for Edge Detection Using Coupled FitzHugh-Nagumo Neurons.
Li, Shaobai; Dasmahapatra, Srinandan; Maharatna, Koushik
2015-12-01
The prospect of emulating the impressive computational capabilities of biological systems has led to considerable interest in the design of analog circuits that are potentially implementable in very-large-scale-integration CMOS technology and are guided by biologically motivated models. For example, simple image processing tasks, such as the detection of edges in binary and grayscale images, have been performed by networks of FitzHugh-Nagumo-type neurons using reaction-diffusion models. However, in these studies, the one-to-one mapping of image pixels to component neurons makes the size of the network a critical factor in any such implementation. In this paper, we develop a simplified version of the employed reaction-diffusion model in three steps. In the first step, we perform a detailed study to locate the threshold between quiescent and firing dynamics using continuous Lyapunov exponents from dynamical systems theory. Furthermore, we render the diffusion in the system anisotropic, with the degree of anisotropy set by the gradients of grayscale values in each image. The final step involves a simplification of the model achieved by eliminating the terms that couple the membrane potentials of adjacent neurons. We apply our technique to detect edges in data sets of artificially generated and real images, and we demonstrate that the performance is as good as, if not better than, that of previous methods without increasing the size of the network.
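A minimal sketch of a FitzHugh-Nagumo lattice driven by an image, assuming generic parameter values and simple isotropic nearest-neighbour coupling with periodic boundaries; the paper's model adds anisotropy and later removes the coupling terms:

```python
import numpy as np

def fhn_grid_step(v, w, stimulus, dt=0.05, a=0.7, b=0.8, eps=0.08, D=1.0):
    """One explicit Euler step of a FitzHugh-Nagumo lattice with
    nearest-neighbour diffusive coupling; parameters are generic."""
    lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
           np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v)
    dv = v - v**3 / 3.0 - w + stimulus + D * lap
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

# Drive the lattice with a toy image so neurons near strong gradients fire.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
v = np.zeros_like(img); w = np.zeros_like(img)
for _ in range(200):
    v, w = fhn_grid_step(v, w, stimulus=img)
print(v.min(), v.max())
```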
NASA Astrophysics Data System (ADS)
Yao, Jianzhuang; Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo
2016-02-01
Extensive computational modeling and simulations have been carried out in the present study to uncover the fundamental reaction pathway for butyrylcholinesterase (BChE)-catalyzed hydrolysis of ghrelin, demonstrating that the acylation process of BChE-catalyzed hydrolysis of ghrelin follows an unprecedented single-step reaction pathway and that this single-step acylation process is rate-determining. The free energy barrier (18.8 kcal/mol) calculated for the rate-determining step is reasonably close to the experimentally derived free energy barrier (~19.4 kcal/mol), suggesting that the obtained mechanistic insights are reasonable. The single-step reaction pathway for the acylation is remarkably different from the well-known two-step acylation reaction pathway of numerous ester hydrolysis reactions catalyzed by serine esterases. This is the first demonstration that a single-step reaction pathway is possible for an ester hydrolysis reaction catalyzed by a serine esterase; therefore, one can no longer simply assume that the acylation process must follow the well-known two-step reaction pathway.
Smith, Lee; Sawyer, Alexia; Gardner, Benjamin; Seppala, Katri; Ucci, Marcella; Marmot, Alexi; Lally, Pippa; Fisher, Abi
2018-06-09
Habitual behaviours are learned responses that are triggered automatically by associated environmental cues. The unvarying nature of most workplace settings makes workplace physical activity a prime candidate for a habitual behaviour, yet the role of habit strength in occupational physical activity has not been investigated. The aims of the present study were to: (i) document occupational physical activity habit strength; and (ii) investigate associations between occupational activity habit strength and occupational physical activity levels. A sample of UK office-based workers (n = 116; 53% female, median age 40 years, SD 10.52) was fitted with activPAL accelerometers worn for 24 h on five consecutive days, providing an objective measure of occupational step counts, stepping time, sitting time, standing time and sit-to-stand transitions. A self-report index measured the automaticity of two occupational physical activities (“being active” (e.g., walking to printers and coffee machines) and “stair climbing”). Adjusted linear regression models investigated the association between occupational activity habit strength and objectively measured occupational step counts, stepping time, sitting time, standing time and sit-to-stand transitions. Eighty-one per cent of the sample reported habits for “being active”, and 62% reported habits for “stair climbing”. In adjusted models, reported habit strength for “being active” was positively associated with average occupational sit-to-stand transitions per hour (B = 0.340, 95% CI: 0.053 to 0.627, p = 0.021). “Stair climbing” habit strength was unexpectedly negatively associated with average hourly stepping time (B = −0.01, 95% CI: −0.01 to −0.00, p = 0.006) and average hourly occupational step count (B = −38.34, 95% CI: −72.81 to −3.88, p = 0.030), which may reflect that people with stronger stair-climbing habits compensate by walking fewer steps overall. Results suggest that stair climbing and office-based occupational activity can be habitual. Interventions might fruitfully promote habitual workplace activity, although, in light of potential compensation effects, such interventions should perhaps focus on promoting moderate-intensity activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel
2016-10-01
A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open-source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single-photon absorption by free carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high-intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
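The deterministic-then-stochastic splitting can be illustrated with a toy distribution-function update; both substeps below are stand-ins invented for illustration, not the paper's multi-rate equations or its Monte-Carlo collision operator:

```python
import numpy as np

rng = np.random.default_rng(1)

def deterministic_substep(f, dt, rate=0.1):
    """Stand-in for the deterministic multi-rate-equation update
    (ionization, electron-phonon coupling, photon absorption)."""
    return f + dt * (-rate * (f - f.mean()))  # relax toward the mean

def stochastic_substep(f, coll_prob=0.05):
    """Stand-in for Monte-Carlo electron-electron collisions: randomly
    exchange energy between sampled pairs of distribution bins."""
    n = len(f)
    for _ in range(int(coll_prob * n)):
        i, j = rng.integers(0, n, size=2)
        transfer = 0.5 * (f[i] - f[j])
        f[i] -= transfer; f[j] += transfer
    return f

f = rng.uniform(0.0, 1.0, 100)   # toy electron energy distribution
dt = 1e-3
for _ in range(1000):            # split-step loop: deterministic, then stochastic
    f = deterministic_substep(f, dt)
    f = stochastic_substep(f)
print(f.std())
```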
NASA Astrophysics Data System (ADS)
Mankoč Borštnik, Norma Susana
2017-05-01
More than 40 years ago the standard model made a successful new step in understanding the properties of fermion and boson fields. Now the next step is needed, one that would explain what the standard model and the cosmological models simply assume: a. The origin of the quantum numbers of the massless one-family members. b. The origin of families. c. The origin of the vector gauge fields. d. The origin of the Higgses and Yukawa couplings. e. The origin of the dark matter. f. The origin of the matter-antimatter asymmetry. g. The origin of the dark energy. h. And several other open problems. The spin-charge-family theory, a kind of Kaluza-Klein theory in (d = (2n - 1) + 1)-dimensional space-time, with d = (13 + 1) and two kinds of spin connection fields, which are the gauge fields of the two kinds of Clifford algebra objects anti-commuting with one another, may provide this much-needed next step. The talk presents: i. a short presentation of this theory; ii. a review of the achievements of this theory so far, including some not yet published results; iii. predictions for future experiments.
Phillips, Steven P.; Belitz, Kenneth
1991-01-01
The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
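The averaging methods compared in the calibration have simple closed forms; a sketch under assumed end-member values (the Kcoarse and Kfine numbers below are illustrative, not the study's calibrated values):

```python
import numpy as np

def equivalent_k(frac_coarse, k_coarse, k_fine, method):
    """Equivalent hydraulic conductivity of a cell from its coarse
    fraction and the end-member conductivities."""
    f = np.asarray(frac_coarse)
    if method == "arithmetic":
        return f * k_coarse + (1 - f) * k_fine
    if method == "geometric":
        return k_coarse**f * k_fine**(1 - f)
    if method == "harmonic":
        return 1.0 / (f / k_coarse + (1 - f) / k_fine)
    raise ValueError(method)

f = np.array([0.2, 0.5, 0.8])  # hypothetical coarse fractions
for m in ("arithmetic", "geometric", "harmonic"):
    print(m, equivalent_k(f, k_coarse=10.0, k_fine=0.01, method=m))
```

The ordering arithmetic ≥ geometric ≥ harmonic explains why the horizontal (flow-parallel) direction favours arithmetic averaging while the vertical direction favours the lower geometric or harmonic values.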
Lu, Aitao; Yang, Ling; Yu, Yanping; Zhang, Meichao; Shao, Yulan; Zhang, Honghong
2014-08-01
The present study used the event-related potential technique to investigate the nature of the linguistic effect on color perception. Four types of stimuli based on hue differences between a target color and a preceding color were used: zero hue step within-category color (0-WC); one hue step within-category color (1-WC); one hue step between-category color (1-BC); and two hue step between-category color (2-BC). The ERP results showed no significant effect of stimulus type in the 100-200 ms time window. However, in the 200-350 ms time window, ERP responses to the 1-WC target color overlapped with those to the 0-WC target color for right visual field (RVF) but not left visual field (LVF) presentation. For the 1-BC condition, ERP amplitudes were comparable in the two visual fields, both being significantly different from the 0-WC condition. The 2-BC condition showed the same pattern as the 1-BC condition. These results suggest that the categorical perception of color in the RVF is due to linguistic suppression of within-category color discrimination rather than between-category color enhancement, and that the effect is independent of early perceptual processes.
Velmurugu, Yogambigai; Vivas, Paula; Connolly, Mitchell; Kuznetsov, Serguei V; Rice, Phoebe A; Ansari, Anjum
2018-02-28
The dynamics and mechanism of how site-specific DNA-bending proteins initially interrogate potential binding sites prior to recognition have remained elusive for most systems. Here we present these dynamics for Integration Host factor (IHF), a nucleoid-associated architectural protein, using a μs-resolved T-jump approach. Our studies show two distinct DNA-bending steps during site recognition by IHF. While the faster (∼100 μs) step is unaffected by changes in DNA or protein sequence that alter affinity by >100-fold, the slower (1-10 ms) step is accelerated ∼5-fold when mismatches are introduced at DNA sites that are sharply kinked in the specific complex. The amplitudes of the fast phase increase when the specific complex is destabilized and decrease with increasing [salt], which increases specificity. Taken together, these results indicate that the fast phase is non-specific DNA bending while the slow phase, which responds only to changes in DNA flexibility at the kink sites, is specific DNA kinking during site recognition. Notably, the timescales for the fast phase overlap with one-dimensional diffusion times measured for several proteins on DNA, suggesting that these dynamics reflect partial DNA bending during interrogation of potential binding sites by IHF as it scans DNA.
A basket two-part model to analyze medical expenditure on interdependent multiple sectors.
Sugawara, Shinya; Wu, Tianyi; Yamanishi, Kenji
2018-05-01
This study proposes a novel statistical methodology to analyze expenditure on multiple medical sectors using consumer data. Conventionally, medical expenditure has been analyzed by two-part models, which separately consider the purchase decision and the amount of expenditure. We extend the traditional two-part models by adding a basket-analysis step for dimension reduction. This new step enables us to analyze complicated interdependence between multiple sectors without an identification problem. As an empirical application of the proposed method, we analyze data on 13 medical sectors from the Medical Expenditure Panel Survey. In comparison with the results of previous studies that analyzed the multiple sectors independently, our method provides more detailed implications of the impacts of individual socioeconomic status on the composition of joint purchases from multiple medical sectors, and it has better prediction performance.
Searching regional rainfall homogeneity using atmospheric fields
NASA Astrophysics Data System (ADS)
Gabriele, Salvatore; Chiaravalloti, Francesco
2013-03-01
The correct identification of homogeneous areas in regional rainfall frequency analysis is fundamental to ensure the best selection of the probability distribution and of the regional model, which produces low bias and low root mean square error in quantile estimation. In an attempt to identify spatially homogeneous rainfall regions, the paper explores a new approach based on meteo-climatic information. The results are verified ex post using standard homogeneity tests applied to the annual maximum daily rainfall series. The first step of the proposed procedure selects two different types of homogeneous large regions: convective macro-regions, which contain high values of the Convective Available Potential Energy index, normally associated with convective rainfall events, and stratiform macro-regions, which are characterized by low values of the Q-vector divergence index, associated with dynamic instability and stratiform precipitation. These macro-regions are identified using Hot Spot Analysis to emphasize clusters of extreme values of the indexes. In the second step, inside each identified macro-region, homogeneous sub-regions are found using kriging interpolation on the mean direction of the Vertically Integrated Moisture Flux. To check the proposed procedure, two detailed examples of homogeneous sub-regions are examined.
Accuracy of energy measurement and reversible operation of a microcanonical Szilard engine.
Bergli, Joakim
2014-04-01
In a recent paper [Vaikuntanathan and Jarzynski, Phys. Rev. E 83, 061120 (2011)], a model was introduced whereby work could be extracted from a thermal bath by measuring the energy of a particle that was thermalized by the bath and manipulating the potential of the particle in the appropriate way, depending on the measurement outcome. If Wextracted is the extracted work and Werasure the work that needs to be dissipated to erase the measured information in accordance with Landauer's principle, it was shown that Wextracted ≤ Werasure, in accordance with the second law of thermodynamics. Here we extend this work in two directions. First, we discuss how accurately the energy should be measured. By increasing the accuracy one can extract more work, but at the same time one obtains more information that has to be deleted. We discuss appropriate ways of optimizing the balance between the two and find optimal solutions. Second, whenever Wextracted is strictly less than Werasure, an irreversible step has been performed. We identify the irreversible step and propose a protocol that will achieve the same transition in a reversible way, increasing Wextracted so that Wextracted = Werasure.
Shin, Jin-Ha; Yun, Sook Young; Lee, Chang Hyoung; Park, Hwa-Sun; Suh, Su-Jeong
2015-11-01
Anodization of aluminum is generally divided into two types of anodic aluminum oxide structures depending on the electrolyte. In this study, an anodization process was carried out in two steps to obtain high dielectric strength and breakdown voltage. In the first step, high-purity Al evaporated on a Si wafer was anodized in an oxalic acid aqueous solution for various times at a constant temperature of 5 °C. In the second step, a citric acid aqueous solution was used to obtain a thickly grown sub-barrier layer. During the second anodization step, anodizing potentials over various ranges were applied at room temperature. The thickness of the sub-barrier layer in the porous matrix increased with the applied anodizing potential. The microstructures and the growth of the sub-barrier layer were observed with a scanning electron microscope (SEM) as the anodizing potential increased from 40 to 300 V. An impedance analyzer was used to observe the change in electrical properties, including the capacitance, dissipation factor, impedance, and equivalent series resistance (ESR), with increasing sub-barrier layer thickness. In addition, the breakdown voltage was measured. The results revealed that the dielectric strength improved with increasing sub-barrier layer thickness.
Comparing the efficacy of metronome beeps and stepping stones to adjust gait: steps to follow!
Bank, Paulina J M; Roerdink, Melvyn; Peper, C E
2011-03-01
Acoustic metronomes and visual targets have been used in rehabilitation practice to improve pathological gait. In addition, they may be instrumental in evaluating and training instantaneous gait adjustments. The aim of this study was to compare the efficacy of two cue types in inducing gait adjustments, viz. acoustic temporal cues in the form of metronome beeps and visual spatial cues in the form of projected stepping stones. Twenty healthy elderly (aged 63.2 ± 3.6 years) were recruited to walk on an instrumented treadmill at preferred speed and cadence, paced by either metronome beeps or projected stepping stones. Gait adaptations were induced using two manipulations: by perturbing the sequence of cues and by imposing switches from one cueing type to the other. Responses to these manipulations were quantified in terms of step-length and step-time adjustments, the percentage correction achieved over subsequent steps, and the number of steps required to restore the relation between gait and the beeps or stepping stones. The results showed that perturbations in a sequence of stepping stones were overcome faster than those in a sequence of metronome beeps. In switching trials, switching from metronome beeps to stepping stones was achieved faster than vice versa, indicating that gait was influenced more strongly by the stepping stones than the metronome beeps. Together these results revealed that, in healthy elderly, the stepping stones induced gait adjustments more effectively than did the metronome beeps. Potential implications for the use of metronome beeps and stepping stones in gait rehabilitation practice are discussed.
Predicting severe injury using vehicle telemetry data.
Ayoung-Chee, Patricia; Mack, Christopher D; Kaufman, Robert; Bulger, Eileen
2013-01-01
In 2010, the National Highway Traffic Safety Administration standardized collision data collected by event data recorders, which may help determine appropriate emergency medical service (EMS) response. Previous models (e.g., General Motors) predict severe injury (Injury Severity Score [ISS] > 15) using occupant demographics and collision data. Occupant information is not automatically available, and 12% of calls from advanced automatic collision notification providers are unanswered. To better inform EMS triage, our goal was to create a predictive model using only vehicle collision data. Using the National Automotive Sampling System Crashworthiness Data System data set, we included front-seat occupants in late-model vehicles (2000 and later) in nonrollover and rollover crashes in the years 2000 to 2010. Telematic (change in velocity, direction of force, seat belt use, vehicle type and curb weight, as well as multiple impact) and nontelematic variables (maximum intrusion, narrow impact, and passenger ejection) were included. Missing data were multiply imputed. The University of Washington model was tested to predict severe injury before application of guidelines (Step 0) and for occupants who did not meet Step 1 and Step 2 criteria (Step 3) of the Centers for Disease Control and Prevention Field Triage Guidelines. A probability threshold of 20% was chosen in accordance with Centers for Disease Control and Prevention recommendations. There were 28,633 crashes, involving 33,956 vehicles and 52,033 occupants, of whom 9.9% had severe injury. At Step 0, the University of Washington model sensitivity was 40.0% and the positive predictive value (PPV) was 20.7%. At Step 3, the sensitivity was 32.3% and the PPV was 10.1%. Model analysis excluding nontelematic variables decreased sensitivity and PPV. The sensitivity of the re-created General Motors model was 38.5% at Step 0 and 28.1% at Step 3. We designed a model using only vehicle collision data that was predictive of severe injury at collision notification and in the field and was comparable with an existing model. These models demonstrate the potential use of advanced automatic collision notification in planning EMS response. Prognostic study, level II.
A computational method for sharp interface advection.
Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje
2016-11-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source.
NASA Astrophysics Data System (ADS)
Aleardi, Mattia
2018-01-01
We apply a two-step probabilistic seismic-petrophysical inversion to the characterization of a clastic, gas-saturated reservoir located in the offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well-log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude-versus-angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step, a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a priori distribution is used to properly take into account the facies-dependent behavior of petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In both the synthetic and the field data tests, the very minor differences between the results obtained with the two RPMs, and the good match between the estimated properties and well-log information, confirm the applicability of the inversion approach and the suitability of both RPMs for reservoir characterization in the investigated area.
Generating Poetry Title Based on Semantic Relevance with Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Li, Z.; Niu, K.; He, Z. Q.
2017-09-01
Several approaches have been proposed to automatically generate Chinese classical poetry (CCP) in the past few years, but automatically generating the title of a CCP is still a difficult problem. The difficulties are mainly reflected in two aspects. First, the words used in CCP are very different from modern Chinese words, and there are no valid word segmentation tools. Second, the semantic relevance of characters in CCP exists not only within one sentence but also between the same positions of adjacent sentences, which is hard to grasp with traditional text summarization models. In this paper, we propose an encoder-decoder model for generating the title of a CCP. Our encoder is a convolutional neural network (CNN) with two kinds of filters. To capture commonly used words within one sentence, one kind of filter covers two characters horizontally at each step. The other covers two characters vertically at each step and can grasp the semantic relevance of characters between adjacent sentences. Experimental results show that our model is better than several other related models and can capture the semantic relevance of CCP more accurately.
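A toy PyTorch sketch of an encoder with these two filter shapes; the vocabulary size, embedding width, channel count, and pooling are assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class CCPEncoder(nn.Module):
    """Toy encoder with the two filter shapes described above: 1x2 filters
    scan character pairs within a line, 2x1 filters scan the same positions
    of adjacent lines. All sizes are illustrative."""
    def __init__(self, vocab=6000, emb=128, channels=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv_h = nn.Conv2d(emb, channels, kernel_size=(1, 2))
        self.conv_v = nn.Conv2d(emb, channels, kernel_size=(2, 1))
        self.pool = nn.AdaptiveMaxPool2d(1)

    def forward(self, poem_ids):           # (batch, lines, chars)
        x = self.embed(poem_ids)           # (batch, lines, chars, emb)
        x = x.permute(0, 3, 1, 2)          # (batch, emb, lines, chars)
        h = self.pool(torch.relu(self.conv_h(x))).flatten(1)
        v = self.pool(torch.relu(self.conv_v(x))).flatten(1)
        return torch.cat([h, v], dim=1)    # feature vector for a title decoder

poem = torch.randint(0, 6000, (1, 4, 7))   # quatrain, 7 characters per line
print(CCPEncoder()(poem).shape)            # torch.Size([1, 128])
```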
Traffic & safety statewide model and GIS modeling.
DOT National Transportation Integrated Search
2012-07-01
Several steps have been taken over the past two years to advance the Utah Department of Transportation (UDOT) safety initiative. Previous research projects began the development of a hierarchical Bayesian model to analyze crashes on Utah roadways. De...
NASA Astrophysics Data System (ADS)
Overduin, James; Everitt, Francis; Worden, Paul; Mester, John
2012-09-01
The Satellite Test of the Equivalence Principle (STEP) will advance experimental limits on violations of Einstein's equivalence principle from their present sensitivity of two parts in 10^13 to one part in 10^18 through multiple comparison of the motions of four pairs of test masses of different compositions in a drag-free earth-orbiting satellite. We describe the experiment, its current status and its potential implications for fundamental physics. Equivalence is at the heart of general relativity, our governing theory of gravity, and violations are expected in most attempts to unify this theory with the other fundamental interactions of physics, as well as in many theoretical explanations for the phenomenon of dark energy in cosmology. Detection of such a violation would be equivalent to the discovery of a new force of nature. A null result would be almost as profound, pushing upper limits on any coupling between standard-model fields and the new light degrees of freedom generically predicted by these theories down to unnaturally small levels.
Atomistics of vapour–liquid–solid nanowire growth
Wang, Hailong; Zepeda-Ruiz, Luis A.; Gilmer, George H.; Upmanyu, Moneesh
2013-01-01
The vapour–liquid–solid route and its variants are routinely used for scalable synthesis of semiconducting nanowires, yet the fundamental growth processes remain unknown. Here we employ atomic-scale computations based on model potentials to study the stability and growth of gold-catalysed silicon nanowires. Equilibrium studies uncover segregation at the solid-like surface of the catalyst particle, a liquid AuSi droplet, and a silicon-rich droplet–nanowire interface enveloped by heterogeneous truncating facets. Supersaturation of the droplets leads to rapid one-dimensional growth on the truncating facets and much slower nucleation-controlled two-dimensional growth on the main facet. Surface diffusion is suppressed, and the excess Si flux occurs through the droplet bulk, which, together with the Si-rich interface and contact line, lowers the nucleation barrier on the main facet. The ensuing step flow is modified by Au diffusion away from the step edges. Our study highlights key interfacial characteristics for morphological and compositional control of semiconducting nanowire arrays. PMID:23752586
Predicting tool life in turning operations using neural networks and image processing
NASA Astrophysics Data System (ADS)
Mikołajczyk, T.; Nowicki, K.; Bustillo, A.; Yu Pimenov, D.
2018-05-01
A two-step method is presented for the automatic prediction of tool life in turning operations. First, experimental data are collected for three cutting edges under the same constant processing conditions. In these experiments, the parameter of tool wear, VB, is measured with conventional methods and the same parameter is estimated using Neural Wear, a customized software package that combines flank wear image recognition and Artificial Neural Networks (ANNs). Second, an ANN model of tool life is trained with the data collected from the first two cutting edges and the subsequent model is evaluated on two different subsets for the third cutting edge: the first subset is obtained from the direct measurement of tool wear and the second is obtained from the Neural Wear software that estimates tool wear using edge images. Although the complete-automated solution, Neural Wear software for tool wear recognition plus the ANN model of tool life prediction, presented a slightly higher error than the direct measurements, it was within the same range and can meet all industrial requirements. These results confirm that the combination of image recognition software and ANN modelling could potentially be developed into a useful industrial tool for low-cost estimation of tool life in turning operations.
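A minimal sketch of the second step's idea, training a small neural network on wear measurements and reading off tool life at a wear criterion; the data points, network size, and the 0.3 mm criterion are hypothetical, not the study's values:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data from the first two cutting edges:
# cutting time (min) -> measured flank wear VB (mm).
t_train = np.array([[2], [4], [6], [8], [10], [12], [14], [16]], float)
vb_train = np.array([0.05, 0.09, 0.12, 0.15, 0.19, 0.24, 0.30, 0.38])

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(t_train, vb_train)

# Estimate tool life as the time at which predicted VB first crosses an
# assumed 0.3 mm wear criterion (argmax returns 0 if it never crosses).
t_grid = np.linspace(2, 20, 200).reshape(-1, 1)
vb_pred = model.predict(t_grid)
life = t_grid[np.argmax(vb_pred >= 0.3), 0]
print(f"estimated tool life ~ {life:.1f} min")
```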
Conformational Sampling in Template-Free Protein Loop Structure Modeling: An Overview
Li, Yaohang
2013-01-01
Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a “mini protein folding problem” under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances of computational methods as well as stably increasing number of known structures available in PDB. This mini review provides an overview on the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized. PMID:24688696
O’Malley, Denalee; Hudson, Shawna V.; Nekhlyudov, Larissa; Howard, Jenna; Rubinstein, Ellen; Lee, Heather S.; Overholser, Linda S.; Shaw, Amy; Givens, Sarah; Burton, Jay S.; Grunfeld, Eva; Parry, Carly; Crabtree, Benjamin F.
2016-01-01
PURPOSE This study describes the experiences of early implementers of primary care-focused cancer survivorship delivery models. METHODS Snowball sampling was used to identify innovators. Twelve participants (five cancer survivorship primary care innovators and seven content experts) attended a working conference focused on cancer survivorship population strategies and primary care transformation. Data included meeting discussion transcripts/field notes, transcribed in-depth innovator interviews, and innovators’ summaries of care models. We used a multi-step immersion/crystallization analytic approach, guided by a primary care organizational change model. RESULTS Innovative practice models included: 1) a consultative model in a primary care setting; 2) a primary care physician (PCP)-led, blended consultative/panel-based model in an oncology setting; 3) an oncology nurse navigator in a primary care practice; and 4) two sub-specialty models where PCPs in a general medical practice dedicated part of their patient panel to cancer survivors. Implementation challenges included: (1) lack of key stakeholder buy-in; (2) practice resources allocated to competing (non-survivorship) change efforts; and (3) competition with higher priority initiatives incentivized by payers. CONCLUSIONS Cancer survivorship delivery models are potentially feasible in primary care; however, significant barriers to widespread implementation exist. Implementation efforts would benefit from increasing the awareness and potential value-add of primary care-focused strategies to address survivors’ needs. PMID:27277895
NASA Technical Reports Server (NTRS)
Keppenne, Christian; Vernieres, Guillaume; Rienecker, Michele; Jacob, Jossy; Kovach, Robin
2011-01-01
Satellite altimetry measurements have provided global, evenly distributed observations of the ocean surface since 1993. However, the difficulties introduced by the presence of model biases and the requirement that data assimilation systems extrapolate the sea surface height (SSH) information to the subsurface in order to estimate the temperature, salinity and currents make it difficult to optimally exploit these measurements. This talk investigates the potential of altimetry data assimilation once the biases are accounted for with an ad hoc bias estimation scheme. Either steady-state or state-dependent multivariate background-error covariances from an ensemble of model integrations are used to address the problem of extrapolating the information to the subsurface. The GMAO ocean data assimilation system, applied to an ensemble of coupled model instances using the GEOS-5 AGCM coupled to MOM4, is used in the investigation. To model the background-error covariances, the system relies on a hybrid ensemble approach in which a small number of dynamically evolved model trajectories is augmented, on the one hand, with past instances of the state vector along each trajectory and, on the other, with a steady-state ensemble of error estimates from a time series of short-term model forecasts. A state-dependent adaptive error-covariance localization and inflation algorithm controls how the SSH information is extrapolated to the subsurface. A two-step predictor-corrector approach is used to assimilate future information. Independent (not assimilated) temperature and salinity observations from Argo floats are used to validate the assimilation. A two-step projection method, in which the system first calculates an SSH increment and then projects this increment vertically onto the temperature, salt and current fields, is found to be most effective in reconstructing the subsurface information. The performance of the system in reconstructing the subsurface fields is particularly impressive for temperature, but less satisfactory for salt.
NASA Astrophysics Data System (ADS)
Lee, K.; Chung, E.; Park, K.
2007-12-01
Many urbanized watersheds suffer from streamflow depletion and poor stream quality, which often negatively affect related factors such as in-stream and near-stream ecological integrity and water supply. Because all hydrological components are closely related, watershed management that does not consider all potential risks is inadequate. This study has therefore developed and applied a ten-step integrated watershed management (IWM) procedure to sustainably rehabilitate hydrologic cycles distorted by urbanization. Step 1 of this procedure is understanding the watershed components and processes. This study proposes not only water quantity/quality monitoring but also continuous water quantity/quality simulation and estimation of annual pollutant loads from the unit loads of all land uses. Step 2 is quantifying the watershed problem as potential flood damage (PFD), potential streamflow depletion (PSD), potential water quality deterioration (PWQD) and a watershed evaluation index (WEI). All indicators are selected following the sustainability concept of the Pressure-State-Response (PSR) model. All weights are estimated by the Analytic Hierarchy Process (AHP), as sketched after this paragraph. The four indices are calculated using composite programming, a multicriteria decision-making (MCDM) technique. In Step 3, residents' preferences on the management objectives, which consist of flood damage mitigation, prevention of streamflow depletion, and water quality enhancement, are quantified; the WEI can be recalculated using these values. Step 4 sets the specific goals and objectives based on the results of Steps 2 and 3. Objectives can include spatial flood allocation, instreamflow requirements and total maximum daily load (TMDL). Steps 5 and 6 are to develop all possible alternatives and to eliminate the infeasible ones. Step 7 analyzes the effectiveness of all remaining feasible alternatives. The water quantity criteria are expressed as changes in low flow (Q275) and drought flow (Q355) on the flow duration curve and as the number of days the instreamflow requirement is satisfied; the water quality criteria are expressed as changes in average BOD concentration and total daily loads and as the number of days the TMDL is satisfied. Step 8 involves the calculation of the AEI using various MCDM techniques. The indicators of the AEI are obtained from the sustainability concept of the Drivers-Pressure-State-Impact-Response (DPSIR) model, an improved PSR model; all previous results are used in this step. Step 9 estimates the benefits and costs of the alternatives. Discrete willingness to pay (WTP) for specific improvements of current watershed conditions is estimated by the choice experiment method, an economic valuation using stated preference techniques. The WTPs of specific alternatives are calculated by combining the AEI and the choice experiment results, so the benefit of an alternative can be obtained by multiplying WTP by the total number of households in the sub-watershed. Finally, in Step 10, the final alternatives are determined by comparing net benefits and benefit-cost ratios. Final alternatives derived from the proposed IWM procedure should not be carried out immediately but should be discussed by stakeholders and decision makers. However, since the plans obtained from these analyses reflect the sustainability concept, such alternatives are comparatively likely to be accepted. This ten-step procedure will be helpful in building decision support systems for sustainable IWM.
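Step 2's AHP weighting has a compact standard computation, the principal eigenvector of a pairwise-comparison matrix; the matrix entries below are invented for illustration, not the study's elicited judgments:

```python
import numpy as np

# Hypothetical 3x3 AHP pairwise-comparison matrix for three criteria
# (flood damage, streamflow depletion, water quality): entry [i, j] says
# how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

# AHP weights: principal eigenvector of A, normalized to sum to one.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
print("criterion weights:", np.round(w, 3))

# Consistency ratio check (random index RI = 0.58 for n = 3).
lam = np.max(np.real(vals))
ci = (lam - 3) / (3 - 1)
print("consistency ratio:", round(ci / 0.58, 3))
```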
Deep generative learning for automated EHR diagnosis of traditional Chinese medicine.
Liang, Zhaohui; Liu, Jun; Ou, Aihua; Zhang, Honglai; Li, Ziping; Huang, Jimmy Xiangji
2018-05-04
Computer-aided medical decision-making (CAMDM) is the method of utilizing massive EMR data as both empirical and evidence support for the decision procedures of healthcare activities. Well-developed information infrastructure, such as hospital information systems and disease surveillance systems, provides abundant data for CAMDM. However, the complexity of EMR data with abstract medical knowledge makes conventional models incompetent for the analysis. Thus a deep belief network (DBN)-based model is proposed to simulate the information analysis and decision-making procedure in medical practice. The purpose of this paper is to evaluate a deep learning architecture as an effective solution for CAMDM. A two-step model is applied in our study. In the first step, an optimized seven-layer DBN is applied as an unsupervised learning algorithm to perform model training and acquire feature representations. A support vector machine model is then stacked on the DBN in the second, supervised learning step. Two data sets are used in the experiments. One is a plain-text data set indexed by medical experts. The other is a structured data set on primary hypertension. The data are randomly divided to generate the training set for the unsupervised learning and the testing set for the supervised learning. Model performance is evaluated by the statistics of mean and variance and by the average precision and coverage on the data sets. Two conventional shallow models (a support vector machine, SVM, and a decision tree, DT) are applied as comparisons to show the superiority of our proposed approach. The deep learning (DBN + SVM) model outperforms the simple SVM and DT on both data sets in terms of all the evaluation measures, which confirms our motivation that the deep model is good at capturing key features with less dependence on a manually built index. Our study shows that the two-step deep learning model achieves high performance for medical information retrieval over the conventional shallow models. It is able to capture the features of both plain text and the highly structured database of EMR data. The performance of the deep model is superior to that of conventional shallow learning models such as SVM and DT. It is an appropriate knowledge-learning model for information retrieval in an EMR system. Therefore, deep learning provides a good solution for improving the performance of CAMDM systems.
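A minimal sketch of the unsupervised-features-then-SVM pattern using scikit-learn; a single RBM stands in for the paper's seven-layer DBN, and the bundled digits data set stands in for the EHR data:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Stand-in data; the paper used EHR text and a hypertension database.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale features to [0, 1] for the RBM
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unsupervised feature learning (one RBM layer here, versus the paper's
# stacked DBN), followed by a supervised SVM on the learned features.
model = Pipeline([
    ("rbm", BernoulliRBM(n_components=100, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("svm", SVC(kernel="rbf", C=10.0)),
])
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```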
The Ames two-dimensional stratosphere-mesospheric model. [chemistry and transport of SST pollution
NASA Technical Reports Server (NTRS)
Whitten, R. C.; Borucki, W. J.; Watson, V. R.; Capone, L. A.; Maples, A. L.; Riegel, C. A.
1974-01-01
A two-dimensional model of the stratosphere and mesosphere has recently been developed at Ames Research Center. The model contains chemistry based on 18 species that are solved for at each step and a seasonally-varying transport model based on both winds and eddy transport. The model is described and a preliminary assessment of the impact of supersonic aircraft flights on the ozone layer is given.
An automatic segmentation method of a parameter-adaptive PCNN for medical images.
Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide
2017-09-01
Since the pre-processing and initial segmentation steps in medical images directly affect the final segmentation results for the regions of interest, an automatic segmentation method based on a parameter-adaptive pulse-coupled neural network is proposed to integrate these two segmentation steps into one. This method has low computational complexity for different kinds of medical images and high segmentation precision. The method comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve the initial segmentation precision. Compared with state-of-the-art algorithms, the new method achieves comparable performance in experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and right breast, with overall metrics UM of 0.9845, CM of 0.8142, and TM of 0.0726. The algorithm has great potential for accomplishing the pre-processing and initial segmentation steps in various medical images, which is a premise for assisting physicians in detecting and diagnosing clinical cases.
Wildhaber, Mark L.; Dey, Rima; Wikle, Christopher K.; Moran, Edward H.; Anderson, Christopher J.; Franz, Kristie J.
2015-01-01
In managing fish populations, especially at-risk species, realistic mathematical models are needed to help predict population responses to potential management actions in the context of environmental conditions and changing climate, while effectively incorporating the stochastic nature of real-world conditions. We provide a key component of such a model for the endangered pallid sturgeon (Scaphirhynchus albus) in the form of an individual-based bioenergetics model influenced not only by temperature but also by flow. This component is based on modifying a known individual-based bioenergetics model through the incorporation of: the observed ontogenetic shift in pallid sturgeon diet from macroinvertebrates to fish; the energetic costs of swimming under flowing-water conditions; and stochasticity. We assess how differences in environmental conditions could potentially alter pallid sturgeon growth estimates, using observed temperature and velocity from channelized portions of the Lower Missouri River mainstem. We do this using separate relationships between the proportion of maximum consumption and fork length, and separate swimming-cost standard error estimates, for fish captured above and below the Kansas River in the Lower Missouri River. Critical to matching observed growth in the field with predicted growth based on observed environmental conditions was a two-step shift in diet from macroinvertebrates to fish.
Exploring patient satisfaction predictors in relation to a theoretical model.
Grøndahl, Vigdis Abrahamsen; Hall-Lord, Marie Louise; Karlsson, Ingela; Appelgren, Jari; Wilde-Larsson, Bodil
2013-01-01
The aim is to describe patients' care quality perceptions and satisfaction and to explore potential patient satisfaction predictors, namely person-related conditions, external objective care conditions and patients' perception of actual care received ("PR"), in relation to a theoretical model. A cross-sectional design was used. Data were collected using one questionnaire combining questions from four instruments: Quality from Patients' Perspective; Sense of Coherence; Big Five personality traits; and the Emotional Stress Reaction Questionnaire (ESRQ), together with questions from previous research. In total, 528 patients (83.7 per cent response rate) from eight medical, three surgical and one medical/surgical ward in five Norwegian hospitals participated. Answers from 373 respondents with complete ESRQ questionnaires were analysed. Sequential multiple regression analysis with ESRQ as the dependent variable was run in three steps: person-related conditions, external objective care conditions, and PR (p < 0.05). Step 1 (person-related conditions) explained 51.7 per cent of the ESRQ variance. Step 2 (external objective care conditions) explained an additional 2.4 per cent. Step 3 (PR) gave no significant additional explanation (0.05 per cent). Steps 1 and 2 contributed statistically significantly to the model. Patients rated both quality of care and satisfaction highly. The paper shows that the theoretical model, using an emotion-oriented approach to assess patient satisfaction, can explain 54 per cent of patient satisfaction in a statistically significant manner.
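The three-step sequential (hierarchical) regression can be sketched by entering predictor blocks cumulatively and tracking the explained variance; the data below are synthetic stand-ins, not the study's variables:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 373  # same sample size as the analysed respondents

# Synthetic stand-ins for the three predictor blocks and the outcome.
person = rng.normal(size=(n, 3))      # step 1: person-related conditions
external = rng.normal(size=(n, 2))    # step 2: external objective conditions
perceived = rng.normal(size=(n, 1))   # step 3: perception of care received
y = person @ [0.6, 0.3, 0.2] + 0.15 * external[:, 0] + rng.normal(size=n)

blocks, X = [person, external, perceived], None
for step, b in enumerate(blocks, start=1):
    X = b if X is None else np.hstack([X, b])  # enter the next block
    r2 = LinearRegression().fit(X, y).score(X, y)
    print(f"step {step}: cumulative R^2 = {r2:.3f}")
```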
Exploratory Development of Corrosion Inhibiting Primers
1977-07-01
Phenolic Hardener. From previous studies, phenol-formaldehyde resins of the novolac (two-step) type have given superior properties when used to cure epoxy...novolacs and three resole (one-step) type phenol-formaldehyde resins which also perform as epoxide curing agents. First, Model #1, as described in Section...results. Varcum 4326 resin was chosen at this stage for further use with the model systems. It is a low-molecular-weight phenol-formaldehyde resin used
Mediation analysis of alcohol consumption, DNA methylation, and epithelial ovarian cancer.
Wu, Dongyan; Yang, Haitao; Winham, Stacey J; Natanzon, Yanina; Koestler, Devin C; Luo, Tiane; Fridley, Brooke L; Goode, Ellen L; Zhang, Yanbo; Cui, Yuehua
2018-03-01
Epigenetic factors and consumption of alcohol, which suppresses DNA methylation, may influence the development and progression of epithelial ovarian cancer (EOC). However, there is a lack of understanding of whether these factors interact to affect EOC risk. In this study, we aimed to gain insight into this relationship by identifying leukocyte-derived DNA methylation markers acting as potential mediators of alcohol-associated EOC. We implemented a causal inference test (CIT) and the VanderWeele and Vansteelandt multiple mediator model to examine CpG sites that mediate the association between alcohol consumption and EOC risk. We modified one step of the CIT by adopting a high-dimensional inference procedure. The data were based on 196 cases and 202 age-matched controls from the Mayo Clinic Ovarian Cancer Case-Control Study. Implementation of the CIT revealed two CpG sites (cg09358725, cg11016563) that represent potential mediators of the relationship between alcohol consumption and EOC case-control status. Implementation of the VanderWeele and Vansteelandt multiple mediator model further revealed that these two CpGs were the key mediators. Decreased methylation at both CpGs was more common in cases who drank alcohol at the time of enrollment than in those who did not. cg11016563 resides in TRPC6, which has previously been shown to be overexpressed in EOC. These findings suggest these two CpGs may serve as novel biomarkers for EOC susceptibility.
Changing Instructional Practices through Technology Training, Part 2 of 2.
ERIC Educational Resources Information Center
Seamon, Mary
2001-01-01
This second part of a two-part article introducing the steps in a school district's teacher professional development model discusses steps three through six: Web page or project; Internet Discovery (with its five phases: question, search, interpretation, composition, sharing); Cyberinquiry; and WebQuests. Three examples are included: Web Page…
Numerical modelling of electrochemical polarization around charged metallic particles
NASA Astrophysics Data System (ADS)
Bücker, Matthias; Undorf, Sabine; Flores Orozco, Adrián; Kemna, Andreas
2017-04-01
We extend an existing analytical model and carry out numerical simulations to study the polarization process around charged metallic particles immersed in an electrolyte solution. Electro-migration and diffusion processes in the electrolyte are described by the Poisson-Nernst-Planck system of partial differential equations. To model the surface charge density, we consider a time- and frequency-invariant electric potential at the particle surface, which leads to the build-up of a static electrical double layer (EDL). Upon excitation by an external electric field at low frequencies, we observe the superposition of two polarization processes. On the one hand, the induced dipole moment on the metallic particle leads to the accumulation of opposite charges in the electrolyte. This charge polarization corresponds to the long-known response of uncharged metallic particles. On the other hand, the unequal cation and anion concentrations in the EDL give rise to a salinity gradient between the two opposite sides of the metallic particle. The resulting concentration polarization enhances the magnitude of the overall polarization response. Furthermore, we use our numerical model to study the effect of relevant model parameters, such as surface charge density and ionic strength of the electrolyte, on the resulting spectra of the effective conductivity of the composite model system. Our results not only give interesting new insight into the time-harmonic variation of electric potential and ion concentrations around charged metallic particles; they also reduce incongruities between earlier model predictions and geophysical field and laboratory measurements. Our model thereby improves the general understanding of IP signatures of metallic particles and represents a next step towards a quantitative interpretation of IP imaging results. Part of this research is funded by the Austrian Federal Ministry of Science, Research and Economy under the Raw Materials Initiative.
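For reference, the Poisson-Nernst-Planck system the abstract refers to can be written in its standard textbook form (with diffusivities D_i, valences z_i, and permittivity ε; not copied from the paper):

```latex
% Nernst-Planck continuity equation for each ion species i, coupled to
% the Poisson equation for the electric potential phi:
\begin{align}
  \frac{\partial c_i}{\partial t}
    &= \nabla \cdot \Big( D_i \nabla c_i
       + \frac{z_i e D_i}{k_B T}\, c_i \nabla \phi \Big), \\
  -\nabla \cdot \big( \varepsilon \nabla \phi \big)
    &= \sum_i z_i e\, c_i .
\end{align}
```

Here c_i are the ion concentrations and φ the electric potential; the first equation balances diffusive and electro-migrative fluxes, and the second closes the system through the space charge.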
Spike avalanches in vivo suggest a driven, slightly subcritical brain state
Priesemann, Viola; Wibral, Michael; Valderrama, Mario; Pröpper, Robert; Le Van Quyen, Michel; Geisel, Theo; Triesch, Jochen; Nikolić, Danko; Munk, Matthias H. J.
2014-01-01
In self-organized critical (SOC) systems, avalanche size distributions follow power laws. Power laws have also been observed for neural activity, and so it has been proposed that SOC underlies brain organization as well. Surprisingly, for spiking activity in vivo, evidence for SOC is still lacking. Therefore, we analyzed highly parallel spike recordings from awake rats and monkeys, anesthetized cats, and also local field potentials from humans. We compared these to spiking activity from two established critical models: the Bak-Tang-Wiesenfeld model and a stochastic branching model. We found fundamental differences between the neural and the model activity. These differences could be overcome for both models through a combination of three modifications: (1) subsampling, (2) increasing the input to the model (thereby eliminating the separation of time scales, which is fundamental to SOC and its avalanche definition), and (3) making the model slightly subcritical. The match between the neural activity and the modified models held not only for the classical avalanche size distributions and estimated branching parameters, but also for two novel measures (mean avalanche size and frequency of single spikes), and for the dependence of all these measures on the temporal bin size. Our results suggest that neural activity in vivo shows a mélange of avalanches, not temporally separated ones, and that their global activity propagation can be approximated by the principle that one spike on average triggers a little less than one spike in the next step. This implies that neural activity does not reflect a SOC state but a slightly subcritical regime without a separation of time scales. Potential advantages of this regime may be faster information processing and a safety margin from supercriticality, which has been linked to epilepsy. PMID:25009473
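The driven, slightly subcritical branching picture is easy to reproduce in miniature. The Python sketch below simulates a branching process with a branching ratio just below one plus an external drive, records activity on a small subsample of units, and reads avalanche sizes off the subsampled signal; all parameter values are illustrative rather than fitted to the paper's data:

    import numpy as np

    rng = np.random.default_rng(1)
    n_units, m, h, T = 1000, 0.98, 0.1, 100_000   # units, branching ratio, drive, time bins
    n_sampled = 50                                # number of "recorded" units
    sampled = rng.choice(n_units, n_sampled, replace=False)

    activity = np.zeros(T, dtype=int)
    a = 0                                         # currently active units
    for t in range(T):
        a = min(rng.poisson(m * a + h), n_units)  # each spike triggers ~m spikes, plus drive h
        active_sites = rng.choice(n_units, a, replace=False)
        activity[t] = np.isin(active_sites, sampled).sum()   # subsampled observation

    # avalanche sizes from the subsampled activity: spikes between silent bins
    cuts = np.where(activity == 0)[0]
    sizes = [seg.sum() for seg in np.split(activity, cuts) if seg.sum() > 0]
    print("mean avalanche size:", np.mean(sizes))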
A computational kinetic model of diffusion for molecular systems.
Teo, Ivan; Schulten, Klaus
2013-09-28
Regulation of biomolecular transport in cells involves intra-protein steps like gating and passage through channels, but these steps are preceded by extra-protein steps, namely the diffusive approach and admittance of solutes. The extra-protein steps develop over a 10-100 nm length scale, typically in a highly particular environment characterized by the protein's geometry, surrounding electrostatic field, and location. In order to account for the energetics and mobility of solutes in this environment at a relevant resolution, we propose a particle-based kinetic model of diffusion built on a Markov State Model framework. The prerequisite input data consist of diffusion coefficient and potential of mean force maps generated from extensive molecular dynamics simulations of proteins and their environment that sample multi-nanosecond durations. The suggested diffusion model can describe transport processes beyond microsecond duration, relevant for biological function and beyond the realm of molecular dynamics simulation. For this purpose the systems are represented by a discrete set of states specified by the positions, volumes, and surface elements of Voronoi grid cells distributed according to a density function resolving the often intricate relevant diffusion space. Validation tests carried out for generic diffusion spaces show that the model and the associated Brownian motion algorithm are viable over a large range of parameter values, such as time step, diffusion coefficient, and grid density. A concrete application of the method is demonstrated for ion diffusion around and through the Escherichia coli mechanosensitive channel of small conductance, ecMscS.
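The continuous-space ingredient underlying such a model, overdamped Brownian motion with a position-dependent diffusion coefficient D(x) and a potential of mean force U(x), can be sketched in one dimension as follows. The forms of D and U are arbitrary illustrations, and the Voronoi-cell Markov state construction itself is not reproduced here:

    import numpy as np

    kT, dt = 1.0, 1e-3
    rng = np.random.default_rng(2)

    def D(x):                       # illustrative position-dependent diffusivity map
        return 1.0 + 0.5 * np.tanh(x)

    def dD(x):                      # its spatial derivative (Ito drift correction)
        return 0.5 / np.cosh(x) ** 2

    def F(x):                       # force from an illustrative double-well PMF
        return x - x ** 3           # F = -dU/dx with U(x) = x**4/4 - x**2/2

    x = 0.0
    traj = np.empty(100_000)
    for i in range(traj.size):
        # Ermak-McCammon step: deterministic drift from the force and from grad D,
        # plus Gaussian noise with variance 2 D(x) dt
        x += (D(x) * F(x) / kT + dD(x)) * dt \
             + np.sqrt(2.0 * D(x) * dt) * rng.standard_normal()
        traj[i] = x

    # sanity check: the long-run histogram of traj should approach exp(-U(x)/kT)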
The local work function: Concept and implications
NASA Astrophysics Data System (ADS)
Wandelt, K.
1997-02-01
The term 'local work function' is now widely applied. The present work discusses the common physical basis of 'photoemission of adsorbed xenon (PAX)' and 'two-photon photoemission spectroscopy of image-potential states' as local work function probes. New examples with bimetallic and defective surfaces are presented which demonstrate the capability of PAX measurements for the characterization of heterogeneous surfaces on an atomic scale. Finally, implications of the existence of short-range variations of the surface potential at surface steps are addressed. In particular, dynamical work function change measurements are a sensitive probe for the step density at surfaces and, as such, a powerful in situ method to monitor film growth.
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure from strain measurements at discrete locations. In the first step, the measured strain is fitted using a piecewise least-squares curve-fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process (SEREP). The theory is first validated on a computational model, a cantilevered rectangular wing. It is then applied to test data from a cantilevered swept-wing model.
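A minimal numerical sketch of the two-step idea, with an assumed cantilever strain profile, fiber offset, and crude mode shapes standing in for a real finite element model, might look as follows in Python:

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Step 1: spline-fit the discrete strain and integrate twice along the fiber.
    Lb, c = 1.0, 0.01                      # beam length [m], fiber offset from neutral axis [m]
    x_meas = np.linspace(0.0, Lb, 8)       # strain-gauge stations
    strain = 1e-4 * (1.0 - x_meas / Lb)    # assumed strain of a tip-loaded cantilever
    kappa = CubicSpline(x_meas, strain / c)   # curvature w'' = strain / c

    x = np.linspace(0.0, Lb, 200)
    slope = kappa.antiderivative(1)(x)     # w'; vanishes at the clamped root (first knot)
    defl = kappa.antiderivative(2)(x)      # w;  vanishes at the clamped root (first knot)

    # Step 2: SEREP-style expansion to unmeasured degrees of freedom.
    x_full = np.linspace(0.0, Lb, 50)      # stand-in "finite element" grid
    modes = np.column_stack([(x_full / Lb) ** 2, (x_full / Lb) ** 3])  # crude bending shapes
    idx = np.array([np.abs(x_full - xm).argmin() for xm in x_meas])    # nearest FE nodes
    w_meas = kappa.antiderivative(2)(x_meas)
    q = np.linalg.lstsq(modes[idx], w_meas, rcond=None)[0]  # least-squares modal coordinates
    w_full = modes @ q                     # deflection expanded to every FE node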