Structural identifiability analysis of a cardiovascular system model.
Pironet, Antoine; Dauby, Pierre C; Chase, J Geoffrey; Docherty, Paul D; Revie, James A; Desaive, Thomas
2016-05-01
The six-chamber cardiovascular system model of Burkhoff and Tyberg has been used in several theoretical and experimental studies. However, this cardiovascular system model (and others derived from it) is not identifiable from any output set. In this work, two such cases of structural non-identifiability are first presented. These cases occur when the model output set only contains a single type of information (pressure or volume). A specific output set is thus chosen, mixing pressure and volume information and containing only a limited number of clinically available measurements. Then, by manipulating the model equations involving these outputs, it is demonstrated that the six-chamber cardiovascular system model is structurally globally identifiable. A further simplification is made, assuming known cardiac valve resistances. This assumption is common because these four parameters are poorly identifiable in practice. Under this hypothesis, the six-chamber cardiovascular system model is structurally identifiable from an even smaller dataset. As a consequence, parameter values computed from limited but well-chosen datasets are theoretically unique. This means that the parameter identification procedure can safely be performed on the model from such a well-chosen dataset. Thus, the model may be considered suitable for use in diagnosis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
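The abstract's argument, that a well-chosen mixed pressure/volume output set lets the model equations be solved uniquely for the parameters, can be illustrated on a toy two-parameter fragment. The sketch below is not the authors' six-chamber derivation; the elastance/resistance equations and all symbol names are illustrative assumptions.

```python
# Toy structural-identifiability check in the spirit of "manipulating the
# model equations involving these outputs" (a sketch, not the paper's model).
import sympy as sp

E, R = sp.symbols('E R', positive=True)  # unknown parameters
V, P_up, P_down, Q = sp.symbols('V P_up P_down Q', positive=True)  # outputs

# Assumed model fragment: elastance chamber P_up = E*V,
# resistive valve Q = (P_up - P_down) / R.
eqs = [sp.Eq(P_up, E * V), sp.Eq(Q, (P_up - P_down) / R)]

# Solve the model equations for the parameters in terms of the outputs.
sol = sp.solve(eqs, [E, R], dict=True)
print(sol)  # [{E: P_up/V, R: (P_up - P_down)/Q}] -> a unique solution, so this
            # mixed output set makes (E, R) structurally globally identifiable
```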
NASA Astrophysics Data System (ADS)
Abul Kashem, Saad Bin; Ektesabi, Mehran; Nagarajah, Romesh
2012-07-01
This study examines the uncertainties in modelling a quarter car suspension system caused by the effect of different sets of suspension parameters of a corresponding mathematical model. To overcome this problem, 11 sets of identified parameters of a suspension system have been compared, taken from the most recent published work. From this investigation, a set of parameters was chosen which showed a better performance than the others in respect of peak amplitude and settling time. These chosen parameters were then used to investigate the performance of a new modified continuous skyhook control strategy with adaptive gain for the vehicle's semi-active suspension system. The proposed system first captures the road profile input over a certain period. Then it calculates the best possible value of the skyhook gain (SG) for the subsequent process. Meanwhile the system is controlled according to the new modified skyhook control law using an initial or previous value of the SG. In this study, the proposed suspension system is compared with passive and other recently reported skyhook controlled semi-active suspension systems. Its performance has been evaluated in terms of ride comfort and road handling. The model has been validated in accordance with the ISO 2631 standard for admissible acceleration levels and human vibration perception.
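The abstract does not spell out the modified skyhook law, so the sketch below implements the classical continuous skyhook switching logic inside a toy quarter-car simulation, with a grid search standing in for the paper's adaptive skyhook-gain (SG) update; all masses, stiffnesses, and the road profile are made-up values.

```python
import numpy as np

def skyhook_force(v_body, v_rel, c_sky, c_min=150.0):
    # Continuous skyhook: demand -c_sky * v_body when the semi-active damper
    # can realize it (body and relative velocities share a sign), otherwise
    # fall back to light passive damping.
    return -c_sky * v_body if v_body * v_rel > 0 else -c_min * v_rel

def quarter_car_rms_acc(c_sky, road, dt=1e-3, ms=300.0, mu=40.0,
                        ks=16000.0, kt=160000.0):
    # Crude symplectic-Euler quarter-car simulation returning RMS body
    # acceleration (a comfort proxy) for one candidate skyhook gain.
    zs = vs = zu = vu = 0.0
    acc = []
    for zr in road:
        fd = skyhook_force(vs, vs - vu, c_sky)
        a_s = (-ks * (zs - zu) + fd) / ms
        a_u = (ks * (zs - zu) - fd - kt * (zu - zr)) / mu
        vs += a_s * dt; zs += vs * dt
        vu += a_u * dt; zu += vu * dt
        acc.append(a_s)
    return np.sqrt(np.mean(np.square(acc)))

# Adaptive step: after capturing a road segment, pick the best SG for the
# next segment (grid search stands in for the paper's update rule).
road = 0.01 * np.sin(2 * np.pi * 2.0 * np.arange(0, 2, 1e-3))  # toy profile
gains = np.linspace(500.0, 4000.0, 8)
best_gain = min(gains, key=lambda g: quarter_car_rms_acc(g, road))
print(best_gain)
```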
NASA Astrophysics Data System (ADS)
Kehoe, S.; Stokes, J.
2011-03-01
Physicochemical properties of hydroxyapatite (HAp) synthesized by the chemical precipitation method are heavily dependent on the chosen process parameters. A Box-Behnken three-level experimental design was therefore chosen to determine the optimum set of process parameters and their effect on various HAp characteristics. These effects were quantified using a design-of-experiments (DoE) approach to develop mathematical models in terms of the chemical precipitation process parameters. Findings from this research show that HAp possessing optimum powder characteristics for orthopedic application via a thermal spray technique can be prepared using the following chemical precipitation process parameters: reaction temperature 60 °C, ripening time 48 h, and stirring speed 1500 rpm using high reagent concentrations. Ripening time and stirring speed significantly affected the final phase purity for the experimental conditions of the Box-Behnken design. An increase in both the ripening time (36-48 h) and stirring speed (1200-1500 rpm) was found to result in an increase of phase purity from 47(±2)% to 85(±2)%. Crystallinity, crystallite size, lattice parameters, and mean particle size were also optimized to find settings that achieve results compliant with FDA regulations.
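To make the design structure concrete, the sketch below constructs a generic three-factor Box-Behnken matrix in coded units and maps it onto the three precipitation factors; the high levels (60 °C, 48 h, 1500 rpm) come from the abstract, while the low levels and the mapping are illustrative assumptions.

```python
import itertools
import numpy as np

def box_behnken_3(center_points=3):
    # Three-factor Box-Behnken design in coded units (-1, 0, +1): all +/-1
    # combinations for each pair of factors with the third factor held at 0,
    # plus replicated centre points.
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * center_points
    return np.array(runs, dtype=float)

# Map coded levels onto (temperature degC, ripening time h, stirring rpm);
# the low levels here are assumed, only the highs appear in the abstract.
lows  = np.array([40.0, 24.0,  900.0])
highs = np.array([60.0, 48.0, 1500.0])
design = lows + (box_behnken_3() + 1) / 2 * (highs - lows)
print(design)  # 15 runs: 12 edge-midpoint runs + 3 centre replicates
```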
Modeling Philosophies and Applications
All models begin with a framework and a set of assumptions and limitations that go along with that framework. In terms of hydraulic fracturing (fracing) and risk assessment (RA), there are several places where models and parameters must be chosen to complete hazard identification.
Adaptive Local Realignment of Protein Sequences.
DeBlasio, Dan; Kececioglu, John
2018-06-11
While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy by as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.
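The realignment loop can be sketched generically, as below: scan windows, keep regions whose estimated accuracy is already high, otherwise try each advised parameter setting and keep the best-scoring realignment. The `estimate_accuracy` and `realign` callables are stand-ins for Facet and Opal, which are not reimplemented here, and the window and threshold values are arbitrary.

```python
def adaptive_local_realignment(alignment, candidate_settings,
                               estimate_accuracy, realign,
                               window=50, threshold=0.7):
    """Generic sketch of local parameter advising (not the Opal/Facet code).

    alignment: list of aligned columns;
    estimate_accuracy(cols) -> float in [0, 1];
    realign(cols, setting) -> new columns (assumed to preserve column count).
    """
    result = list(alignment)
    for start in range(0, len(result), window):
        region = result[start:start + window]
        best_score = estimate_accuracy(region)
        if best_score >= threshold:        # region already looks accurate
            continue
        best_region = region
        for setting in candidate_settings:  # try each advised setting
            candidate = realign(region, setting)
            score = estimate_accuracy(candidate)
            if score > best_score:
                best_score, best_region = score, candidate
        result[start:start + window] = best_region
    return result
```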
2012-03-22
shapes tested, when the objective parameter set was confined to a dictionary's defined parameter space. These physical characteristics included... [table-of-contents fragments: 2.3 Hypothesis Testing and Detection Theory; 2.4 3-D SAR Scattering Models] ...The basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to its inherent efficiency and error tolerance. Multiple shape dictionaries...
Process Control Strategies for Dual-Phase Steel Manufacturing Using ANN and ANFIS
NASA Astrophysics Data System (ADS)
Vafaeenezhad, H.; Ghanei, S.; Seyedein, S. H.; Beygi, H.; Mazinani, M.
2014-11-01
In this research, a comprehensive soft-computing approach is presented for analyzing the parameters that influence the manufacturing of dual-phase steels. A set of experimental data was gathered to build the database used for training and testing both artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). The input parameters were intercritical annealing temperature, carbon content, and holding time, with martensite percentage as the output. A fraction of the data set was chosen to train both the ANN and the ANFIS, and the rest was used to validate the performance of the trained networks on unseen data. To compare the results, the coefficient of determination and root-mean-squared error were chosen as indexes. With these artificial intelligence methods, it is not necessary to establish a preliminary mathematical model and formulate the parameters affecting it. In conclusion, the martensite percentage corresponding to a given set of manufacturing parameters can be determined prior to production using these controlling algorithms. Although the results from both the ANN and the ANFIS are very encouraging, the proposed ANFIS outperforms the ANN and offers greater cost-reduction benefit.
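A minimal stand-in for the ANN half of the study, using scikit-learn on synthetic data (the experimental database is not reproduced here); the input ranges and the linear response below are invented purely to make the script run.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 200
# Synthetic inputs: (annealing temperature degC, carbon wt%, holding time min).
X = np.column_stack([rng.uniform(740, 820, n),
                     rng.uniform(0.08, 0.20, n),
                     rng.uniform(5, 60, n)])
# Invented response: martensite % rising with T, %C and time, plus noise.
y = 0.4 * (X[:, 0] - 740) + 150 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10, 10),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
# Report the two indexes the paper compares: R^2 and RMSE.
print(f"R^2 = {r2_score(y_te, pred):.3f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.3f}")
```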
Modulating Wnt Signaling Pathway to Enhance Allograft Integration in Orthopedic Trauma Treatment
2013-10-01
presented below. Quantitative output provides an extensive set of data but we have chosen to present the most relevant parameters that are reflected in... multiple parameters. Most samples have been mechanically tested and data extracted for multiple parameters. Histological evaluation of a subset of... Sumner, D. R. Saline Irrigation Does Not Affect Bone Formation or Fixation Strength of Hydroxyapatite/Tricalcium Phosphate-Coated Implants in a Rat Model
NASA Astrophysics Data System (ADS)
Chiabrando, F.; Lingua, A.; Maschio, P.; Teppati Losè, L.
2017-02-01
The purpose of this paper is to discuss how much the phases of flight planning and the setting of the camera orientation can affect a UAV photogrammetric survey. The test site chosen for these evaluations was the Rocca of San Silvestro, a medieval monumental castle near Livorno, Tuscany (Italy). During the fieldwork, different sets of data were acquired using different parameters for the camera orientation and for the set-up of flight plans. Acquisitions with both nadir and oblique camera orientations were performed, as well as flights with different directions of the flight lines (related to the shape of the object of the survey). The different datasets were then processed in several blocks using Pix4D software, and the results of the processing were analysed and compared. Our aim was to evaluate how much the parameters described above affect the generation of the final products of the survey; the product chosen for this evaluation was the point cloud.
A variant of sparse partial least squares for variable selection and data exploration.
Olson Hunt, Megan J; Weissfeld, Lisa; Boudreau, Robert M; Aizenstein, Howard; Newman, Anne B; Simonsick, Eleanor M; Van Domelen, Dane R; Thomas, Fridtjof; Yaffe, Kristine; Rosano, Caterina
2014-01-01
When data are sparse and/or predictors multicollinear, the current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors nor provide a measure of inference. In response, an approach termed "all-possible" SPLS is proposed, which fits an SPLS model for all tuning parameter values across a set grid. The percentage of time a given predictor is chosen is noted, as well as its average non-zero parameter estimate. Using a "large" number of multicollinear predictors, simulation confirmed that variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors.
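The published SPLS tooling is R-based; to show the "all-possible" bookkeeping in runnable form, the sketch below substitutes lasso, another sparse regression method, over a tuning grid and records each predictor's selection percentage and average non-zero estimate. The data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=100)  # strong + weak signal
Xs = StandardScaler().fit_transform(X)

alphas = np.logspace(-3, 0, 30)                 # the tuning-parameter grid
coefs = np.array([Lasso(alpha=a).fit(Xs, y).coef_ for a in alphas])

selected = coefs != 0
pct_chosen = selected.mean(axis=0)              # fraction of grid points choosing each predictor
# Zero coefficients contribute nothing to the sum, so sum / count-of-nonzero
# is exactly the average non-zero estimate.
avg_nonzero = np.where(selected.any(axis=0),
                       coefs.sum(axis=0) / np.maximum(selected.sum(axis=0), 1),
                       0.0)
for j in np.argsort(pct_chosen)[::-1][:5]:
    print(f"x{j}: chosen {pct_chosen[j]:.0%}, avg nonzero coef {avg_nonzero[j]:+.3f}")
```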
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwasniewski, Bartosz K
The construction of reversible extensions of dynamical systems presented in a previous paper by the author and A.V. Lebedev is enhanced, so that it applies to arbitrary mappings (not necessarily with open range). It is based on calculating the maximal ideal space of C*-algebras that extends endomorphisms to partial automorphisms via partial isometric representations, and involves a new set of 'parameters' (the role of parameters is played by chosen sets or ideals). As model examples, we give a thorough description of reversible extensions of logistic maps and a classification of systems associated with compression of unitaries generating homeomorphisms of the circle. Bibliography: 34 titles.
NASA Astrophysics Data System (ADS)
Sparaciari, Carlo; Paris, Matteo G. A.
2013-01-01
We address measurement schemes where certain observables X_k are chosen at random within a set of nondegenerate isospectral observables and then measured on repeated preparations of a physical system. Each observable has a probability z_k of being measured, with ∑_k z_k = 1, and the statistics of this generalized measurement is described by a positive operator-valued measure. This kind of scheme is referred to as a quantum roulette, since each observable X_k is chosen at random, e.g., according to the fluctuating value of an external parameter. Here we focus on quantum roulettes for qubits involving the measurements of Pauli matrices, and we explicitly evaluate their canonical Naimark extensions, i.e., their implementation as indirect measurements involving an interaction scheme with a probe system. We thus provide a concrete model to realize the roulette without destroying the signal state, which can be measured again after the measurement or can be transmitted. Finally, we apply our results to the description of Stern-Gerlach-like experiments on a two-level system.
Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models
Burr, Tom
2013-01-01
Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example. PMID:24288668
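A minimal ABC rejection sketch, with a Poisson toy simulator standing in for the mitochondrial DNA population model and mean/variance as the user-chosen summary statistics; the prior, tolerance, and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Observed" data from a hidden parameter theta* = 3.0 (a toy stand-in for
# the stochastic population model, which would be a simulator call).
def simulate(theta, n=200):
    return rng.poisson(theta, size=n)

observed = simulate(3.0)

def summaries(x):
    # User-chosen summary statistics: the choice ABC's accuracy hinges on.
    return np.array([x.mean(), x.var()])

s_obs = summaries(observed)

# ABC rejection: sample from the prior, simulate, keep draws whose summaries
# land within epsilon of the observed summaries.
prior_draws = rng.uniform(0.1, 10.0, size=50_000)
eps = 0.3
kept = [th for th in prior_draws
        if np.linalg.norm(summaries(simulate(th)) - s_obs) < eps]
print(f"accepted {len(kept)} draws, posterior mean ~ {np.mean(kept):.2f}")
```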
Selecting summary statistics in approximate Bayesian computation for calibrating stochastic models.
Burr, Tom; Skurikhin, Alexei
2013-01-01
Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the "go-to" option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example.
Computer-Guided Deep Brain Stimulation Programming for Parkinson's Disease.
Heldman, Dustin A; Pulliam, Christopher L; Urrea Mendoza, Enrique; Gartner, Maureen; Giuffrida, Joseph P; Montgomery, Erwin B; Espay, Alberto J; Revilla, Fredy J
2016-02-01
Pilot study to evaluate computer-guided deep brain stimulation (DBS) programming designed to optimize stimulation settings using objective motion sensor-based motor assessments. Seven subjects (five males; 54-71 years) with Parkinson's disease (PD) and recently implanted DBS systems participated in this pilot study. Within two months of lead implantation, each subject returned to the clinic to undergo computer-guided programming and parameter selection. A motion sensor was placed on the index finger of the more affected hand. Software guided a monopolar survey during which monopolar stimulation on each contact was iteratively increased, followed by an automated assessment of tremor and bradykinesia. After completing assessments at each setting, a software algorithm determined stimulation settings designed to minimize symptom severities, side effects, and battery usage. Optimal DBS settings were chosen based on the average severity of motor symptoms measured by the motion sensor. Settings chosen by the software algorithm identified a therapeutic window and improved tremor and bradykinesia by an average of 35.7% compared with baseline in the "off" state (p < 0.01). Motion sensor-based computer-guided DBS programming identified stimulation parameters that significantly improved tremor and bradykinesia with minimal clinician involvement. Automated motion sensor-based mapping is worthy of further investigation and may one day serve to extend programming to populations without access to specialized DBS centers. © 2015 International Neuromodulation Society.
Robust and intelligent bearing estimation
Claassen, John P.
2000-01-01
A method of bearing estimation comprising quadrature digital filtering of event observations, constructing a plurality of observation matrices each centered on a time-frequency interval, determining for each observation matrix a parameter such as degree of polarization, linearity of particle motion, degree of dyadicy, or signal-to-noise ratio, choosing observation matrices most likely to produce a set of best available bearing estimates, and estimating a bearing for each observation matrix of the chosen set.
C*-algebras associated with reversible extensions of logistic maps
NASA Astrophysics Data System (ADS)
Kwaśniewski, Bartosz K.
2012-10-01
The construction of reversible extensions of dynamical systems presented in a previous paper by the author and A.V. Lebedev is enhanced, so that it applies to arbitrary mappings (not necessarily with open range). It is based on calculating the maximal ideal space of C*-algebras that extends endomorphisms to partial automorphisms via partial isometric representations, and involves a new set of 'parameters' (the role of parameters is played by chosen sets or ideals). As model examples, we give a thorough description of reversible extensions of logistic maps and a classification of systems associated with compression of unitaries generating homeomorphisms of the circle. Bibliography: 34 titles.
NASA Astrophysics Data System (ADS)
Mu, Penghua; Pan, Wei; Yan, Lianshan; Luo, Bin; Zou, Xihua
2017-04-01
In this contribution, the effects of two key internal parameters, i.e. the linewidth-enhancement factor (α) and the gain nonlinearity (ε), on time-delay signature (TDS) concealment in two mutually-coupled semiconductor lasers (MCSLs) are numerically investigated. In particular, the influences of α and ε on TDS concealment are compared and discussed systematically by setting different values of the frequency detuning (Δf) and injection strength (η). The results show that the TDS can be better suppressed with higher α or lower ε in the MCSLs. Two sets of desired optical chaos with strongly suppressed TDS can be generated simultaneously over a wide injection parameter plane provided that α and ε are properly chosen, indicating that optimizing TDS suppression by controlling internal parameters can be generalized to any delay-coupled laser system.
NASA Technical Reports Server (NTRS)
Dermanis, A.
1977-01-01
The possibility of recovering earth rotation and network geometry (baseline) parameters is emphasized. The numerical simulated experiments performed are set up in an environment where station coordinates vary with respect to inertial space according to a simulated earth rotation model similar to the actual but unknown rotation of the earth. The basic technique of VLBI and its mathematical model are presented. The chosen parametrization of earth rotation is described and the resulting model is linearized. A simple analysis of the geometry of the observations leads to some useful hints on achieving maximum sensitivity of the observations with respect to the parameters considered. The basic philosophy for the simulation of data and their analysis through standard least squares adjustment techniques is presented. A number of characteristic network designs based on present and candidate station locations are chosen. The results of the simulations for each design are presented together with a summary of the conclusions.
Modulating Wnt Signaling Pathway to Enhance Allograft Integration in Orthopedic Trauma Treatment
2014-04-01
Quantitative output provides an extensive set of data but we have chosen to present the most relevant parameters that are reflected in the following... have been harvested. All harvested samples have been scanned by µCT and evaluated for multiple parameters. All samples have been mechanically... Hydroxyapatite/Tricalcium Phosphate-Coated Implants in a Rat Model. J.Biomed.Mater.Res.B Appl.Biomater. 2005;74(2):712-7. 4. De Ranieri, A., Virdi, A. S
NASA Astrophysics Data System (ADS)
Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard
2015-04-01
Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (<10,000 km²) around the globe was used to set up and evaluate five model parameterization schemes at global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially-uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.
Advanced interactive display formats for terminal area traffic control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.
1995-01-01
The basic design considerations for perspective Air Traffic Control displays are described. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing development of automated viewing parameter setting (AVPS) schemes. The MVPS system is based on indirect manipulation of the viewing parameters. Requests for changes in the viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement has been chosen in order to preserve the correspondence between the new and the old viewing parameter settings, a feature which helps prevent spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system, through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression that he or she is dealing with an actually existing physical system, rather than an abstract computer generated scene. Current, ongoing efforts deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy aimed at identifying the best possible vantage point from which the Air Traffic Control scene can be viewed for a given traffic situation.
Phase transition in the countdown problem
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Luque, Bartolo
2012-07-01
We present a combinatorial decision problem, inspired by the celebrated quiz show called Countdown, that involves the computation of a given target number T from a set of k randomly chosen integers along with a set of arithmetic operations. We find that the probability of winning the game evidences a threshold phenomenon that can be understood in terms of an algorithmic phase transition as a function of the set size k. Numerical simulations show that such probability sharply transitions from zero to one at some critical value of the control parameter, hence separating the algorithm's parameter space into different phases. We also find that the system is maximally efficient close to the critical point. We derive analytical expressions that match the numerical results for finite size and permit us to extrapolate the behavior in the thermodynamic limit.
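A brute-force version of the game's decision problem is compact enough to sketch: recursively combine pairs of numbers with the four arithmetic operations and test whether the target is reachable. This ignores Countdown's positivity and exact-division rules, uses float division (round-off can miss division-only solutions), and is exponential in k, so it is only practical for small sets.

```python
import random

def reachable(numbers):
    """All values computable from `numbers` by combining pairs with + - * /."""
    numbers = tuple(numbers)
    if len(numbers) == 1:
        return {numbers[0]}
    results = set(numbers)
    # Pick any ordered pair, combine it, and recurse on the reduced multiset.
    for i in range(len(numbers)):
        for j in range(len(numbers)):
            if i == j:
                continue
            rest = [numbers[k] for k in range(len(numbers)) if k not in (i, j)]
            a, b = numbers[i], numbers[j]
            combos = {a + b, a - b, a * b} | ({a / b} if b != 0 else set())
            for c in combos:
                results |= reachable(rest + [c])
    return results

random.seed(3)
draw = random.sample(range(1, 10), 4)   # k = 4 randomly chosen integers
target = 24
print(draw, target in reachable(draw))  # does this instance win?
```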
Atomistic modeling of metallic thin films by modified embedded atom method
NASA Astrophysics Data System (ADS)
Hao, Huali; Lau, Denvid
2017-11-01
Molecular dynamics simulation is applied to investigate the deposition process of metallic thin films. Eight metals (titanium, vanadium, iron, cobalt, nickel, copper, tungsten, and gold) are chosen to be deposited on an aluminum substrate. The second nearest-neighbor modified embedded atom method potential is adopted to predict their thermal and mechanical properties. When quantifying the screening parameters of the potential, the error in Young's modulus and the coefficient of thermal expansion between the simulated results and the experimental measurements is less than 15%, demonstrating the reliability of the potential for predicting metallic behaviors related to thermal and mechanical properties. A set of potential parameters governing the interactions between aluminum and the other metals in a binary system is also generated from ab initio calculations. The details of the interfacial structures between the chosen films and the substrate are successfully simulated with the help of these parameters. Our results indicate that the preferred orientation of film growth depends on the film's crystal structure, and that the inter-diffusion at the interface is correlated with the cohesive energy parameter of the potential for the binary system. This finding provides an important basis for further understanding of interfacial science, which contributes to the improvement of the mechanical properties, reliability and durability of films.
Finite Nuclei in the Quark-Meson Coupling Model.
Stone, J R; Guichon, P A M; Reinhard, P G; Thomas, A W
2016-03-04
We report the first use of the effective quark-meson coupling (QMC) energy density functional (EDF), derived from a quark model of hadron structure, to study a broad range of ground state properties of even-even nuclei across the periodic table in the nonrelativistic Hartree-Fock+BCS framework. The novelty of the QMC model is that the nuclear medium effects are treated through modification of the internal structure of the nucleon. The density dependence is microscopically derived and the spin-orbit term arises naturally. The QMC EDF depends on a single set of four adjustable parameters having a clear physics basis. When applied to diverse ground state data the QMC EDF already produces, in its present simple form, overall agreement with experiment of a quality comparable to a representative Skyrme EDF. There exist, however, multiple Skyrme parameter sets, frequently tailored to describe selected nuclear phenomena. The QMC EDF set of fewer parameters, derived in this work, is not open to such variation, the chosen set being applied, without adjustment, to both the properties of finite nuclei and nuclear matter.
Aerodynamic configuration design using response surface methodology analysis
NASA Technical Reports Server (NTRS)
Engelund, Walter C.; Stanley, Douglas O.; Lepsch, Roger A.; Mcmillin, Mark M.; Unal, Resit
1993-01-01
An investigation has been conducted to determine a set of optimal design parameters for a single-stage-to-orbit reentry vehicle. Several configuration geometry parameters which had a large impact on the entry vehicle flying characteristics were selected as design variables: the fuselage fineness ratio, the nose to body length ratio, the nose camber value, the wing planform area scale factor, and the wing location. The optimal geometry parameter values were chosen using a response surface methodology (RSM) technique which allowed for a minimum dry weight configuration design that met a set of aerodynamic performance constraints on the landing speed, and on the subsonic, supersonic, and hypersonic trim and stability levels. The RSM technique utilized, specifically the central composite design method, is presented, along with the general vehicle conceptual design process. Results are presented for an optimized configuration along with several design trade cases.
NASA Astrophysics Data System (ADS)
Vasić, M.; Radojević, Z.
2017-08-01
One of the main disadvantages of the recently reported method for setting up the drying regime based on the theory of moisture migration during drying lies in the fact that it is based on a large number of isothermal experiments. In addition, each isothermal experiment requires the use of different drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper inputs as well as the output of the “black box” to be used in the Box-Wilkinson orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model is the time interval between any two chosen characteristic points on the D_eff-t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is an equation that can predict the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any value of the drying air parameters within the area bounded by the lower and upper limiting values.
Advanced interactive display formats for terminal area traffic control
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.
1996-01-01
This report describes the basic design considerations for perspective air traffic control displays. A software framework has been developed for manual viewing parameter setting (MVPS) in preparation for continued, ongoing development of automated viewing parameter setting (AVPS) schemes. Two distinct modes of MVPS operation are considered, both of which utilize manipulation pointers embedded in the three-dimensional scene: (1) direct manipulation of the viewing parameters; in this mode the manipulation pointers act like the control-input device through which the viewing parameter changes are made. Some of the parameters are rate-controlled, and others are position-controlled. This mode is intended for making fast, iterative small changes in the parameters. (2) Indirect manipulation of the viewing parameters; this mode is intended primarily for introducing large, predetermined changes in the parameters. Requests for changes in the viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement has been chosen in order to preserve the correspondence between the spatial layouts of the new and the old viewing parameter settings, a feature which helps prevent spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system, through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression that he or she is dealing with an actually existing physical system, rather than an abstract computer-generated scene. The proposed, continued research efforts will deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy aimed at identifying the best possible vantage point from which the air traffic control scene can be viewed for a given traffic situation. They determine whether a change in viewing parameter setting is required and determine the dynamic path along which the change to the new viewing parameter setting should take place.
Interactive Database of Pulsar Flux Density Measurements
NASA Astrophysics Data System (ADS)
Koralewska, O.; Krzeszowski, K.; Kijak, J.; Lewandowski, W.
2012-12-01
The number of astronomical observations is steadily growing, giving rise to the need to catalogue the obtained results. There are many databases created to store different types of data and serve a variety of purposes, e.g., databases providing basic data for astronomical objects (SIMBAD Astronomical Database), databases devoted to one type of astronomical object (ATNF Pulsar Database) or to a set of values of a specific parameter (Lorimer 1995, a database of flux density measurements for 280 pulsars at frequencies up to 1606 MHz), etc. We found that creating an online database of pulsar flux measurements, provided with facilities for plotting diagrams and histograms, calculating mean values for a chosen set of data, filtering parameter values, and adding new measurements by registered users, could be useful in further studies of pulsar spectra.
IPO: a tool for automated optimization of XCMS parameters.
Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph
2015-04-16
Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. We implemented the software package IPO ('Isotopologue Parameter Optimization') which is fast and free of labeling steps, and applicable to data from different kinds of samples, data from different methods of liquid chromatography coupled to high-resolution mass spectrometry, and data from different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable ¹³C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are achieved by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and test set. IPO resulted in an increase of reliable groups (146% - 361%), a decrease of non-reliable groups (3% - 8%) and a decrease of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high-resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. We were also able to show the potential of IPO to increase the reliability of metabolomics data. The source code is implemented in R, tested on Linux and Windows and it is freely available for download at https://github.com/glibiseller/IPO . The training sets and test sets can be downloaded from https://health.joanneum.at/IPO .
NASA Astrophysics Data System (ADS)
Zhao, Wenyu; Zhang, Haiyi; Ji, Yuefeng; Xu, Daxiong
2004-05-01
Based on the proposed polarization mode dispersion (PMD) compensation simulation model and a statistical analysis method (Monte Carlo), the initialization of the critical parameters of two typical optical-domain PMD compensators, one with fixed compensation differential group delay (DGD) and one with variable compensation DGD, is investigated in detail by numerical methods. In the simulation, the line PMD values are chosen as 3 ps, 4 ps and 5 ps, and 1000 run samples are used in order to achieve a statistical evaluation of the PMD-compensated systems. The simulation results show that for systems whose PMD value is known in advance, the value of the fixed-DGD compensator should be set to 1.5-1.6 times the line PMD value in order to reach the optimum performance. For the second kind of PMD compensator, the lower limit of the DGD range should be 1.5-1.6 times the line PMD, provided the upper limit is set to 3 times the line PMD, if no effective measures are taken to resolve the problem of local minima in the optimization process. Another conclusion that can be drawn from the simulation is that, although the second PMD compensator offers higher PMD compensation performance, it requires more feedback loops to find the optimum DGD value in a real PMD compensation implementation, which places greater demands on the adjustable DGD device: not only a wider adjustable range, but also a faster adjusting speed for real-time PMD equalization.
NASA Astrophysics Data System (ADS)
Jeziorska, Justyna; Niedzielski, Tomasz
2018-03-01
River basins located in the Central Sudetes (SW Poland) demonstrate a high vulnerability to flooding. Four mountainous basins and the corresponding outlets were chosen for modeling streamflow dynamics using TOPMODEL, a physically based semi-distributed topohydrological model. The model was calibrated using the Monte Carlo approach, with discharge, rainfall, and evapotranspiration data used to estimate the parameters. The overall performance of the model was judged by interpreting the efficiency measures. TOPMODEL was able to reproduce the main pattern of the hydrograph with acceptable accuracy for two of the investigated catchments. However, it failed to simulate the hydrological response in the remaining two catchments. The best-performing data set achieved a Nash-Sutcliffe efficiency of 0.78. This data set was chosen for a detailed analysis aiming to estimate the optimal time span of input data for which TOPMODEL performs best. The best fit was attained for the half-year time span. The model was validated and found to reveal good skill.
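The calibration loop (sample parameter sets, score each against observed discharge, keep the best) can be sketched with a one-parameter linear reservoir standing in for TOPMODEL; the Nash-Sutcliffe efficiency below is the standard formula, everything else is synthetic.

```python
import numpy as np

def nse(sim, obs):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def linear_reservoir(rain, k, s0=0.0):
    # One-parameter stand-in for the rainfall-runoff model: dS/dt = P - k*S.
    s, q = s0, []
    for p in rain:
        s += p - k * s
        q.append(k * s)
    return np.array(q)

rng = np.random.default_rng(7)
rain = rng.exponential(2.0, 365)
obs = linear_reservoir(rain, 0.3) + rng.normal(0, 0.1, 365)  # synthetic "observed" Q

# Monte Carlo calibration: sample parameter sets, keep the best-scoring one.
ks = rng.uniform(0.01, 0.99, 5000)
scores = [nse(linear_reservoir(rain, k), obs) for k in ks]
best = ks[int(np.argmax(scores))]
print(f"best k = {best:.3f}, NSE = {max(scores):.3f}")
```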
Forecasting solar proton event with artificial neural network
NASA Astrophysics Data System (ADS)
Gong, J.; Wang, J.; Xue, B.; Liu, S.; Zou, Z.
Solar proton events (SPEs), relatively rare but common around solar maximum, can create hazardous conditions for spacecraft. An SPE always accompanies a flare, which is then called a proton flare. To produce such an eruptive event, a large amount of energy must be accumulated within the active region. We can therefore investigate the character of the active region and its evolution trend, together with other indicators such as centimetre radio emission and the soft X-ray background, to evaluate the potential for an SPE in a chosen area. To capture the precursors of SPEs hidden in the observed active-region parameters, we employed AI technology. A fully connected neural network was chosen for this task. After constructing the network, we trained it with 13 parameters that characterize active regions and their evolution trend. More than 80 sets of event parameters were used to teach the neural network to identify whether an active region had SPE potential. We then tested the model on a database of SPE and non-SPE cases that was not used to train the neural network. The results showed that 75% of the model's predictions were correct.
The Mira-Titan Universe. II. Matter Power Spectrum Emulation
NASA Astrophysics Data System (ADS)
Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana; Upadhye, Amol; Bingham, Derek; Habib, Salman; Higdon, David; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas
2017-09-01
We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc⁻¹ and redshift z ≤ 2. In addition to covering the standard set of ΛCDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with 16 medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage over a wide k-range; the data set generated as part of this project is more than 1.2 Pbytes. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-up results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches. The new emulator code is publicly available.
The Mira-Titan Universe. II. Matter Power Spectrum Emulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana
We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc⁻¹ and redshift z ≤ 2. In addition to covering the standard set of ΛCDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide and large parameter space. For each model, we have performed a high-resolution simulation, augmented with 16 medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage over a wide k-range; the data set generated as part of this project is more than 1.2 Pbytes. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-up results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches.
Initial data for the relativistic gravitational N-body problem
NASA Astrophysics Data System (ADS)
Chruściel, Piotr T.; Corvino, Justin; Isenberg, James
2010-11-01
In general relativity, an initial data set for an isolated gravitational system takes the form of a solution of the Einstein constraint equations which is asymptotically Euclidean on a specified end. Given a collection of N such data sets with a subregion of interest (bounded away from the specified end) chosen in each, we show that there exists a family of new initial data sets, each of which contains exact copies of each of the N chosen subregions, positioned in a chosen array in a single asymptotic end. These composite initial data sets model isolated, relativistic gravitational systems containing N chosen bodies in specified initial configurations.
Analysis of aerobic granular sludge formation based on grey system theory.
Zhang, Cuiya; Zhang, Hanmin
2013-04-01
Based on grey entropy analysis, the relational grade of operational parameters with the granulation indicators of aerobic granular sludge was studied. The former consisted of settling time (ST), aeration time (AT), superficial gas velocity (SGV), height/diameter (H/D) ratio and organic loading rate (OLR); the latter included the sludge volume index (SVI) and set-up time. The calculated results showed that for SVI and set-up time, the influence orders and the corresponding grey entropy relational grades (GERG) were: SGV (0.9935) > AT (0.9921) > OLR (0.9894) > ST (0.9876) > H/D (0.9857) and SGV (0.9928) > H/D (0.9914) > AT (0.9909) > OLR (0.9897) > ST (0.9878). The chosen parameters were all key impact factors, as each GERG was larger than 0.98. SGV played an important role in improving the SVI transformation and facilitating the set-up process. The influence of ST on SVI and set-up time was relatively low due to its dual functions. SVI transformation and rapid set-up demanded different optimal H/D ratio ranges (10-20 and 16-20). Meanwhile, different functions could be obtained by adjusting the ranges of certain factors.
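As a runnable illustration of the underlying machinery, the sketch below computes classical (Deng) grey relational coefficients and their mean grade between a reference indicator and candidate operational parameters; the paper aggregates via grey entropy instead of the mean, and the monitoring data here are invented.

```python
import numpy as np

def grey_relational_grade(reference, comparison, rho=0.5):
    """Deng's grey relational grade between a reference sequence and one
    comparison sequence (both first normalized to [0, 1])."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())
    r, c = norm(reference), norm(comparison)
    delta = np.abs(r - c)
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean()   # the paper aggregates via grey entropy instead

# Made-up monitoring data: SVI as reference, SGV and ST as comparisons.
svi = [120, 95, 80, 62, 55, 50]
sgv = [1.2, 1.6, 2.0, 2.4, 2.6, 2.8]
st  = [20, 15, 12, 10, 8, 5]
for name, series in [("SGV", sgv), ("ST", st)]:
    print(name, round(grey_relational_grade(svi, series), 4))
```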
Code of Federal Regulations, 2011 CFR
2011-07-01
How do I establish a valid parameter range if I have chosen to continuously monitor parameters? 40 CFR 60.4410, Protection of Environment, Environmental Protection Agency (Continued), Air Programs (Continued), Standards of Performance for New Stationary Sources, Standards of...
An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves
Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing
2014-01-01
Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes the parameters of Julia sets to generate a random sequence as the initial keys and obtains the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modulo arithmetic and a diffusion operation. This method needs only a few parameters for the key generation, which greatly reduces the storage space. Moreover, because of the properties of Julia sets, such as infiniteness and chaotic characteristics, the keys are highly sensitive even to a tiny perturbation. The experimental results indicate that the algorithm has a large key space, good statistical properties, high key sensitivity, and effective resistance to the chosen-plaintext attack. PMID:24404181
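A stripped-down sketch of the keystream idea: iterate the Julia-set map z -> z² + c to derive bytes whose values are hypersensitive to (c, z0), then encrypt by modulo addition. The Hilbert-curve scrambling stage of the paper is omitted, and the re-seeding rule for escaping orbits is an ad hoc assumption.

```python
import numpy as np

def julia_keystream(c, z0, n):
    """Derive a byte keystream by iterating z -> z^2 + c (a Julia-set map).
    The map's sensitivity to (c, z0) is what gives the keys their fragility."""
    z, out = z0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        z = z * z + c
        if abs(z) > 2.0:              # ad hoc re-seed to keep the orbit bounded
            z = z / abs(z) ** 2
        # Take digits deep in the fraction so neighbouring steps decorrelate.
        out[i] = int(abs(z.real) * 1e6) % 256
    return out

image = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy "image"
key = julia_keystream(c=-0.8 + 0.156j, z0=0.1 + 0.2j, n=image.size)

cipher = (image.ravel().astype(np.uint16) + key) % 256   # modulo "encryption"
plain = (cipher - key) % 256                             # exact inverse mod 256
assert np.array_equal(plain.astype(np.uint8), image.ravel())
```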
Cappelli Fontanive, Fernando; Souza-Silva, Érica Aparecida; Macedo da Silva, Juliana; Bastos Caramão, Elina; Alcaraz Zini, Claudia
2016-08-26
Diesel and naphtha samples were analyzed using ionic liquid (IL) columns to evaluate the best column set for the analysis of organic sulfur compounds (OSC) and nitrogen (N)-containing compounds with comprehensive two-dimensional gas chromatography coupled to a time-of-flight mass spectrometry detector (GC×GC/TOFMS). Employing a series of stationary phase sets, namely DB-5MS/DB-17, DB-17/DB-5MS, DB-5MS/IL-59, and IL-59/DB-5MS, the following parameters were systematically evaluated: number of tentatively identified OSC, 2D chromatographic space occupation, number of polyaromatic hydrocarbon (PAH) and OSC co-elutions, and percentage of asymmetric peaks. DB-5MS/IL-59 was chosen for OSC analysis, while IL-59/DB-5MS was chosen for nitrogen compounds, as each stationary phase set provided the best chromatographic efficiency for these two classes of compounds, respectively. Most compounds were tentatively identified by Lee and Van den Dool and Kratz retention indexes, and spectral matching to a library. Whenever available, compounds were also positively identified via injection of authentic standards. Copyright © 2016 Elsevier B.V. All rights reserved.
Learning the manifold of quality ultrasound acquisition.
El-Zehiry, Noha; Yan, Michelle; Good, Sara; Fang, Tong; Zhou, S Kevin; Grady, Leo
2013-01-01
Ultrasound acquisition is a challenging task that requires simultaneous adjustment of several acquisition parameters (the depth, the focus, the frequency and its operation mode). If the acquisition parameters are not properly chosen, the resulting image will have poor quality and will degrade the patient diagnosis and treatment workflow. Several hardware-based systems for autotuning the acquisition parameters have been previously proposed, but these solutions were largely abandoned because they failed to properly account for tissue inhomogeneity and other patient-specific characteristics. Consequently, in routine practice the clinician either uses population-based parameter presets or manually adjusts the acquisition parameters for each patient during the scan. In this paper, we revisit the problem of autotuning the acquisition parameters by taking a completely novel approach and producing a solution based on image analytics. Our solution is inspired by the autofocus capability of conventional digital cameras, but is significantly more challenging because the number of acquisition parameters is large and the determination of "good quality" images is more difficult to assess. Surprisingly, we show that the set of acquisition parameters that produce images favored by clinicians comprises a 1D manifold, allowing for a real-time optimization to maximize image quality. We demonstrate our method for acquisition parameter autotuning on several live patients, showing that our system can start with a poor initial set of parameters and automatically optimize the parameters to produce high-quality images.
NASA Astrophysics Data System (ADS)
Cisneros, Sophia
2013-04-01
We present a new, heuristic, two-parameter model for predicting the rotation curves of disc galaxies. The model is tested on 22 randomly chosen galaxies, represented in 35 data sets. This Lorentz Convolution [LC] model is derived from a non-linear, relativistic solution of a Kerr-type wave equation, where small changes in the photons' frequencies, resulting from the curved spacetime, are convolved into a sequence of Lorentz transformations. The LC model is parametrized with only the diffuse, luminous stellar and gaseous masses reported with each data set of observations used. The LC model predicts observed rotation curves across a wide range of disc galaxies. The LC model was constructed to occupy the same place in the explanation of rotation curves that Dark Matter does, so that a simple investigation of the relation between luminous and dark matter might be made via a parameter (a). We find the parameter (a) to demonstrate interesting structure. We compare the new model's predictions to both NFW model and MOND fits when available.
Benchmarking Ada tasking on tightly coupled multiprocessor architectures
NASA Technical Reports Server (NTRS)
Collard, Philippe; Goforth, Andre; Marquardt, Matthew
1989-01-01
The development of benchmarks and performance measures for parallel Ada tasking is reported with emphasis on the macroscopic behavior of the benchmark across a set of load parameters. The application chosen for the study was the NASREM model for telerobot control, relevant to many NASA missions. The results of the study demonstrate the potential of parallel Ada in accomplishing the task of developing a control system for a system such as the Flight Telerobotic Servicer using the NASREM framework.
NASA Astrophysics Data System (ADS)
Şeker, Cevdet; Hüseyin Özaytekin, Hasan; Negiş, Hamza; Gümüş, İlknur; Dedeoğlu, Mert; Atmaca, Emel; Karaca, Ümmühan
2017-05-01
Sustainable agriculture largely depends on soil quality. The evaluation of agricultural soil quality is essential for economic success and environmental stability in rapidly developing regions. In this context, a wide variety of methods using vastly different indicators are currently used to evaluate soil quality. This study was conducted in one of the most important irrigated agricultural areas of Konya in central Anatolia, Turkey, to analyze the soil quality indicators of Çumra County using an indicator selection method based on a minimum data set drawn from a total of 38 soil parameters. We determined the minimum data set with principal component analysis to assess soil quality in the study area, and soil quality was evaluated on the basis of a scoring function. From the broad range of soil properties analyzed, the following parameters were chosen: field capacity, bulk density, aggregate stability, and permanent wilting point (from physical soil properties); electrical conductivity, Mn, total nitrogen, available phosphorus, pH, and NO3-N (from chemical soil properties); and urease enzyme activity, root health value, organic carbon, respiration, and potentially mineralized nitrogen (from biological properties). According to the results, the chosen properties were the most sensitive indicators of soil quality and can be used for evaluating and monitoring soil quality at a regional scale.
NASA Technical Reports Server (NTRS)
Wallace, Terryl A.; Bey, Kim S.; Taminger, Karen M. B.; Hafley, Robert A.
2004-01-01
A study was conducted to evaluate the relative significance of input parameters on Ti-6Al-4V deposits produced by an electron beam free form fabrication process under development at the NASA Langley Research Center. Five input parameters were chosen (beam voltage, beam current, translation speed, wire feed rate, and beam focus), and a design of experiments (DOE) approach was used to develop a set of 16 experiments to evaluate the relative importance of these parameters on the resulting deposits. Both single-bead and multi-bead stacks were fabricated using the 16 combinations, and the resulting heights and widths of the stack deposits were measured. The resulting microstructures were also characterized to determine the impact of these parameters on the size of the melt pool and heat affected zone. The relative importance of each input parameter on the height and width of the multi-bead stacks will be discussed.
Kinematics of our Galaxy from the PMA and TGAS catalogues
NASA Astrophysics Data System (ADS)
Velichko, Anna B.; Akhmetov, Volodymyr S.; Fedorov, Peter N.
2018-04-01
We derive and compare kinematic parameters of the Galaxy using the PMA and Gaia TGAS data. Two methods are used in the calculations: evaluation of the Ogorodnikov-Milne model (OMM) parameters by the least squares method (LSM) and a decomposition on a set of vector spherical harmonics (VSH). We trace the dependence on distance of the derived parameters, including the Oort constants A and B and the rotational velocity of the Galaxy V_rot at the Solar distance, for the common sample of stars of mixed spectral composition of the PMA and TGAS catalogues. The distances were obtained from the TGAS parallaxes or from reduced proper motions for fainter stars. The A, B and V_rot parameters derived from the proper motions of both catalogues show identical behaviour, but the values are systematically shifted by about 0.5 mas/yr. The Oort B parameter derived from the PMA sample of red giants shows a gradual decrease with increasing distance, while Oort A has a minimum at about 2 kpc and then gradually increases. As for the models chosen for the calculations, we first confirm the conclusions of other authors about the existence of extra-model harmonics in the stellar velocity field. Secondly, not all parameters of the OMM are statistically significant, and the set of parameters depends on the stellar sample used.
Effect of electric potential and current on mandibular linear measurements in cone beam CT.
Panmekiate, S; Apinhasmit, W; Petersson, A
2012-10-01
The purpose of this study was to compare mandibular linear distances measured from cone beam CT (CBCT) images produced by different radiographic parameter settings (peak kilovoltage and milliampere value). 20 cadaver hemimandibles with edentulous ridges posterior to the mental foramen were embedded in clear resin blocks and scanned by a CBCT machine (CB MercuRay(TM); Hitachi Medico Technology Corp., Chiba-ken, Japan). The radiographic parameters comprised four peak kilovoltage settings (60 kVp, 80 kVp, 100 kVp and 120 kVp) and two milliampere settings (10 mA and 15 mA). A 102.4 mm field of view was chosen. Each hemimandible was scanned 8 times with the 8 different parameter combinations, resulting in 160 CBCT data sets. On the cross-sectional images, six linear distances were measured. To assess the intraobserver variation, the 160 data sets were remeasured after 2 weeks. The measurement precision was calculated using Dahlberg's formula. With the same peak kilovoltage, the measurements yielded by different milliampere values were compared using the paired t-test. With the same milliampere value, the measurements yielded by different peak kilovoltage settings were compared using analysis of variance. Differences were considered significant at p < 0.05. Measurement precision varied from 0.03 mm to 0.28 mm. No significant differences in the distances were found among the different radiographic parameter combinations. Based upon the specific machine in the present study, low peak kilovoltage and milliampere values might be used for linear measurements in the posterior mandible.
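Dahlberg's formula used for the precision estimate is simple enough to state in a few lines; the sketch below, with illustrative distances, computes d = sqrt(sum(d_i^2) / 2n) between the two measurement sessions.

```python
# Minimal sketch of Dahlberg's formula for measurement precision between two
# measurement sessions: d = sqrt(sum((x1 - x2)^2) / (2n)). Values illustrative.
import numpy as np

def dahlberg(first_session, second_session):
    d = np.asarray(first_session) - np.asarray(second_session)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

# e.g. repeated linear distances (mm) from the same cross-sectional images
print(dahlberg([12.1, 8.4, 15.0], [12.2, 8.3, 15.1]))  # ~0.07 mm
```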
Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel
2013-12-01
This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. The procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for the model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. The approach is tested on data obtained for porous bituminous concrete, using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
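A hedged sketch of the inversion step follows: scipy's dual_annealing stands in for the custom simulated annealing, and a toy forward model stands in for the Zwikker and Kosten impedance model, so only the structure of the procedure is illustrative.

```python
# Sketch of step two (parameter inversion by simulated annealing): fit
# equivalent-fluid parameters by minimizing the misfit between measured and
# modeled surface impedance. The forward model below is a toy stand-in.
import numpy as np
from scipy.optimize import dual_annealing

freqs = np.linspace(200.0, 2000.0, 50)

def forward(params, f):
    # Toy stand-in for an equivalent-fluid surface impedance model; the real
    # procedure would use the Zwikker and Kosten model here.
    phi, alpha, sigma = params          # porosity, tortuosity, flow resistivity
    return alpha / phi + sigma / (1j * 2.0 * np.pi * f * 1.0e3)

z_measured = forward((0.2, 1.8, 2.0e4), freqs)   # synthetic "measurement"

def misfit(params):
    return np.sum(np.abs(forward(params, freqs) - z_measured) ** 2)

bounds = [(0.05, 0.5), (1.0, 3.0), (1.0e3, 1.0e5)]
result = dual_annealing(misfit, bounds, seed=1)
# misfit should be ~0; in this toy model phi and alpha enter only via their ratio
print(result.x, misfit(result.x))
```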
Modeling laser beam diffraction and propagation by the mode-expansion method.
Snyder, James J
2007-08-01
In the mode-expansion method for modeling propagation of a diffracted beam, the beam at the aperture can be expanded as a weighted set of orthogonal modes. The parameters of the expansion modes are chosen to maximize the weighting coefficient of the lowest-order mode. As the beam propagates, its field distribution can be reconstructed from the set of weighting coefficients and the Gouy phase of the lowest-order mode. We have developed a simple procedure to implement the mode-expansion method for propagation through an arbitrary ABCD matrix, and we have demonstrated that it is accurate in comparison with direct calculations of diffraction integrals and much faster.
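The lowest-order expansion mode propagates through an arbitrary ABCD system via the complex beam parameter, q2 = (A*q1 + B)/(C*q1 + D); a minimal sketch with illustrative values follows.

```python
# Minimal sketch of Gaussian-beam propagation through an ABCD system using the
# complex beam parameter q: q2 = (A*q1 + B) / (C*q1 + D). Values illustrative.
import numpy as np

def propagate_q(q1, abcd):
    (A, B), (C, D) = abcd
    return (A * q1 + B) / (C * q1 + D)

wavelength = 633e-9                      # m
w0 = 1e-3                                # waist radius, m
q0 = 1j * np.pi * w0 ** 2 / wavelength   # q at the waist (purely imaginary)

free_space = ((1.0, 0.5), (0.0, 1.0))    # 0.5 m of free propagation
q1 = propagate_q(q0, free_space)
w1 = np.sqrt(-wavelength / (np.pi * np.imag(1.0 / q1)))  # beam radius at 0.5 m
print(w1)
```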
Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso
2016-11-01
This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red-green-blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%-50% and 70%-40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaic. The results indicate that a balance between all the proposed parameters is useful for optimizing mission planning and image processing, altitude above ground level (AGL) being the main parameter because of its influence on root mean square error (RMSE).
The Mira-Titan Universe. II. Matter Power Spectrum Emulation
Lawrence, Earl; Heitmann, Katrin; Kwan, Juliana; ...
2017-09-20
We introduce a new cosmic emulator for the matter power spectrum covering eight cosmological parameters. Targeted at optical surveys, the emulator provides accurate predictions out to a wavenumber k ~ 5 Mpc^-1 and redshift z ≤ 2. Besides covering the standard set of CDM parameters, massive neutrinos and a dynamical dark energy equation of state are included. The emulator is built on a sample set of 36 cosmological models, carefully chosen to provide accurate predictions over the wide parameter space. For each model, we have performed a high-resolution simulation, augmented with sixteen medium-resolution simulations and TimeRG perturbation theory results to provide accurate coverage of a wide k-range; the dataset generated as part of this project is more than 1.2 Pbyte. With the current set of simulated models, we achieve an accuracy of approximately 4%. Because the sampling approach used here has established convergence and error-control properties, follow-on results with more than a hundred cosmological models will soon achieve ~1% accuracy. We compare our approach with other prediction schemes that are based on halo model ideas and remapping approaches. The new emulator code is publicly available.
Fitting Nonlinear Curves by use of Optimization Techniques
NASA Technical Reports Server (NTRS)
Hill, Scott A.
2005-01-01
MULTIVAR is a FORTRAN 77 computer program that fits one of the members of a set of six multivariable mathematical models (five of which are nonlinear) to a multivariable set of data. The inputs to MULTIVAR include the data for the independent and dependent variables plus the user's choice of one of the models, one of the three optimization engines, and convergence criteria. By use of the chosen optimization engine, MULTIVAR finds values for the parameters of the chosen model so as to minimize the sum of squares of the residuals. One of the optimization engines implements a routine, developed in 1982, that utilizes the Broyden-Fletcher-Goldfarb-Shanno (BFGS) variable-metric method for unconstrained minimization in conjunction with a one-dimensional search technique that finds the minimum of an unconstrained function by polynomial interpolation and extrapolation without first finding bounds on the solution. The second optimization engine is a faster and more robust commercially available code, denoted Design Optimization Tool, that also uses the BFGS method. The third optimization engine is a robust and relatively fast routine that implements the Levenberg-Marquardt algorithm.
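A modern sketch of MULTIVAR's core idea, using scipy's BFGS implementation rather than the 1982 routine, with an illustrative two-variable nonlinear model (not one of the program's six).

```python
# Hedged sketch: fit a nonlinear multivariable model by minimizing the sum of
# squared residuals with a BFGS variable-metric routine (scipy's, for brevity).
import numpy as np
from scipy.optimize import minimize

def model(x, p):                      # illustrative two-variable model
    a, b, c = p
    return a * np.exp(b * x[:, 0]) + c * x[:, 1]

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, (40, 2))
y = model(X, (2.0, 1.5, -0.7)) + rng.normal(0.0, 0.01, 40)

sse = lambda p: np.sum((model(X, p) - y) ** 2)   # sum of squared residuals
fit = minimize(sse, x0=(1.0, 1.0, 0.0), method="BFGS")
print(fit.x)                          # ~ (2.0, 1.5, -0.7)
```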
Butson, Christopher R.; Tamm, Georg; Jain, Sanket; Fogal, Thomas; Krüger, Jens
2012-01-01
In recent years there has been significant growth in the use of patient-specific models to predict the effects of neuromodulation therapies such as deep brain stimulation (DBS). However, translating these models from a research environment to the everyday clinical workflow has been a challenge, primarily due to the complexity of the models and the expertise required in specialized visualization software. In this paper, we deploy the interactive visualization system ImageVis3D Mobile, which has been designed for mobile computing devices such as the iPhone or iPad, in an evaluation environment to visualize models of Parkinson's disease patients who received DBS therapy. Selection of DBS settings is a significant clinical challenge that requires repeated revisions to achieve optimal therapeutic response, and is often performed without any visual representation of the stimulation system in the patient. We used ImageVis3D Mobile to provide models to movement disorders clinicians and asked them to use the software to determine: 1) which of the four DBS electrode contacts they would select for therapy; and 2) what stimulation settings they would choose. We compared the stimulation protocol chosen from the software versus the stimulation protocol that was chosen via clinical practice (independently of the study). Lastly, we compared the amount of time required to reach these settings using the software versus the time required through standard practice. We found that the stimulation settings chosen using ImageVis3D Mobile were similar to those used in standard of care, but were selected in drastically less time. We show how our visualization system, available directly at the point of care on a device familiar to the clinician, can be used to guide clinical decision making for selection of DBS settings. In our view, the positive impact of the system could also translate to areas other than DBS.
He, Wangli; Qian, Feng; Han, Qing-Long; Cao, Jinde
2012-10-01
This paper investigates the problem of master-slave synchronization of two delayed Lur'e systems in the presence of parameter mismatches. First, by analyzing the corresponding synchronization error system, synchronization with an error level, which is referred to as quasi-synchronization, is established. Some delay-dependent quasi-synchronization criteria are derived. An estimation of the synchronization error bound is given, and an explicit expression of error levels is obtained. Second, sufficient conditions on the existence of feedback controllers under a predetermined error level are provided. The controller gains are obtained by solving a set of linear matrix inequalities. Finally, a delayed Chua's circuit is chosen to illustrate the effectiveness of the derived results.
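To illustrate how LMI-based conditions of this type are checked numerically, here is a much simpler Lyapunov-type LMI solved with cvxpy; it is a stand-in for illustration, not the paper's delay-dependent criteria.

```python
# Illustrative sketch of solving a simple Lyapunov-type LMI with cvxpy: find
# P > 0 with A^T P + P A < 0 for a stable test matrix A. The paper's actual
# delay-dependent LMIs have more blocks but are checked the same way.
import cvxpy as cp
import numpy as np

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),
               A.T @ P + P @ A << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status, P.value)
```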
An Interoperability Consideration in Selecting Domain Parameters for Elliptic Curve Cryptography
NASA Technical Reports Server (NTRS)
Ivancic, Will (Technical Monitor); Eddy, Wesley M.
2005-01-01
Elliptic curve cryptography (ECC) will be an important technology for electronic privacy and authentication in the near future. There are many published specifications for elliptic curve cryptosystems, most of which contain detailed descriptions of the process for the selection of domain parameters. Selecting strong domain parameters ensures that the cryptosystem is robust to attacks. Due to a limitation in several published algorithms for doubling points on elliptic curves, some ECC implementations may produce incorrect, inconsistent, and incompatible results if domain parameters are not carefully chosen under a criterion that we describe. Few documents specify the addition or doubling of points in such a manner as to avoid this problematic situation. The safety criterion we present is not listed in any ECC specification we are aware of, although several other guidelines for domain selection are discussed in the literature. We provide a simple example of how a set of domain parameters not meeting this criterion can produce catastrophic results, and outline a simple means of testing curve parameters for interoperable safety over doubling.
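The problematic doubling cases can be made concrete: the affine doubling sketch below handles the point at infinity and the y = 0 (order-2) special cases explicitly, whose omission is the kind of gap discussed above; the toy curve is illustrative, not a standard one.

```python
# Minimal affine point-doubling sketch over a prime field, with the special
# cases written out. Curve: y^2 = x^3 + a*x + b (mod p).
def double_point(P, a, p):
    if P is None:                     # point at infinity: 2*O = O
        return None
    x, y = P
    if y % p == 0:                    # order-2 point: tangent is vertical
        return None
    lam = (3 * x * x + a) * pow(2 * y, -1, p) % p
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return (x3, y3)

# tiny toy curve y^2 = x^3 + 2x + 3 over F_97 (illustrative, not standardized)
print(double_point((3, 6), 2, 97))    # -> (80, 10), which lies on the curve
```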
A comparative study of electrochemical machining process parameters by using GA and Taguchi method
NASA Astrophysics Data System (ADS)
Soni, S. K.; Thomas, B.
2017-11-01
In electrochemical machining, the quality of the machined surface depends strongly on the selection of optimal parameter settings. This work deals with the application of the Taguchi method and a genetic algorithm in MATLAB to maximize the metal removal rate and minimize the surface roughness and overcut. A comparative study is presented for the drilling of LM6 Al/B4C composites, assessing the impact of machining process parameters such as electrolyte concentration (g/l), machining voltage (V) and frequency (Hz) on the response parameters (surface roughness, material removal rate and overcut). A Taguchi L27 orthogonal array was chosen in Minitab 17 for the investigation of the experimental results, and multi-objective optimization by genetic algorithm was carried out in MATLAB. Finally, the optimized results obtained from the Taguchi method and the genetic algorithm are compared.
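The Taguchi signal-to-noise ratios typically used with such an array can be sketched directly; response values below are illustrative only.

```python
# Hedged sketch of the Taguchi S/N ratios commonly paired with an L27 array:
# larger-is-better for material removal rate, smaller-is-better for roughness
# and overcut. Replicate values are illustrative, not from the paper.
import numpy as np

def sn_larger_is_better(y):            # e.g. material removal rate
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_is_better(y):           # e.g. surface roughness, overcut
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

print(sn_larger_is_better([0.42, 0.45, 0.44]))   # replicate MRR values, g/min
print(sn_smaller_is_better([2.1, 2.3, 2.2]))     # replicate Ra values, microns
```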
Teodoro, Tiago Quevedo; Visscher, Lucas; da Silva, Albérico Borges Ferreira; Haiduke, Roberto Luiz Andrade
2017-03-14
The f-block elements are addressed in this third part of a series of prolapse-free basis sets of quadruple-ζ quality (RPF-4Z). Relativistic adapted Gaussian basis sets (RAGBSs) are used as primitive sets of functions while correlating/polarization (C/P) functions are chosen by analyzing energy lowerings upon basis set increments in Dirac-Coulomb multireference configuration interaction calculations with single and double excitations of the valence spinors. These function exponents are obtained by applying the RAGBS parameters in a polynomial expression. Moreover, through the choice of C/P characteristic exponents from functions of lower angular momentum spaces, a reduction in the computational demand is attained in relativistic calculations based on the kinetic balance condition. The present study thus complements the RPF-4Z sets for the whole periodic table (Z ≤ 118). The sets are available as Supporting Information and can also be found at http://basis-sets.iqsc.usp.br .
Anti AIDS drug design with the help of neural networks
NASA Astrophysics Data System (ADS)
Tetko, I. V.; Tanchuk, V. Yu.; Luik, A. I.
1995-04-01
Artificial neural networks were used to analyze and predict human immunodeficiency virus type 1 reverse transcriptase inhibitors. The training and control sets included 44 molecules (most of them well-known substances such as AZT, TIBO, dde, etc.). The biological activities of the molecules were taken from the literature and rated in two classes, active and inactive compounds, according to their values. We used topological indices as molecular parameters. The four most informative parameters (out of 46) were chosen using cluster analysis and an original input-parameter estimation procedure, and were used to predict the activities of both control and new (synthesized in our institute) molecules. We applied a pruning network algorithm and network ensembles to obtain the final classifier and avoid chance correlation. An increase in the neural networks' generalization on the control set was observed when using the aforementioned methods. The prognosis for the new molecules revealed one molecule as possibly active; this was confirmed by further biological tests. The compound was as active as AZT and an order of magnitude less toxic. The active compound is currently being evaluated in preclinical trials as a possible drug for anti-AIDS therapy.
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
Improving information retrieval in functional analysis.
Rodriguez, Juan C; González, Germán A; Fresno, Cristóbal; Llera, Andrea S; Fernández, Elmer A
2016-12-01
Transcriptome analysis is essential to understand the mechanisms regulating key biological processes and functions. The first step usually consists of identifying candidate genes; to find out which pathways are affected by those genes, however, functional analysis (FA) is mandatory. The most frequently used strategies for this purpose are Gene Set and Singular Enrichment Analysis (GSEA and SEA) over Gene Ontology. Several statistical methods have been developed and compared in terms of computational efficiency and/or statistical appropriateness. However, whether their results are similar or complementary, the sensitivity to parameter settings, or possible bias in the analyzed terms has not been addressed so far. Here, two GSEA and four SEA methods and their parameter combinations were evaluated in six datasets by comparing two breast cancer subtypes with well-known differences in genetic background and patient outcomes. We show that GSEA and SEA lead to different results depending on the chosen statistic, model and/or parameters. Both approaches provide complementary results from a biological perspective. Hence, an Integrative Functional Analysis (IFA) tool is proposed to improve information retrieval in FA. It provides a common gene expression analytic framework that grants a comprehensive and coherent analysis. Only a minimal user parameter setting is required, since the best SEA/GSEA alternatives are integrated. IFA utility was demonstrated by evaluating four prostate cancer and the TCGA breast cancer microarray datasets, which showed its biological generalization capabilities. Copyright © 2016 Elsevier Ltd. All rights reserved.
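The SEA side of such analyses usually reduces to an over-representation test; a minimal hypergeometric sketch with illustrative counts follows.

```python
# Minimal sketch of the SEA-style enrichment test underlying such tools: a
# one-sided hypergeometric test for over-representation of a GO term among the
# candidate genes. All counts are illustrative.
from scipy.stats import hypergeom

N = 20000   # genes in the background
K = 150     # background genes annotated to the GO term
n = 300     # candidate (differentially expressed) genes
k = 12      # candidates annotated to the term

# P(X >= k) under sampling without replacement
p_value = hypergeom.sf(k - 1, N, K, n)
print(p_value)
```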
Ho Hoang, Khai-Long; Mombaur, Katja
2015-10-15
Dynamic modeling of the human body is an important tool to investigate the fundamentals of the biomechanics of human movement. To model the human body in terms of a multi-body system, it is necessary to know the anthropometric parameters of the body segments. For young healthy subjects, several data sets exist that are widely used in the research community, e.g. the tables provided by de Leva. None such comprehensive anthropometric parameter sets exist for elderly people. It is, however, well known that body proportions change significantly during aging, e.g. due to degenerative effects in the spine, such that parameters for young people cannot be used for realistically simulating the dynamics of elderly people. In this study, regression equations are derived from the inertial parameters, center of mass positions, and body segment lengths provided by de Leva to be adjustable to the changes in proportion of the body parts of male and female humans due to aging. Additional adjustments are made to the reference points of the parameters for the upper body segments as they are chosen in a more practicable way in the context of creating a multi-body model in a chain structure with the pelvis representing the most proximal segment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bravi, Riccardo; Del Tongo, Claudia; Cohen, Erez James; Dalle Mura, Gabriele; Tognetti, Alessandro; Minciacchi, Diego
2014-06-01
The ability to perform isochronous movements while listening to a rhythmic auditory stimulus requires a flexible process that integrates timing information with movement. Here, we explored how non-temporal and temporal characteristics of an auditory stimulus (presence, interval occupancy, and tempo) affect motor performance. These characteristics were chosen on the basis of their ability to modulate the precision and accuracy of synchronized movements. Subjects participated in sessions in which they performed sets of repeated isochronous wrist flexion-extensions under various conditions, chosen on the basis of the defined characteristics. Kinematic parameters were evaluated during each session, and temporal parameters were analyzed. In order to study the effects of the auditory stimulus, we minimized all other sensory information that could interfere with its perception or affect the performance of repeated isochronous movements. The present study shows that the distinct characteristics of an auditory stimulus significantly influence isochronous movements by altering their duration. The results provide evidence for an adaptable control of timing in the audio-motor coupling for isochronous movements. This flexibility would make plausible the use of different encoding strategies to adapt audio-motor coupling to specific tasks.
Enabling Computational Nanotechnology through JavaGenes in a Cycle Scavenging Environment
NASA Technical Reports Server (NTRS)
Globus, Al; Menon, Madhu; Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
A genetic algorithm procedure is developed and implemented for fitting parameters of many-body inter-atomic force field functions, for simulating nanotechnology atomistic applications, using portable Java on cycle-scavenged heterogeneous workstations. Given a physics-based analytic functional form for the force field, correlated parameters in a multi-dimensional environment are typically chosen to fit properties given either by experiments and/or by higher-accuracy quantum mechanical simulations. The implementation automates this tedious procedure using an evolutionary computing algorithm operating on hundreds of cycle-scavenged computers. As a proof of concept, we demonstrate the procedure for evaluating the Stillinger-Weber (S-W) potential by (a) reproducing the published parameters for Si using S-W energies in the fitness function, and (b) evolving a "new" set of parameters using semi-empirical tight-binding energies in the fitness function. The "new" parameters are significantly better suited for Si cluster energies and forces than even the published S-W potential.
TADtool: visual parameter identification for TAD-calling algorithms.
Kruse, Kai; Hug, Clemens B; Hernández-Rodríguez, Benjamín; Vaquerizas, Juan M
2016-10-15
Eukaryotic genomes are hierarchically organized into topologically associating domains (TADs). The computational identification of these domains and their associated properties critically depends on the choice of suitable parameters of TAD-calling algorithms. To reduce the element of trial-and-error in parameter selection, we have developed TADtool: an interactive plot to find robust TAD-calling parameters with immediate visual feedback. TADtool allows the direct export of TADs called with a chosen set of parameters for two of the most common TAD-calling algorithms: directionality and insulation index. It can be used as an intuitive, standalone application or as a Python package for maximum flexibility. TADtool is available as a Python package from GitHub (https://github.com/vaquerizaslab/tadtool) or can be installed directly via PyPI, the Python package index (tadtool). Contact: kai.kruse@mpi-muenster.mpg.de, jmv@mpi-muenster.mpg.de. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper addresses finding optimal PID controller parameters using particle swarm optimization (PSO), a genetic algorithm (GA) and a simulated annealing (SA) algorithm. The algorithms are applied through simulation of a chemical process and an electrical system, and the PID controller is tuned. Two different fitness functions, the integral of time-weighted absolute error (ITAE) and time-domain specifications, were chosen and applied in PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems, a coupled tank system and a DC motor. Finally, a comparative study of the algorithms is presented, based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
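A minimal sketch of the ITAE fitness follows; the plant is an illustrative integrator-plus-lag model, not one from the paper.

```python
# Minimal sketch of the ITAE fitness used when tuning PID gains: integrate
# t*|e(t)| over the closed-loop step response. Plant G(s) = 1/(s^2 + s) is an
# illustrative integrator-plus-lag, not a benchmark from the paper.
import numpy as np
from scipy import signal

def itae(kp, ki, kd):
    # open loop C(s)*G(s) with C(s) = kp + ki/s + kd*s:
    # numerator kd*s^2 + kp*s + ki, denominator s^3 + s^2
    ol = signal.TransferFunction([kd, kp, ki], [1.0, 1.0, 0.0, 0.0])
    cl = signal.TransferFunction(ol.num, np.polyadd(ol.den, ol.num))
    t, y = signal.step(cl, T=np.linspace(0.0, 20.0, 2000))
    e = 1.0 - y                          # error against a unit step reference
    return np.trapz(t * np.abs(e), t)

print(itae(2.0, 1.0, 0.5))               # lower is better for the optimizer
```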
Chaves, J; Barroso, J M; Bultinck, P; Carbó-Dorca, R
2006-01-01
This study presents an alternative to the Electronegativity Equalization Method (EEM), in which the usual Coulomb kernel is transformed into a smooth function. The new framework, like the classical EEM, permits fast calculation of atomic charges in a given molecule at small computational cost. The original EEM procedure requires prior calibration of the atomic hardnesses and electronegativities involved, using a chosen set of molecules. In the new EEM algorithm only half the number of parameters needs to be calibrated, since a relationship between electronegativities and hardnesses has been found.
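The classical EEM reduces to a linear system; a hedged sketch follows, in which the smoothed-kernel variant would simply replace the 1/R entries. All parameter values are made up for illustration.

```python
# Hedged sketch of the classical EEM linear system: for each atom,
# chi_i + 2*eta_i*q_i + sum_{j!=i} q_j/R_ij = chi_eq, plus charge conservation.
# The smooth-kernel variant would replace the 1/R_ij entries. Values made up.
import numpy as np

def eem_charges(chi, eta, R, total_charge=0.0):
    n = len(chi)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = 1.0 / np.where(R > 0, R, np.inf)   # Coulomb kernel, zero diagonal
    M[np.arange(n), np.arange(n)] = 2.0 * np.asarray(eta)
    M[:n, n] = -1.0                                # column for -chi_eq
    M[n, :n] = 1.0                                 # sum of charges = total_charge
    rhs = np.append(-np.asarray(chi), total_charge)
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]                         # charges, equalized chi

# toy 3-atom "molecule" with made-up hardnesses/electronegativities/distances
R = np.array([[0.0, 1.0, 1.8], [1.0, 0.0, 1.0], [1.8, 1.0, 0.0]])
q, chi_eq = eem_charges([2.5, 3.5, 2.5], [5.0, 7.0, 5.0], R)
print(q, chi_eq)
```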
Delayed neutron spectral data for Hansen-Roach energy group structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, J.M.; Spriggs, G.D.
A detailed knowledge of delayed neutron spectra is important in reactor physics. It not only allows for an accurate estimate of the effective delayed neutron fraction β_eff but also is essential to calculating important reactor kinetic parameters, such as effective group abundances and the ratio of β_eff to the prompt neutron generation time. Numerous measurements of delayed neutron spectra for various delayed neutron precursors have been performed and reported in the literature. However, for application in reactor physics calculations, these spectra are usually lumped into one of the traditional six groups of delayed neutrons in accordance with their half-lives. Subsequently, these six-group spectra are binned into energy intervals corresponding to the energy intervals of a chosen nuclear cross-section set. In this work, the authors present a set of delayed neutron spectra that were formulated specifically to match Keepin's six-group parameters and the 16-energy-group Hansen-Roach cross sections.
PID Tuning Using Extremum Seeking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Killingsworth, N; Krstic, M
2005-11-15
Although proportional-integral-derivative (PID) controllers are widely used in the process industry, their effectiveness is often limited due to poor tuning. Manual tuning of PID controllers, which requires optimization of three parameters, is a time-consuming task. To remedy this difficulty, much effort has been invested in developing systematic tuning methods. Many of these methods rely on knowledge of the plant model or require special experiments to identify a suitable plant model. Reviews of these methods are given in [1] and the survey paper [2]. However, in many situations a plant model is not known, and it is not desirable to open the process loop for system identification. Thus a method for tuning PID parameters within a closed-loop setting is advantageous. In relay feedback tuning [3]-[5], the feedback controller is temporarily replaced by a relay. Relay feedback causes most systems to oscillate, thus determining one point on the Nyquist diagram. Based on the location of this point, PID parameters can be chosen to give the closed-loop system a desired phase and gain margin. An alternative tuning method, which does not require either a modification of the system or a system model, is unfalsified control [6], [7]. This method uses input-output data to determine whether a set of PID parameters meets performance specifications. An adaptive algorithm is used to update the PID controller based on whether or not the controller falsifies a given criterion. The method requires a finite set of candidate PID controllers that must be initially specified [6]. Unfalsified control for an infinite set of PID controllers has been developed in [7]; this approach requires a carefully chosen input signal [8]. Yet another model-free PID tuning method that does not require opening of the loop is iterative feedback tuning (IFT). IFT iteratively optimizes the controller parameters with respect to a cost function derived from the output signal of the closed-loop system [9]. This method is based on the performance of the closed-loop system during a step response experiment [10], [11]. In this article we present a method for optimizing the step response of a closed-loop system consisting of a PID controller and an unknown plant with a discrete version of extremum seeking (ES). Specifically, ES is used to minimize a cost function similar to that used in [10], [11], which quantifies the performance of the PID controller. ES, a non-model-based method, iteratively modifies the arguments (in this application the PID parameters) of a cost function so that the output of the cost function reaches a local minimum or local maximum. In the next section we apply ES to PID controller tuning. We illustrate this technique through simulations comparing the effectiveness of ES to other PID tuning methods. Next, we address the importance of the choice of cost function and consider the effect of controller saturation. Furthermore, we discuss the choice of ES tuning parameters. Finally, we offer some conclusions.
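A simplified sketch of discrete extremum seeking on a static cost follows; a quadratic stands in for the PID step-response cost, and the filtering is cruder than in the article.

```python
# Simplified sketch of discrete extremum seeking on a static cost J(theta):
# perturb with a sinusoid, demodulate the measured cost, and descend along the
# resulting gradient estimate. A quadratic stands in for the step-response cost.
import numpy as np

def extremum_seeking(J, theta0, steps=400, a=0.2, omega=1.0, gamma=0.4):
    theta = float(theta0)
    j_prev = J(theta)
    for k in range(steps):
        probe = a * np.cos(omega * k)
        j = J(theta + probe)
        xi = j - j_prev                           # crude high-pass filtering of J
        j_prev = j
        theta -= gamma * xi * np.cos(omega * k)   # demodulate and descend
    return theta

J = lambda th: (th - 3.0) ** 2 + 1.0      # toy cost with minimum at theta = 3
print(extremum_seeking(J, theta0=0.0))    # should approach ~3
```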
NASA Astrophysics Data System (ADS)
Nesterova, Natalia; Semenova, Olga; Lebedeva, Luidmila
2015-04-01
Large territories of Siberia and the Russian Far East are subject to frequent forest fires. Often no information is available about fire impact except its timing, areal extent, and qualitative characteristics of fire severity. Observed changes of hydrological response in burnt watersheds can be considered as indirect evidence of soil and vegetation transformation due to fire. In our study we used MODIS fire products to detect the spatial distribution of fires in the Transbaikal and Far East regions of Russia over the 2000-2012 period. Small and middle-sized watersheds (with areas up to 10000 km2) affected by extensive fires (burn area not less than 20%) were chosen. We analyzed the available hydrological data (measured discharges at watershed outlets) for the chosen basins. In several cases an apparent hydrological response to fire was detected. To investigate the main factors causing the change of hydrological regime after fire, several scenarios of soil and vegetation transformation were developed for each watershed under consideration. Corresponding sets of hydrological model parameters describing those transformations were elaborated based on data analysis and on post-fire landscape changes as derived from a literature review. We considered different factors such as removal of the organic layer, albedo changes, intensification of soil thaw (in the presence of permafrost and seasonal soil freezing), reduction of infiltration rate and evapotranspiration, increase of the upper subsurface flow fraction in summer flood events following the fire, and others. We applied the Hydrograph model (Russia) to conduct simulation experiments aiming to reveal which landscape change scenarios were more plausible. The advantages of the chosen hydrological model for this study are 1) that it takes into consideration thermal processes in soils, which in the presence of permafrost and seasonal soil freezing can play a leading role in runoff formation, and 2) that observable vegetation and soil properties are used as its parameters, allowing minimal resort to calibration. The model can use a dynamic set of parameters representing prescribed abrupt and/or gradual changes of landscape characteristics. Interestingly, based on the modelling results it can be concluded that, depending on the dominant landscape, different aspects of soil and vegetation cover change may influence runoff formation in contrasting ways. The results of the study will be reported.
Alfaro, Sadek Crisóstomo Absi; Cayo, Eber Huanca
2012-01-01
The present study shows the relationship between welding quality and optical-acoustic emissions from electric arcs during welding runs in the GMAW-S process. Bead-on-plate welding tests were carried out with pre-set parameters chosen from manufacturing standards. During the welding runs, interferences were induced on the welding path using paint, grease or gas faults. In each welding run, arc voltage, welding current, infrared and acoustic emission values were acquired, and parameters such as arc power, acoustic peak rate and infrared radiation rate were computed. Data fusion algorithms were developed by assessing known welding quality parameters from arc emissions. These algorithms showed better responses when based on more than one sensor. Finally, it was concluded that there is a close relation between arc emissions and welding quality, and that it can be measured through arc emission sensing and data fusion algorithms.
SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Wang, J
2016-06-15
Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters, each Pareto set containing 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning models using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir
2009-11-01
Radiopharmaceutical applications in nuclear medicine require a detailed dosimetry estimate of the radiation energy delivered to the human tissues. Over the past years, several publications addressed the problem of internal dose estimation in volumes of several sizes considering photon and electron sources. Most of them used Monte Carlo radiation transport codes. Despite the widespread use of these codes, due to the variety of resources and potentials they offer to carry out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain an object of study. Accurate dose estimation depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases, not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters in the simulation.
NASA Astrophysics Data System (ADS)
Lupu, R.; Marley, M. S.; Lewis, N. K.
2015-12-01
We have assembled an atmospheric retrieval package for the reflected-light spectra of gas and ice giants in order to inform the design and estimate the scientific return of future space-based coronagraph instruments. Such instruments will have a working bandpass of ~0.4-1 μm and a resolving power R~70, and will enable the characterization of tens of exoplanets in the Solar neighborhood. The targets will be chosen from known RV giants, with estimated effective temperatures of ~100-600 K and masses between 0.3 and 20 MJupiter. In this regime, both methane and clouds have the largest effects on the observed spectra. Our retrieval code is the first to include cloud properties in the core set of parameters, along with methane abundance and surface gravity. We consider three possible cloud structure scenarios, with 0, 1 or 2 cloud layers, respectively. The best-fit parameters for a given model are determined using a Markov chain Monte Carlo ensemble sampler, and the most favored cloud structure is chosen by calculating the Bayes factors between different models. We present the performance of our retrieval technique applied to a set of representative model spectra, covering a SNR range from 5 to 20 and including possible noise correlations over a 25 or 100 nanometer scale. Further, we apply the technique to more realistic cases, namely simulated observations of Jupiter, Saturn, Uranus, and the gas giant HD99492c. In each case, we determine the confidence levels associated with the methane and cloud detections, as a function of SNR and noise properties.
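The retrieval core can be sketched with the emcee ensemble sampler; the forward model below is a placeholder, not the authors' albedo code, and the parameter names are illustrative.

```python
# Hedged sketch of the retrieval core: an MCMC ensemble sampler (emcee) over a
# toy 3-parameter state with a Gaussian likelihood against an observed spectrum.
import numpy as np
import emcee

def forward_model(theta, wavelengths):
    log_ch4, log_g, p_cloud = theta       # toy spectrum shape, illustrative only
    return np.exp(-log_ch4 * wavelengths) * p_cloud / (1.0 + log_g)

wl = np.linspace(0.4, 1.0, 60)
obs = forward_model((1.0, 2.0, 0.5), wl) + np.random.default_rng(0).normal(0, 0.01, wl.size)

def log_prob(theta):
    if not (0 < theta[0] < 5 and 0 < theta[1] < 5 and 0 < theta[2] < 1):
        return -np.inf                    # flat priors inside the box
    resid = obs - forward_model(theta, wl)
    return -0.5 * np.sum((resid / 0.01) ** 2)

ndim, nwalkers = 3, 24
p0 = np.array([1.0, 2.0, 0.5]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)
print(samples.mean(axis=0))               # posterior means of the toy parameters
```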
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
Analyzing chromatographic data using multilevel modeling.
Wiczling, Paweł
2018-06-01
It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract: Relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P (CI, credible interval; PSA, polar surface area).
Indications of a late-time interaction in the dark sector.
Salvatelli, Valentina; Said, Najla; Bruni, Marco; Melchiorri, Alessandro; Wands, David
2014-10-31
We show that a general late-time interaction between cold dark matter and vacuum energy is favored by current cosmological data sets. We characterize the strength of the coupling by a dimensionless parameter q_V that is free to take different values in four redshift bins from the primordial epoch up to today. This interacting scenario is in agreement with measurements of cosmic microwave background temperature anisotropies from the Planck satellite, supernovae Ia from Union 2.1 and redshift space distortions from a number of surveys, as well as with combinations of these different data sets. Our analysis of the 4-bin interaction shows that a nonzero interaction is likely at late times. We then focus on the case q_V ≠ 0 in a single low-redshift bin, obtaining a nested one-parameter extension of the standard ΛCDM model. We study the Bayesian evidence, with respect to ΛCDM, of this late-time interaction model, finding moderate evidence for an interaction starting at z = 0.9, dependent upon the prior range chosen for the interaction strength parameter q_V. For this case the null interaction (q_V = 0, i.e., ΛCDM) is excluded at 99% C.L.
Neural network approach for the calculation of potential coefficients in quantum mechanics
NASA Astrophysics Data System (ADS)
Ossandón, Sebastián; Reyes, Camilo; Cumsille, Patricio; Reyes, Carlos M.
2017-05-01
A numerical method based on artificial neural networks is used to solve the inverse Schrödinger equation for a multi-parameter class of potentials. First, the finite element method was used to repeatedly solve the direct problem for different parametrizations of the chosen potential function. Then, using the obtained eigenvalues as a training set for a direct radial basis function neural network, a map to new eigenvalues was obtained. This relationship was later inverted and refined by training an inverse radial basis function neural network, allowing the calculation of the unknown parameters and thereby estimating the potential function. Three numerical examples are presented in order to prove the effectiveness of the method. The results show that the proposed method has the advantage of using fewer computational resources without a significant loss of accuracy.
A Deep Stochastic Model for Detecting Community in Complex Networks
NASA Astrophysics Data System (ADS)
Fu, Jingcheng; Wu, Jianliang
2017-01-01
Discovering community structures is an important step toward understanding the structure and dynamics of real-world networks in social science, biology and technology. In this paper, we develop a deep stochastic model based on non-negative matrix factorization to identify communities, in which there are two sets of parameters. One is the community membership matrix, in which the elements of a row are the probabilities that the given node belongs to each of the given number of communities; the other is the community-community connection matrix, in which the element in the i-th row and j-th column is the probability of there being an edge between a randomly chosen node from the i-th community and a randomly chosen node from the j-th community. The parameters can be evaluated by an efficient updating rule whose convergence is guaranteed. The community-community connection matrix in our model is more precise than that of traditional non-negative matrix factorization methods. Furthermore, symmetric non-negative matrix factorization is a special case of our model. Finally, experiments on both synthetic and real-world network data demonstrate that our algorithm is highly effective in detecting communities.
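The symmetric NMF special case mentioned above admits a compact sketch with damped multiplicative updates (assuming the common beta = 1/2 damping); the toy graph is illustrative.

```python
# Minimal sketch of the symmetric NMF special case: factor the adjacency matrix
# as A ~ H H^T with damped multiplicative updates; each row of H is then read
# as a node's community membership strengths.
import numpy as np

def symmetric_nmf(A, k, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        numer = A @ H
        denom = H @ (H.T @ H) + 1e-12
        H *= 0.5 * (1.0 + numer / denom)   # damped multiplicative update
    return H

# two obvious communities: nodes {0,1,2} and {3,4,5}
A = np.array([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,1,0,0],
              [0,0,1,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]], dtype=float)
H = symmetric_nmf(A, 2)
print(H.argmax(axis=1))   # community labels, e.g. [0 0 0 1 1 1]
```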
McKisson, John E.; Barbosa, Fernando
2015-09-01
A method for designing a completely passive bias compensation circuit to stabilize the gain of multi-pixel avalanche photodetector devices. The method includes determining the circuit design and component values to achieve a desired precision of gain stability. The method can be used with any temperature-sensitive device having a nominally linear temperature coefficient of a voltage-dependent parameter that must be stabilized. The circuit design includes a negative temperature coefficient resistor in thermal contact with the photomultiplier device to provide a varying resistance, and a second, fixed resistor to form a voltage divider that can be chosen to set the desired slope and intercept of the characteristic for a specific voltage source value. The addition of a third resistor to the divider network provides a solution set for a set of SiPM devices that requires only a single stabilized voltage source value.
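An illustrative sketch of the divider idea follows; the beta-model NTC and all component values are assumptions, and in practice they would be chosen to match the device's measured bias coefficient.

```python
# Illustrative sketch of the passive compensation idea: an NTC thermistor plus
# a fixed resistor form a divider whose output slope vs. temperature can be
# matched to the device's nominally linear bias coefficient. All component
# values and the beta-model NTC are assumptions, not from the patent.
import numpy as np

def ntc_resistance(T_c, r25=10e3, beta=3950.0):
    T = T_c + 273.15
    return r25 * np.exp(beta * (1.0 / T - 1.0 / 298.15))

def bias_voltage(T_c, v_source=75.0, r_fixed=47e3):
    r_ntc = ntc_resistance(T_c)
    return v_source * r_fixed / (r_fixed + r_ntc)   # divider output rises with T

temps = np.array([10.0, 20.0, 30.0, 40.0])
v = bias_voltage(temps)
slope = np.polyfit(temps, v, 1)[0]
print(v, slope)   # compare the slope (V/degC) to the device coefficient when
                  # choosing the resistor values and source voltage
```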
Kinetic analysis of single molecule FRET transitions without trajectories
NASA Astrophysics Data System (ADS)
Schrangl, Lukas; Göhring, Janett; Schütz, Gerhard J.
2018-03-01
Single-molecule Förster resonance energy transfer (smFRET) is a popular tool to study biological systems that undergo topological transitions on the nanometer scale. smFRET experiments typically require recording long smFRET trajectories and subsequent statistical analysis to extract parameters such as the states' lifetimes. Alternatively, analysis of probability distributions exploits the shapes of smFRET distributions at well-chosen exposure times and hence works without the acquisition of time traces. Here, we describe a variant that utilizes statistical tests to compare experimental datasets with Monte Carlo simulations. For a given model, parameters are varied to cover the full realistic parameter space. As output, the method yields p-values which quantify the likelihood of each parameter setting being consistent with the experimental data. The method provides suitable results even if the actual lifetimes differ by an order of magnitude. We also demonstrate the robustness of the method to inaccurately determined input parameters. As proof of concept, the new method was applied to the determination of transition rate constants for Holliday junctions.
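A heavily simplified sketch of the trajectory-free comparison follows, with a crude two-state occupancy model and a KS test standing in for the statistical tests used; all rates and efficiencies are illustrative.

```python
# Heavily simplified sketch of the trajectory-free idea: simulate FRET-efficiency
# histograms for candidate rate constants and score each parameter setting by
# the p-value of a two-sample KS test against the experimental histogram.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def simulate_fret(k12, k21, n=3000):
    p1 = k21 / (k12 + k21)                          # equilibrium occupancy of state 1
    occupancy = rng.binomial(50, p1, n) / 50.0      # crude time-averaged occupancy
    e1, e2 = 0.8, 0.3                               # state FRET efficiencies
    return occupancy * e1 + (1.0 - occupancy) * e2 + rng.normal(0, 0.03, n)

experimental = simulate_fret(120.0, 80.0)           # stand-in for measured data
for k12 in (60.0, 120.0, 240.0):
    p = ks_2samp(experimental, simulate_fret(k12, 80.0)).pvalue
    print(k12, p)   # highest p-value marks the most consistent rate constant
```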
Imposition of physical parameters in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Mai-Duy, N.; Phan-Thien, N.; Tran-Cong, T.
2017-12-01
In mesoscale simulations by dissipative particle dynamics (DPD), the motion of a fluid is modelled by a set of particles interacting in a pairwise manner, which has been shown to be governed by the Navier-Stokes equation; its physical properties, such as viscosity, Schmidt number, isothermal compressibility, and relaxation and inertia time scales, in fact its whole rheology, result from the choice of the DPD model parameters. In this work, we explore the response of a DPD fluid with respect to its parameter space, where the model input parameters can be chosen in advance so that (i) the ratio between the relaxation and inertia time scales is fixed; (ii) the isothermal compressibility of water at room temperature is enforced; and (iii) the viscosity and Schmidt number can be specified as inputs. These impositions are possible thanks to some extra degrees of freedom in the weighting functions for the conservative and dissipative forces. Numerical experiments show an improvement in solution quality over conventional DPD parameters/weighting functions, particularly for the number density distribution and computed stresses.
Implementation of RS-485 Communication between PLC and PC of Distributed Control System Based on VB
NASA Astrophysics Data System (ADS)
Lian Zhang, Chuan; Da Huang, Zhi; Qing Zhou, Gui; Chong, Kil To
2015-05-01
This paper focuses on achieving RS-485 communication between programmable logic controllers (PLCs) and a PC, based on Visual Basic 6.0 (VB6.0), on an experimental automatic production line. Mitsubishi FX2N PLCs and a PC are chosen as the slave stations and the master station, respectively. Monitoring software is developed in VB6.0 for data input/output, flow control and online parameter setting. All functions are fulfilled with robust performance, and it is concluded that one PC can monitor several PLCs using RS-485 communication.
Vegetation and other parameters in the Brevard County bar-built estuaries
NASA Technical Reports Server (NTRS)
Down, C.; Withrow, R. (Editor)
1978-01-01
It is shown that low-altitude aerial photography, using specific interpretive techniques, can effectively delineate sea-grass beds, oyster beds, and other underwater features. Various techniques were applied to several sets of aerial imagery, which were tested using several data analysis methods, ground truth, and biological testing. Approximately 45,000 acres of grass beds, 2,500 acres of oyster beds, and 4,200 acres of dredged canals were mapped. These data represent selected sites only. The areas chosen have the highest-quality water in Brevard County and are among the most widely recognized biologically productive waters in Florida.
Electronic properties of long DNA nanowires in dry and wet conditions
NASA Astrophysics Data System (ADS)
Mousavi, Hamze; Khodadadi, Jabbar; Grabowski, Marek
2015-11-01
The electronic behavior of long disordered DNA nanowires in both dry and wet conditions is investigated through the band structure and density of states of a tight-binding Hamiltonian model for the π-electrons of the backbone, using a Green's function approach. For a chosen set of parameters in the dry case, semiconducting behavior is reproduced. It is also shown that for sufficiently long strands, the order of the base pairs has no noticeable effect on the energy band gap. Moreover, this semiconducting duplex shows metallic tendencies when interacting with an environment of polar molecules.
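The Green's-function route to the density of states, DOS(E) = -(1/pi) Im Tr G(E) with G(E) = [(E + i*eta)I - H]^(-1), can be sketched on a toy chain standing in for the backbone model.

```python
# Minimal sketch of the density of states via the Green's function,
# DOS(E) = -(1/pi) * Im Tr G(E), for a toy 1-D tight-binding chain standing in
# for the DNA backbone pi-electron model. Parameters are illustrative.
import numpy as np

def dos(H, energies, eta=0.02):
    n = H.shape[0]
    out = []
    for E in energies:
        G = np.linalg.inv((E + 1j * eta) * np.eye(n) - H)
        out.append(-np.trace(G).imag / np.pi)
    return np.array(out)

# chain with site energy eps and nearest-neighbour hopping t
n, eps, t = 100, 0.0, 1.0
H = eps * np.eye(n) + t * (np.eye(n, k=1) + np.eye(n, k=-1))
E = np.linspace(-3.0, 3.0, 300)
print(dos(H, E).max())   # band spans roughly [-2t, 2t]; gaps appear for duplexes
```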
Spatial sensitivity of inorganic carbon to model setup: North Sea and Baltic Sea with ECOSMO
NASA Astrophysics Data System (ADS)
Castano Primo, Rocio; Schrum, Corinna; Daewel, Ute
2015-04-01
In ocean biogeochemical models it is critical to capture the key processes adequately, so that the model not only reproduces the observations but also represents those processes correctly. One key issue is the choice of parameters, which in most cases are estimates with large uncertainties. This can be the product of an actual lack of detailed knowledge of the process, or of the manner in which the processes are implemented, more or less complex. In addition, the model sensitivity is not necessarily homogeneous across the spatial domain modelled, which adds another layer of complexity to biogeochemical modelling. In the particular case of the inorganic carbon cycle, there are several sets of carbonate constants that can be chosen, and the calculated air-sea CO2 flux depends largely on the parametrization chosen. In addition, different parametrizations of the underlying processes that impact the carbon cycle beyond carbonate dissociation and fluxes, such as phytoplankton growth rates or remineralization rates, can give significantly different results. Despite their geographical proximity, the North and Baltic Seas exhibit very different dynamics. The North Sea receives important inflows of Atlantic waters, while the Baltic Sea is an almost enclosed system, with very little exchange with the North Sea. Wind, tides, and freshwater supply act very differently, but dominantly structure the ecosystem dynamics on spatial and temporal scales. The biological communities also differ: cyanobacteria, which are important due to their ability to fix atmospheric nitrogen, are only present in the Baltic Sea. These differentiating features have a strong impact on the biogeochemical cycles and ultimately shape the variations in the carbonate chemistry. Here the ECOSMO model was employed on the North Sea and Baltic Sea, set up so that both are modelled at the same time instead of being run separately. ECOSMO is a 3-D coupled physical-biogeochemical model, which resolves the cycles of nitrogen, phosphorus and silicate. It includes 3 functional groups of phytoplankton and 2 groups of zooplankton. In addition, an inorganic carbon module has been incorporated and coupled. Alkalinity and DIC are chosen as prognostic variables, from which pH, pCO2 and air-sea CO2 flux are calculated. The model is run with different sets of carbonate dissociation parameters, air-sea flux parametrizations, phytoplankton growth rates and remineralization rates. The sensitivity of the inorganic carbon variables is assessed, both in the whole model domain and in the North and Baltic Seas independently. We search for the critical parameters with the largest impact, assess whether that impact is spatially dependent, and examine the effect on the validation of the carbonate module.
Modeling maintenance-strategies with rainbow nets
NASA Astrophysics Data System (ADS)
Johnson, Allen M., Jr.; Schoenfelder, Michael A.; Lebold, David
The Rainbow net (RN) modeling technique offers a promising alternative to traditional reliability modeling techniques. RNs are evaluated through discrete-event simulation. Using specialized tokens to represent systems and faults, an RN models the fault-handling behavior of an inventory of systems produced over time. In addition, a portion of the RN represents system repair and the vendor's spare-part production. Various dependability parameters are measured and used to calculate the impact of four variations of maintenance strategies. Input variables are chosen to demonstrate the technique; the number of inputs allowed to vary is intentionally constrained to limit the volume of data presented and to avoid overloading the reader with complexity. If only availability data were reviewed, one might conclude that the strategies are about the same and therefore choose the strategy that is cheaper from the vendor's perspective. The richer set of metrics provided by the RN simulation gives greater insight into the problem, which leads to better decisions. By using RNs, the impact of several different variables is integrated.
Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis
NASA Astrophysics Data System (ADS)
Ledoux, Yann; Sergent, Alain; Arrieux, Robert
2007-05-01
Finite element simulation is a very useful tool in the deep-drawing industry. It is used in particular for the development and validation of new stamping tools, and it reduces the cost and time of tooling design and set-up. However, one of the main difficulties in achieving good agreement between the simulation and the real process lies in the definition of the numerical conditions (mesh, punch travel speed, boundary conditions, ...) and of the parameters which model the material behavior. Indeed, in the press shop, when the sheet batch changes, a variation of the formed-part geometry is often observed, owing to the variability of the material properties between batches. This variability is probably one of the main sources of process deviation once the process is set up. It is therefore important to study the influence of material data variation on the geometry of a classical stamped part. The chosen geometry is an omega-shaped part, because of its simplicity and because it is representative of automotive parts (car-body reinforcements); moreover, it shows important springback deviations. An isotropic behaviour law is assumed. The impact of statistical deviations of the three law coefficients characterizing the material, and of the friction coefficient, around their nominal values is tested: a Gaussian distribution is assumed and the resulting geometry variation is studied by FE simulation. Another approach is also considered, consisting of representing the process variability by a mathematical model; given the variability of the input parameters, an analytical model is defined that yields the variability of the part geometry around the nominal shape. These two approaches allow the process capability to be predicted as a function of the material parameter variability.
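A hedged sketch of the first (Monte Carlo) approach: Gaussian deviations of three hardening-law coefficients and the friction coefficient are sampled around nominal values and propagated through a stand-in geometry model; the function and all numbers below are hypothetical placeholders for the FE simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Nominal values and standard deviations are placeholders, not the study's data:
# hardening law sigma = K * (eps0 + eps)^n, plus Coulomb friction mu.
nominal = {"K": 500.0, "n": 0.20, "eps0": 0.01, "mu": 0.10}
sigma   = {"K": 15.0,  "n": 0.01, "eps0": 0.002, "mu": 0.01}

def springback_angle(K, n, eps0, mu):
    """Hypothetical surrogate for the FE simulation: maps material and
    friction parameters to a part-geometry measure (e.g., a wall angle)."""
    return 2.0 + 0.004 * K - 8.0 * n + 5.0 * mu + 10.0 * eps0

samples = {k: rng.normal(nominal[k], sigma[k], size=1000) for k in nominal}
angles = springback_angle(**samples)
print(f"mean = {angles.mean():.3f}, std = {angles.std():.3f}")  # capability estimate
```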
NASA Astrophysics Data System (ADS)
Tesinova, P.; Steklova, P.; Duchacova, T.
2017-10-01
Materials for outdoor activities are produced in various combinations, and lamination helps combine two or more components to achieve high comfort properties and lighten the structure. Producers can choose the exact material suitable for constructing a part, or a set, of so-called layered clothing for the expected activity. Decreasing the weight of materials while preserving high water-vapour permeability, wind resistance, hydrostatic resistance and other comfort and usage properties is a major current challenge. This paper focuses on thermal properties as an important parameter for comfort during outdoor activities. Softshell materials were chosen for testing and for computation of clo values. Comparing the results with the standardised clo table helps classify the thermal insulation of the set of fabrics when defining the proper clothing category.
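Since 1 clo corresponds to a thermal resistance of 0.155 m²·K/W, the clo computation reduces to a unit conversion; a minimal sketch with illustrative resistance values:

```python
CLO_PER_M2KW = 1.0 / 0.155  # 1 clo = 0.155 m^2*K/W

def clo_value(thermal_resistance_m2kw):
    """Convert a measured thermal resistance [m^2*K/W] to clo."""
    return thermal_resistance_m2kw * CLO_PER_M2KW

# Illustrative softshell layers: resistances add in series before conversion.
layers = [0.020, 0.035]  # m^2*K/W, placeholder measurements
print(f"{clo_value(sum(layers)):.2f} clo")
```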
NASA Astrophysics Data System (ADS)
Rybizki, Jan; Just, Andreas; Rix, Hans-Walter
2017-09-01
Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernovae of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/−1.6)% of the IMF explodes as core-collapse supernovae (CC-SNe), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar nucleosynthesis with far more complex abundance data sets and to refine the physical processes governing the chemical evolution of stellar systems.
3D tomographic reconstruction using geometrical models
NASA Astrophysics Data System (ADS)
Battle, Xavier L.; Cunningham, Gregory S.; Hanson, Kenneth M.
1997-04-01
We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than performing operations (segmentation for example) on a reconstructed volume. The inverse problem is posed in the Bayesian framework. A triangulated surface describes the unknown shape and the reconstruction is computed with a maximum a posteriori (MAP) estimate. The adjoint differentiation technique computes the derivatives needed for the optimization of the model parameters. We demonstrate the usefulness of the approach and emphasize the techniques of designing forward and adjoint codes. We use the system response of the University of Arizona Fast SPECT imager to illustrate this method by reconstructing the shape of a heart phantom.
Carbon Nanotubes as FET Channel: Analog Design Optimization considering CNT Parameter Variability
NASA Astrophysics Data System (ADS)
Samar Ansari, Mohd.; Tripathi, S. K.
2017-08-01
Carbon nanotubes (CNTs), both single-walled and multi-walled, have been employed in a plethora of applications pertinent to semiconductor materials and devices including, but not limited to, biotechnology, material science, nanoelectronics and nano-electromechanical systems (NEMS). The Carbon Nanotube Field Effect Transistor (CNFET) is one such electronic device, which effectively utilizes CNTs to achieve a boost in channel conduction, thereby yielding superior performance over standard MOSFETs. This paper explores the effects of variability in CNT physical parameters, viz. nanotube diameter, pitch, and number of CNTs in the transistor channel, on the performance of a chosen analog circuit. It is further shown that, from the analyses performed, an optimal design of the CNFETs can be derived for optimizing the performance of the analog circuit as per a given specification set.
Urzhumtseva, Ludmila; Lunina, Natalia; Fokine, Andrei; Samama, Jean Pierre; Lunin, Vladimir Y; Urzhumtsev, Alexandre
2004-09-01
The connectivity-based phasing method has been demonstrated to be capable of finding molecular packing and envelopes even for difficult cases of structure determination, as well as of identifying, in favorable cases, secondary-structure elements of protein molecules in the crystal. This method uses a single set of structure factor magnitudes and general topological features of a crystallographic image of the macromolecule under study. This information is expressed through a number of parameters. Most of these parameters are easy to estimate, and the results of phasing are practically independent of these parameters when they are chosen within reasonable limits. By contrast, the correct choice for such parameters as the expected number of connected regions in the unit cell is sometimes ambiguous. To study these dependencies, numerous tests were performed with simulated data, experimental data and mixed data sets, where several reflections missed in the experiment were completed by computed data. This paper demonstrates that the procedure is able to control this choice automatically and helps in difficult cases to identify the correct number of molecules in the asymmetric unit. In addition, the procedure behaves abnormally if the space group is defined incorrectly and therefore may distinguish between the rotation and screw axes even when high-resolution data are not available.
Automatic physical inference with information maximizing neural networks
NASA Astrophysics Data System (ADS)
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
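The Fisher information that IMNNs maximize can be estimated from simulations alone: F ≈ (∂μ/∂θ)ᵀ C⁻¹ (∂μ/∂θ), with the summary-mean derivative taken by finite differences. The sketch below reproduces the paper's first toy problem (inferring the variance of a Gaussian signal) with a hand-crafted summary standing in for the trained network; all sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n_sims=2000, n_data=100):
    """Zero-mean Gaussian data whose variance is the parameter theta."""
    return rng.normal(0.0, np.sqrt(theta), size=(n_sims, n_data))

def summary(d):
    """Hand-crafted scalar summary (the sample variance); an IMNN would
    learn a nonlinear functional playing this role."""
    return d.var(axis=1)

theta0, dtheta = 1.0, 0.05
s_plus  = summary(simulate(theta0 + dtheta))
s_minus = summary(simulate(theta0 - dtheta))
s_fid   = summary(simulate(theta0))

dmu = (s_plus.mean() - s_minus.mean()) / (2 * dtheta)  # finite-difference dmu/dtheta
C = s_fid.var()                                        # summary variance at fiducial
fisher = dmu**2 / C
print(f"Estimated Fisher information: {fisher:.2f}")   # ~ n_data / (2*theta0^2) here
```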
Acceptable Tolerances for Matching Icing Similarity Parameters in Scaling Applications
NASA Technical Reports Server (NTRS)
Anderson, David N.
2003-01-01
This paper reviews past work and presents new data to evaluate how changes in similarity parameters affect ice shapes and how closely scale values of the parameters should match reference values. Experimental ice shapes presented are from tests by various researchers in the NASA Glenn Icing Research Tunnel. The parameters reviewed are the modified inertia parameter (which determines the stagnation collection efficiency), accumulation parameter, freezing fraction, Reynolds number, and Weber number. It was demonstrated that a good match of scale and reference ice shapes could sometimes be achieved even when values of the modified inertia parameter did not match precisely. Consequently, there can be some flexibility in setting scale droplet size, which is the test condition determined from the modified inertia parameter. A recommended guideline is that the modified inertia parameter be chosen so that the scale stagnation collection efficiency is within 10 percent of the reference value. The scale accumulation parameter and freezing fraction should also be within 10 percent of their reference values. The Weber number based on droplet size and water properties appears to be a more important scaling parameter than one based on model size and air properties. Scale values of both the Reynolds and Weber numbers need to be in the range of 60 to 160 percent of the corresponding reference values. The effects of variations in other similarity parameters have yet to be established.
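The recommended tolerances translate directly into an acceptance check; the sketch below encodes the 10% guideline for stagnation collection efficiency, accumulation parameter and freezing fraction, and the 60-160% window for the Reynolds and Weber numbers (dictionary keys and test values are illustrative).

```python
def scaling_match_ok(scale, reference):
    """Check scale test conditions against reference values using the
    tolerances recommended in the paper. Inputs are dicts keyed by
    'beta0' (stagnation collection efficiency), 'Ac' (accumulation
    parameter), 'n0' (freezing fraction), 'Re', and 'We'."""
    within_10pct = all(
        abs(scale[k] / reference[k] - 1.0) <= 0.10
        for k in ("beta0", "Ac", "n0")
    )
    within_60_160 = all(
        0.60 <= scale[k] / reference[k] <= 1.60
        for k in ("Re", "We")
    )
    return within_10pct and within_60_160

ref   = {"beta0": 0.80, "Ac": 1.5, "n0": 0.30, "Re": 1.0e5, "We": 500.0}
scale = {"beta0": 0.75, "Ac": 1.6, "n0": 0.31, "Re": 7.0e4, "We": 650.0}
print(scaling_match_ok(scale, ref))  # True: all parameters inside tolerance
```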
Groebner Basis Solutions to Satellite Trajectory Control by Pole Placement
NASA Astrophysics Data System (ADS)
Kukelova, Z.; Krsek, P.; Smutny, V.; Pajdla, T.
2013-09-01
Satellites play an important role, e.g., in telecommunication, navigation and weather monitoring. Controlling their trajectories is an important problem. In [1], an approach to pole placement for the synthesis of a linear controller was presented. It leads to solving five polynomial equations in nine unknown elements of the state space matrices of a compensator. This is an underconstrained system, and therefore four of the unknown elements need to be considered as free parameters and set to some prior values to obtain a system of five equations in five unknowns. In [1], this system was solved for one chosen set of free parameters with the help of Dixon resultants. In this work, we study and present Groebner basis solutions to this problem of computing a dynamic compensator for the satellite for different combinations of input free parameters. We show that the Groebner basis method for solving systems of polynomial equations leads to very simple solutions for all combinations of free parameters. These solutions require only the Gauss-Jordan elimination of a small matrix and the computation of the roots of a single-variable polynomial. The maximum degree of this polynomial is not greater than six in general, and for most combinations of the input free parameters its degree is even lower. [1] B. Palancz. Application of Dixon resultant to satellite trajectory control by pole placement. Journal of Symbolic Computation, Volume 50, March 2013, Pages 79-99, Elsevier.
Double absorbing boundaries for finite-difference time-domain electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaGrone, John, E-mail: jlagrone@smu.edu; Hagstrom, Thomas, E-mail: thagstrom@smu.edu
We describe the implementation of optimal local radiation boundary condition sequences for second order finite difference approximations to Maxwell's equations and the scalar wave equation using the double absorbing boundary formulation. Numerical experiments are presented which demonstrate that the design accuracy of the boundary conditions is achieved and, for comparable effort, exceeds that of a convolution perfectly matched layer with reasonably chosen parameters. An advantage of the proposed approach is that parameters can be chosen using an accurate a priori error bound.
NASA Astrophysics Data System (ADS)
Zhai, Guoqing; Li, Xiaofan
2015-04-01
The Bergeron-Findeisen process has been simulated over the past decades using parameterization schemes for the depositional growth of ice crystals with temperature-dependent, theoretically predicted parameters. Recently, Westbrook and Heymsfield (2011) calculated these parameters using the laboratory data from Takahashi and Fukuta (1988) and Takahashi et al. (1991) and found significant differences between the two parameter sets. There are three schemes that parameterize the depositional growth of ice crystals: Hsie et al. (1980), Krueger et al. (1995) and Zeng et al. (2008). In this study, we conducted three pairs of sensitivity experiments using these three parameterization schemes and the two parameter sets. A pre-summer torrential rainfall event is chosen as the simulated rainfall case. The analysis of the root-mean-squared difference and correlation coefficient between the simulated and observed surface rain rates shows that the experiment with the Krueger scheme and the Takahashi laboratory-derived parameters produces the best rain-rate simulation. The mean simulated rain rates are higher than the mean observed rain rate. The calculations of 5-day and model-domain mean rain rates reveal that the three schemes with the Takahashi laboratory-derived parameters tend to reduce the mean rain rate. The Krueger scheme together with the Takahashi laboratory-derived parameters generates the mean rain rate closest to the observed mean. The decrease in the mean rain rate caused by the Takahashi laboratory-derived parameters in the experiment with the Krueger scheme is associated with reductions in the mean net condensation and the mean hydrometeor loss. These reductions correspond to the suppressed mean infrared radiative cooling due to the enhanced cloud ice and snow in the upper troposphere.
KAMO: towards automated data processing for microcrystals.
Yamashita, Keitaro; Hirata, Kunio; Yamamoto, Masaki
2018-05-01
In protein microcrystallography, radiation damage often hampers complete and high-resolution data collection from a single crystal, even under cryogenic conditions. One promising solution is to collect small wedges of data (5-10°) separately from multiple crystals. The data from these crystals can then be merged into a complete reflection-intensity set. However, data processing of multiple small-wedge data sets is challenging. Here, a new open-source data-processing pipeline, KAMO, which utilizes existing programs, including the XDS and CCP4 packages, has been developed to automate whole data-processing tasks in the case of multiple small-wedge data sets. Firstly, KAMO processes individual data sets and collates those indexed with equivalent unit-cell parameters. The space group is then chosen and any indexing ambiguity is resolved. Finally, clustering is performed, followed by merging with outlier rejections, and a report is subsequently created. Using synthetic and several real-world data sets collected from hundreds of crystals, it was demonstrated that merged structure-factor amplitudes can be obtained in a largely automated manner using KAMO, which greatly facilitated the structure analyses of challenging targets that only produced microcrystals.
A baroclinic quasigeostrophic open ocean model
NASA Technical Reports Server (NTRS)
Miller, R. N.; Robinson, A. R.; Haidvogel, D. B.
1983-01-01
A baroclinic quasigeostrophic open ocean model is presented, calibrated by a series of test problems, and demonstrated to be feasible and efficient for application to realistic mid-oceanic mesoscale eddy flow regimes. Two methods of treating the depth dependence of the flow, a finite difference method and a collocation method, are tested and intercompared. Sample Rossby wave calculations with and without advection are performed with constant stratification and two levels of nonlinearity, one weaker than and one typical of real ocean flows. Using exact analytical solutions for comparison, the accuracy and efficiency of the model are tabulated as functions of the computational parameters and stability limits set; typically, errors were controlled to between 1 and 10 percent RMS after two wave periods. Further Rossby wave tests with realistic stratification and wave parameters chosen to mimic real ocean conditions were performed to determine computational parameters for use with real and simulated data. Finally, a prototype calculation with quasiturbulent simulated data was performed successfully, which demonstrates the practicality of the model for scientific use.
Alfaro, Sadek Crisóstomo Absi; Cayo, Eber Huanca
2012-01-01
The present study shows the relationship between welding quality and the optical-acoustic emissions from electric arcs during welding runs in the GMAW-S process. Bead-on-plate welding tests were carried out with pre-set parameters chosen from manufacturing standards. During the welding runs, interferences were induced on the welding path using paint, grease or gas faults. In each welding run, arc voltage, welding current, infrared and acoustic emission values were acquired, and parameters such as arc power, acoustic peak rate and infrared radiation rate were computed. Data fusion algorithms were developed for assessing known welding quality parameters from arc emissions. These algorithms showed better responses when they were based on more than just one sensor. Finally, it was concluded that there is a close relation between arc emissions and welding quality, and that it can be measured through arc emission sensing and data fusion algorithms. PMID:22969330
NASA Astrophysics Data System (ADS)
Lu, Lin; Chang, Yunlong; Li, Yingmin; Lu, Ming
2013-05-01
An orthogonal experiment was conducted, and a multivariate nonlinear regression equation was used to model the influence of an external transverse magnetic field and of the Ar flow rate on welding quality in high-speed argon tungsten-arc welding (TIG) of condenser pipe. The magnetic induction and the Ar flow rate were used as the optimization variables, the tensile strength of the weld was set as the objective function on the basis of genetic-algorithm theory, and an optimal design was carried out. The optimization variables were constrained according to the requirements of physical production. The genetic algorithm in MATLAB was used for the computation, and a comparison between the optimized results and the experimental parameters was made. The results showed that optimal process parameters can be chosen by means of a genetic algorithm even with many optimization variables, as in high-speed welding, and the optimized welding parameters coincided with the experimental results.
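A hedged Python sketch of the optimization step (the study used MATLAB's genetic algorithm): a population-based global optimizer maximizes a regression-fitted tensile-strength model over magnetic induction and Ar flow rate within production bounds. The quadratic model and all coefficients are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

def tensile_strength(x):
    """Hypothetical regression model sigma(B, Q) standing in for the
    multivariate nonlinear regression fitted from the orthogonal experiment."""
    B, Q = x  # magnetic induction [mT], Ar flow rate [L/min]
    return 400.0 + 12.0 * B - 0.9 * B**2 + 8.0 * Q - 0.5 * Q**2

bounds = [(0.0, 12.0), (5.0, 15.0)]  # production constraints (assumed)
# differential_evolution is a population-based global optimizer in the same
# spirit as a genetic algorithm; we minimize the negative strength.
result = differential_evolution(lambda x: -tensile_strength(x), bounds, seed=0)
B_opt, Q_opt = result.x
print(f"B* = {B_opt:.2f} mT, Q* = {Q_opt:.2f} L/min, sigma* = {-result.fun:.1f} MPa")
```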
Xu, Zhihao; Li, Jason; Zhou, Joe X
2012-01-01
Aggregate removal is one of the most important aspects in monoclonal antibody (mAb) purification. Cation-exchange chromatography (CEX), a widely used polishing step in mAb purification, is able to clear both process-related impurities and product-related impurities. In this study, with the implementation of quality by design (QbD), a process development approach for robust removal of aggregates using CEX is described. First, resin screening studies were performed and a suitable CEX resin was chosen because of its relatively better selectivity and higher dynamic binding capacity. Second, a pH-conductivity hybrid gradient elution method for the CEX was established, and the risk assessment for the process was carried out. Third, a process characterization study was used to evaluate the impact of the potentially important process parameters on the process performance with respect to aggregate removal. Accordingly, a process design space was established. Aggregate level in load is the critical parameter. Its operating range is set at 0-3% and the acceptable range is set at 0-5%. Equilibration buffer is the key parameter. Its operating range is set at 40 ± 5 mM acetate, pH 5.0 ± 0.1, and acceptable range is set at 40 ± 10 mM acetate, pH 5.0 ± 0.2. Elution buffer, load mass, and gradient elution volume are non-key parameters; their operating ranges and acceptable ranges are equally set at 250 ± 10 mM acetate, pH 6.0 ± 0.2, 45 ± 10 g/L resin, and 10 ± 20% CV respectively. Finally, the process was scaled up 80 times and the impurities removal profiles were revealed. Three scaled-up runs showed that the size-exclusion chromatography (SEC) purity of the CEX pool was 99.8% or above and the step yield was above 92%, thereby proving that the process is both consistent and robust.
Subramanian, Swetha; Mast, T Douglas
2015-10-07
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
Parameters of Models of Structural Transformations in Alloy Steel Under Welding Thermal Cycle
NASA Astrophysics Data System (ADS)
Kurkin, A. S.; Makarov, E. L.; Kurkin, A. B.; Rubtsov, D. E.; Rubtsov, M. E.
2017-05-01
A mathematical model of structural transformations in an alloy steel under the thermal cycle of multipass welding is suggested for computer implementation. The minimum necessary set of parameters for describing the transformations under heating and cooling is determined. Ferritic-pearlitic, bainitic and martensitic transformations under cooling of a steel are considered. A method for deriving the necessary temperature and time parameters of the model from the chemical composition of the steel is described. Published data are used to derive regression models of the temperature ranges and parameters of transformation kinetics in alloy steels. It is shown that the disadvantages of the active visual methods of analysis of the final phase composition of steels are responsible for inaccuracy and mismatch of published data. The hardness of a specimen, which correlates with some other mechanical properties of the material, is chosen as the most objective and reproducible criterion of the final phase composition. The models developed are checked by a comparative analysis of computational results and experimental data on the hardness of 140 alloy steels after cooling at various rates.
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because minimum information required in regression on chemical data for the estimation of model parameters by regression is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front near a time chosen by adding the inverse of an hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
Quantum state engineering using one-dimensional discrete-time quantum walks
NASA Astrophysics Data System (ADS)
Innocenti, Luca; Majury, Helena; Giordani, Taira; Spagnolo, Nicolò; Sciarrino, Fabio; Paternostro, Mauro; Ferraro, Alessandro
2017-12-01
Quantum state preparation in high-dimensional systems is an essential requirement for many quantum-technology applications. The engineering of an arbitrary quantum state is, however, typically strongly dependent on the experimental platform chosen for implementation, and a general framework is still missing. Here we show that coined quantum walks on a line, which represent a framework general enough to encompass a variety of different platforms, can be used for quantum state engineering of arbitrary superpositions of the walker's sites. We achieve this goal by identifying a set of conditions that fully characterize the reachable states in the space comprising walker and coin and providing a method to efficiently compute the corresponding set of coin parameters. We assess the feasibility of our proposal by identifying a linear optics experiment based on photonic orbital angular momentum technology.
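A minimal numerical sketch of a coined walk on a line: the state lives in the walker⊗coin space, and each step applies a coin operation followed by the conditional shift. The state-engineering scheme described above amounts to choosing a different coin at each step; for brevity this sketch assumes a single fixed Hadamard coin.

```python
import numpy as np

n_sites, n_steps = 41, 15
origin = n_sites // 2

# State psi[x, c]: amplitude at site x with coin state c in {0, 1}.
psi = np.zeros((n_sites, 2), dtype=complex)
psi[origin, 0] = 1.0

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard coin (one fixed choice;
                                              # step-dependent coins enable engineering)

for _ in range(n_steps):
    psi = psi @ H.T                 # coin operation on every site
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]    # coin 0 moves the walker right
    shifted[:-1, 1] = psi[1:, 1]    # coin 1 moves the walker left
    psi = shifted

prob = (np.abs(psi) ** 2).sum(axis=1)
print(prob.round(3))  # asymmetric distribution characteristic of the Hadamard walk
```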
Analysis of air quality management with emphasis on transportation sources
NASA Technical Reports Server (NTRS)
English, T. D.; Divita, E.; Lees, L.
1980-01-01
The current environment and practices of air quality management were examined for three regions: Denver, Phoenix, and the South Coast Air Basin of California. These regions were chosen because the majority of their air pollution emissions are related to mobile sources. The impact of auto exhaust on the air quality management process is characterized and assessed. An examination of the uncertainties in air pollutant measurements, emission inventories, meteorological parameters, atmospheric chemistry, and air quality simulation models is performed. The implications of these uncertainties to current air quality management practices is discussed. A set of corrective actions are recommended to reduce these uncertainties.
Code of Federal Regulations, 2013 CFR
2013-07-01
... NEW STATIONARY SOURCES Standards of Performance for Stationary Combustion Turbines Performance Tests... of NOX emission controls in accordance with § 60.4340, the appropriate parameters must be...
Code of Federal Regulations, 2012 CFR
2012-07-01
... NEW STATIONARY SOURCES Standards of Performance for Stationary Combustion Turbines Performance Tests... of NOX emission controls in accordance with § 60.4340, the appropriate parameters must be...
Code of Federal Regulations, 2014 CFR
2014-07-01
... NEW STATIONARY SOURCES Standards of Performance for Stationary Combustion Turbines Performance Tests... of NOX emission controls in accordance with § 60.4340, the appropriate parameters must be...
Code of Federal Regulations, 2010 CFR
2010-07-01
... NEW STATIONARY SOURCES Standards of Performance for Stationary Combustion Turbines Performance Tests... of NOX emission controls in accordance with § 60.4340, the appropriate parameters must be...
Hassani, S. A.; Oemisch, M.; Balcarras, M.; Westendorff, S.; Ardid, S.; van der Meer, M. A.; Tiesinga, P.; Womelsdorf, T.
2017-01-01
Noradrenaline is believed to support cognitive flexibility through the alpha 2A noradrenergic receptor (a2A-NAR) acting in prefrontal cortex. Enhanced flexibility has been inferred from improved working memory with the a2A-NA agonist Guanfacine. But it has been unclear whether Guanfacine improves specific attention and learning mechanisms beyond working memory, and whether the drug effects can be formalized computationally to allow single subject predictions. We tested and confirmed these suggestions in a case study with a healthy nonhuman primate performing a feature-based reversal learning task evaluating performance using Bayesian and Reinforcement learning models. In an initial dose-testing phase we found a Guanfacine dose that increased performance accuracy, decreased distractibility and improved learning. In a second experimental phase using only that dose we examined the faster feature-based reversal learning with Guanfacine with single-subject computational modeling. Parameter estimation suggested that improved learning is not accounted for by varying a single reinforcement learning mechanism, but by changing the set of parameter values to higher learning rates and stronger suppression of non-chosen over chosen feature information. These findings provide an important starting point for developing nonhuman primate models to discern the synaptic mechanisms of attention and learning functions within the context of a computational neuropsychiatry framework. PMID:28091572
Vector velocity profiles of the solar wind within expanding magnetic clouds at 1 AU: Some surprises
NASA Astrophysics Data System (ADS)
Wu, C.; Lepping, R. P.; Berdichevsky, D.; Ferguson, T.; Lazarus, A. J.
2002-12-01
We investigated the average vector velocity profile of 36 carefully chosen WIND interplanetary magnetic clouds occurring over about a 7-year period since spacecraft launch, to see if a differential pattern of solar wind flow exists. The cases chosen were clouds whose axes were generally within 45 degrees of the ecliptic plane and whose characteristics were relatively well determined from cloud-parameter (cylindrically symmetric, force-free) fitting. This study was motivated by the desire to understand the manner in which magnetic clouds expand, a well-known phenomenon revealed by most cloud speed profiles at 1 AU. One unexpected and major result was that, even though cloud expansion was confirmed, it was primarily along the Xgse axis; neither the Ygse nor the Zgse velocity component reveals any noteworthy pattern. After splitting the full set of clouds into a north-passing set (spacecraft passing above the cloud, Nn = 21) and a south-passing set (Ns = 15), to study the plasma expansion of the clouds with respect to the position of the observer, it was seen that the Xgse component of velocity differs for these two sets in a rather uniform and measurable way over most of the average cloud's extent. This does not appear to be the case for the Ygse or Zgse velocity components, where little measurable difference, and clearly no pattern, exists across the average cloud between the north and south positions. It is not clear why such a remarkably non-axisymmetric plasma flow pattern should exist within the "average magnetic cloud" at 1 AU. The study continues from the perspective of magnetic cloud coordinate representation.
Sparkle/AM1 Parameters for the Modeling of Samarium(III) and Promethium(III) Complexes.
Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M
2006-01-01
The Sparkle/AM1 model is extended to samarium(III) and promethium(III) complexes. A set of 15 structures of high crystallographic quality (R factor < 0.05), with ligands chosen to be representative of all samarium complexes in the Cambridge Crystallographic Database 2004, CSD, with nitrogen or oxygen directly bonded to the samarium ion, was used as a training set. In the validation procedure, we used a set of 42 other complexes, also of high crystallographic quality. The results show that this parametrization for the Sm(III) ion is similar in accuracy to the previous parametrizations for Eu(III), Gd(III), and Tb(III). On the other hand, promethium is an artificial radioactive element with no stable isotope. So far, there are no promethium complex crystallographic structures in CSD. To circumvent this, we confirmed our previous result that RHF/STO-3G/ECP, with the MWB effective core potential (ECP), appears to be the most efficient ab initio model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. We thus generated a set of 15 RHF/STO-3G/ECP promethium complex structures with ligands chosen to be representative of complexes available in the CSD for all other trivalent lanthanide cations, with nitrogen or oxygen directly bonded to the lanthanide ion. For the 42 samarium(III) complexes and 15 promethium(III) complexes considered, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Ln(III) ion and the ligand atoms of the first sphere of coordination, is 0.07 and 0.06 Å, respectively, a level of accuracy comparable to present day ab initio/ECP geometries, while being hundreds of times faster.
A seasonal Bartlett-Lewis Rectangular Pulse model
NASA Astrophysics Data System (ADS)
Ritschel, Christoph; Agbéko Kpogo-Nuwoklo, Komlan; Rust, Henning; Ulbrich, Uwe; Névir, Peter
2016-04-01
Precipitation time series with a high temporal resolution are needed as input for several hydrological applications, e.g. river runoff or sewer system models. As adequate observational data sets are often not available, simulated precipitation series are used instead. Poisson-cluster models are commonly applied to generate these series, and it has been shown that this class of stochastic precipitation models reproduces important characteristics of observed rainfall well. For the gauge-based case study presented here, the Bartlett-Lewis rectangular pulse model (BLRPM) has been chosen. As certain model parameters vary with season in a midlatitude moderate climate, due to different rainfall mechanisms dominating in winter and summer, model parameters are typically estimated separately for individual seasons or individual months. Here, we suggest a simultaneous parameter estimation for the whole year under the assumption that the seasonal variation of the parameters can be described with harmonic functions. We use an observational precipitation series from Berlin with a high temporal resolution to exemplify the approach. We estimate BLRPM parameters with and without this seasonal extension and compare the results in terms of model performance and robustness of the estimation.
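A sketch of the proposed seasonal extension: each BLRPM parameter is expressed as a first-order harmonic in the day of year, so one coefficient vector (mean plus cosine/sine amplitudes per parameter) is estimated for the whole year. The parameter name and values below are illustrative, not the fitted Berlin values.

```python
import numpy as np

def seasonal_param(t_doy, mean, amp_cos, amp_sin):
    """First-order harmonic seasonal cycle for one model parameter.
    t_doy: day of year (scalar or array)."""
    omega = 2.0 * np.pi / 365.25
    return mean + amp_cos * np.cos(omega * t_doy) + amp_sin * np.sin(omega * t_doy)

# Example: storm arrival rate lambda [1/h] varying over the year
# (coefficients are placeholders, not fitted values).
t = np.arange(1, 366)
lam = seasonal_param(t, mean=0.02, amp_cos=0.005, amp_sin=-0.003)

# In the simultaneous estimation, the fitting objective (e.g., a weighted
# distance between observed and model moments) is evaluated with lam(t) and
# the other harmonic parameters, so all coefficients are fitted at once
# instead of separately per season or month.
```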
Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.
Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K
2014-03-01
Mechanical ventilation carries the risk of ventilator-induced-lung-injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
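Under a first-order model with resistance R and compliance C, pressure-controlled inspiration delivers a tidal volume VT = pI·C·(1 − e^(−tI/RC)); inserting this into the alveolar-ventilation balance gives the nonlinear relation referred to above. The sketch below is a didactic reconstruction under these stated assumptions (including an assumed dead-space term), not the paper's algorithm.

```python
import numpy as np

def required_pi(va_target, rr, ti, R, C, vd=0.15):
    """Inspiratory pressure above PEEP needed to reach a target alveolar
    minute ventilation (simplified first-order model).

    va_target : alveolar minute ventilation [L/min]
    rr        : respiratory rate [1/min]
    ti        : inspiration time [s]
    R, C      : resistance [cmH2O*s/L] and compliance [L/cmH2O]
    vd        : dead-space volume [L] (assumed value)
    """
    vt_needed = va_target / rr + vd            # tidal volume per breath
    fill = 1.0 - np.exp(-ti / (R * C))         # fraction of C*pI delivered in ti
    return vt_needed / (C * fill)

# Example: R = 10 cmH2O*s/L, C = 0.05 L/cmH2O -> time constant RC = 0.5 s.
# Longer ti lowers the required pI at the cost of a shorter tE (= 60/rr - ti),
# which is exactly the trade-off such an algorithm can visualize.
for ti in (0.8, 1.2, 1.6):
    print(ti, round(required_pi(va_target=5.0, rr=15, ti=ti, R=10.0, C=0.05), 1))
```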
The applicability and effectiveness of cluster analysis
NASA Technical Reports Server (NTRS)
Ingram, D. S.; Actkinson, A. L.
1973-01-01
An insight into the characteristics which determine the performance of a clustering algorithm is presented. In order for the techniques examined to accurately cluster data, two conditions must be satisfied simultaneously: first, the data must have a particular structure, and second, the parameters chosen for the clustering algorithm must be correct. By examining the structure of the data from the C1 flight line, it is clear that no single set of parameters can be used to accurately cluster all the different crops. The effectiveness of either a noniterative or an iterative clustering algorithm in accurately clustering data representative of the C1 flight line is therefore questionable. Thus, extensive a priori knowledge is required in order to use cluster analysis in its present form for applications like assisting in the definition of field boundaries and evaluating the homogeneity of a field. New or modified techniques are necessary for clustering to be a reliable tool.
Quantitative evaluation of first-order retardation corrections to the quarkonium spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brambilla, N.; Prosperi, G.M.
1992-08-01
We evaluate numerically first-order retardation corrections for some charmonium and bottomonium masses under the usual assumption of a Bethe-Salpeter purely scalar confinement kernel. The result depends strictly on the use of an additional effective potential to express the corrections (rather than resorting to Kato perturbation theory) and on an appropriate regularization prescription. The kernel has been chosen in order to reproduce, in the instantaneous approximation, a semirelativistic potential suggested by the Wilson loop method. The calculations are performed for two sets of parameters determined by fits in potential theory. The corrections turn out to be typically of the order of a few hundred MeV and depend on an additional scale parameter introduced in the regularization. A conjecture existing in the literature on the origin of the constant term in the potential is also discussed.
New developments in flash radiography
NASA Astrophysics Data System (ADS)
Mattsson, Arne
2007-01-01
The paper will review some of the latest developments in flash radiography. A series of multi anode tubes has been developed. These are tubes with several x-ray sources within the same vacuum enclosure. The x-ray sources are closely spaced, to come as close as possible to a single source. The x-ray sources are sequentially pulsed, at times that can be independently chosen. Tubes for voltages in the range 150 - 500 kV, with up to eight x-ray sources, will be described. Combining a multi anode tube with an intensified CCD camera, will make it possible to generate short "x-ray movies". A new flash x-ray control system has been developed. The system is operated from a PC or Laptop. All parameters of a multi channel flash x-ray system can be remotely set and monitored. The system will automatically store important operation parameters.
Generalized Ince Gaussian beams
NASA Astrophysics Data System (ADS)
Bandres, Miguel A.; Gutiérrez-Vega, Julio C.
2006-08-01
In this work we present a detailed analysis of the three families of generalized Gaussian beams: the generalized Hermite, Laguerre, and Ince Gaussian beams. The generalized Gaussian beams are not the solution of a Hermitian operator at an arbitrary z plane. We derive the adjoint operator and the adjoint eigenfunctions. Each family of generalized Gaussian beams forms a complete biorthonormal set with its adjoint eigenfunctions; therefore, any paraxial field can be described as a superposition of a generalized family with the appropriate weighting and phase factors. Each family of generalized Gaussian beams includes the standard and elegant corresponding families as particular cases when the parameters of the generalized families are chosen properly. The generalized Hermite Gaussian and Laguerre Gaussian beams correspond to limiting cases of the generalized Ince Gaussian beams when the ellipticity parameter of the latter tends to infinity or to zero, respectively. The expansion formulas among the three generalized families and their Fourier transforms are also presented.
Barañao, P A; Hall, E R
2004-01-01
Activated Sludge Model No 3 (ASM3) was chosen to model an activated sludge system treating effluents from a mechanical pulp and paper mill. The high COD concentration and the high content of readily biodegradable substrates of the wastewater make this model appropriate for this system. ASM3 was calibrated based on batch respirometric tests using fresh wastewater and sludge from the treatment plant, and on analytical measurements of COD, TSS and VSS. The model, developed for municipal wastewater, was found suitable for fitting a variety of respirometric batch tests, performed at different temperatures and food to microorganism ratios (F/M). Therefore, a set of calibrated parameters, as well as the wastewater COD fractions, was estimated for this industrial wastewater. The majority of the calibrated parameters were in the range of those found in the literature.
Self-Organized Dynamic Flocking Behavior from a Simple Deterministic Map
NASA Astrophysics Data System (ADS)
Krueger, Wesley
2007-10-01
Coherent motion exhibiting large-scale order, such as flocking, swarming, and schooling behavior in animals, can arise from simple rules applied to an initial random array of self-driven particles. We present a completely deterministic dynamic map that exhibits emergent, collective, complex motion for a group of particles. Each individual particle is driven with a constant speed in two dimensions adopting the average direction of a fixed set of non-spatially related partners. In addition, the particle changes direction by π as it reaches a circular boundary. The dynamical patterns arising from these rules range from simple circular-type convective motion to highly sophisticated, complex, collective behavior which can be easily interpreted as flocking, schooling, or swarming depending on the chosen parameters. We present the results as a series of short movies and we also explore possible order parameters and correlation functions capable of quantifying the resulting coherence.
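The stated rules translate into a compact deterministic map; the sketch below freezes a random partner assignment, updates each heading to the partners' mean direction, and turns a particle by π at the circular boundary. All parameter values are illustrative.

```python
import numpy as np

N, n_partners, speed, radius, steps = 100, 5, 0.01, 1.0, 500
rng = np.random.default_rng(7)

pos = rng.uniform(-0.5, 0.5, size=(N, 2))
theta = rng.uniform(0, 2 * np.pi, size=N)
# Fixed, non-spatial partner assignment: chosen once, never updated.
partners = np.array([rng.choice(np.delete(np.arange(N), i), n_partners, replace=False)
                     for i in range(N)])

for _ in range(steps):
    # Average the partners' directions as unit vectors (deterministic update).
    mean_vec = np.exp(1j * theta)[partners].mean(axis=1)
    theta = np.angle(mean_vec)
    pos += speed * np.column_stack((np.cos(theta), np.sin(theta)))
    # Reverse direction (turn by pi) on reaching the circular boundary.
    r = np.linalg.norm(pos, axis=1)
    outside = r >= radius
    theta[outside] += np.pi
    pos[outside] *= (radius / r[outside])[:, None]  # keep walkers inside the disk

# Candidate order parameter: magnitude of the mean heading,
# ~1 for coherent flocking, ~0 for incoherent motion.
print(abs(np.exp(1j * theta).mean()))
```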
Isele-Holder, Rolf E; Mitchell, Wayne; Ismail, Ahmed E
2012-11-07
For inhomogeneous systems with interfaces, the inclusion of long-range dispersion interactions is necessary to achieve consistency between molecular simulation calculations and experimental results. For accurate and efficient incorporation of these contributions, we have implemented a particle-particle particle-mesh Ewald solver for dispersion (r(-6)) interactions into the LAMMPS molecular dynamics package. We demonstrate that the solver's O(N log N) scaling behavior allows its application to large-scale simulations. We carefully determine a set of parameters for the solver that provides accurate results and efficient computation. We perform a series of simulations with Lennard-Jones particles, SPC/E water, and hexane to show that with our choice of parameters the dependence of physical results on the chosen cutoff radius is removed. Physical results and computation time of these simulations are compared to results obtained using either a plain cutoff or a traditional Ewald sum for dispersion.
NASA Astrophysics Data System (ADS)
Vujović, D.; Paskota, M.; Todorović, N.; Vučković, V.
2015-07-01
The pre-convective atmosphere over Serbia during the ten-year period 2001-2010 was investigated using radiosonde data from one meteorological station and thunderstorm observations from thirteen SYNOP meteorological stations. Several stability indices were examined in order to verify their ability to forecast thunderstorms. Rank sum scores (RSSs) were used to identify the indices and parameters which can differentiate between thunderstorm and no-thunderstorm events. The following indices had the best RSS values: Lifted index (LI), K index (KI), Showalter index (SI), Boyden index (BI), Total totals (TT), dew-point temperature and mixing ratio. A threshold value test was used to determine appropriate threshold values for these variables; the threshold with the best skill scores was chosen as optimal. The thresholds were validated in two ways: through a control data set, and by comparing the calculated index thresholds with the index values for a randomly chosen day with an observed thunderstorm. The index with the highest skill for thunderstorm forecasting was LI, followed by SI, KI and TT. The BI had the poorest skill scores.
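The threshold value test can be illustrated by scanning candidate thresholds and scoring each with a categorical skill measure such as the Heidke skill score; the index values and the threshold grid below are synthetic placeholders.

```python
import numpy as np

def heidke_skill_score(hits, misses, false_alarms, correct_negs):
    """HSS from a 2x2 contingency table."""
    a, b, c, d = hits, false_alarms, misses, correct_negs
    n = a + b + c + d
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    return (a + d - expected) / (n - expected)

def best_threshold(index_values, thunderstorm_observed, candidates, below=True):
    """Pick the threshold with the highest HSS. For the Lifted Index,
    a thunderstorm is forecast when the index is *below* the threshold."""
    scores = []
    for thr in candidates:
        forecast = index_values < thr if below else index_values > thr
        a = np.sum(forecast & thunderstorm_observed)
        b = np.sum(forecast & ~thunderstorm_observed)
        c = np.sum(~forecast & thunderstorm_observed)
        d = np.sum(~forecast & ~thunderstorm_observed)
        scores.append(heidke_skill_score(a, c, b, d))
    i = int(np.argmax(scores))
    return candidates[i], scores[i]

# Synthetic example: LI tends to be lower on observed thunderstorm days.
rng = np.random.default_rng(3)
obs = rng.random(1000) < 0.3
li = np.where(obs, rng.normal(-2, 2, 1000), rng.normal(2, 2, 1000))
print(best_threshold(li, obs, candidates=np.arange(-6.0, 6.5, 0.5)))
```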
NASA Astrophysics Data System (ADS)
Ray, Shonket; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina
2016-03-01
This work details a methodology to obtain optimal parameter values for a locally-adaptive texture analysis algorithm that extracts mammographic texture features representative of breast parenchymal complexity for predicting false-positive (FP) recalls from breast cancer screening with digital mammography. The algorithm has two components: (1) adaptive selection of localized regions of interest (ROIs) and (2) Haralick texture feature extraction via Gray-Level Co-Occurrence Matrices (GLCM). The following parameters were systematically varied: mammographic views used, upper limit of the ROI window size used for adaptive ROI selection, GLCM distance offsets, and gray levels (binning) used for feature extraction. For each parameter set, logistic regression with stepwise feature selection was performed on a clinical screening cohort of 474 non-recalled women and 68 FP-recalled women; FP recall prediction was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC), and associations between the extracted features and FP recall were assessed via odds ratios (OR). A default instance of the mediolateral oblique (MLO) view, an upper ROI size limit of 143.36 mm (2048 pixels), a GLCM distance offset range of 0.07 to 0.84 mm (1 to 12 pixels) and 16 GLCM gray levels was set. The highest ROC performance, AUC = 0.77 [95% confidence interval: 0.71-0.83], was obtained at three specific instances: the default instance, an upper ROI window of 17.92 mm (256 pixels), and gray levels set to 128. The texture feature of sum average was chosen as a statistically significant (p < 0.05) predictor and was associated with higher odds of FP recall in 12 out of 14 total instances.
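A hedged sketch of the feature-extraction component: a GLCM is built at given pixel offsets and gray-level count with scikit-image, and the Haralick "sum average" feature (not provided by graycoprops) is computed directly from it. The ROI is synthetic; the offsets and 16 gray levels mirror the default instance described above.

```python
import numpy as np
from skimage.feature import graycomatrix

def sum_average(glcm_2d):
    """Haralick sum average: SA = sum_k k * p_{x+y}(k), with k = i + j."""
    levels = glcm_2d.shape[0]
    p = glcm_2d / glcm_2d.sum()
    sa = 0.0
    for k in range(2 * levels - 1):            # k indexes i + j = 0 .. 2(L-1)
        mask = np.add.outer(np.arange(levels), np.arange(levels)) == k
        sa += k * p[mask].sum()
    return sa

rng = np.random.default_rng(0)
roi = rng.integers(0, 16, size=(64, 64), dtype=np.uint8)  # synthetic ROI, 16 gray levels

# Offsets 1..12 pixels at angle 0, 16 gray levels, as in the default instance.
glcm = graycomatrix(roi, distances=list(range(1, 13)), angles=[0.0], levels=16,
                    symmetric=True, normed=False)
features = [sum_average(glcm[:, :, d, 0]) for d in range(glcm.shape[2])]
print(np.round(features, 2))
```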
Acoustic Analysis of PD Speech
Chenausky, Karen; MacAuslan, Joel; Goldhor, Richard
2011-01-01
According to the U.S. National Institutes of Health, approximately 500,000 Americans have Parkinson's disease (PD), with roughly another 50,000 receiving new diagnoses each year. 70%–90% of these people also have the hypokinetic dysarthria associated with PD. Deep brain stimulation (DBS) substantially relieves motor symptoms in advanced-stage patients for whom medication produces disabling dyskinesias. This study investigated speech changes as a result of DBS settings chosen to maximize motor performance. The speech of 10 PD patients and 12 normal controls was analyzed for syllable rate and variability, syllable length patterning, vowel fraction, voice-onset time variability, and spirantization. These were normalized by the controls' standard deviation to represent distance from normal and combined into a composite measure. Results show that DBS settings relieving motor symptoms can improve speech, making it up to three standard deviations closer to normal. However, the clinically motivated settings evaluated here show greater capacity to impair, rather than improve, speech. A feedback device developed from these findings could be useful to clinicians adjusting DBS parameters, as a means for ensuring they do not unwittingly choose DBS settings which impair patients' communication. PMID:21977333
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data is produced in real time from a world space database representing terrain features. The world space database is first preprocessed: a database is formed having one element for each spatial region corresponding to a finest selected level of detail, a multiresolution database is then formed by merging elements, and a strict error metric, independent of the parameters defining the view space, is computed for each element at each level of detail. The multiresolution database and associated strict error metrics are then processed in real time to produce real-time frame representations. View parameters for a view volume, comprising a view location and field of view, are selected, and the strict error metric is converted, using the view parameters, to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation data set. First elements that are at least partially within the view volume are selected from the initial representation data set and placed in a split queue ordered by the value of the view-dependent error metric. It is then determined whether the number of elements in the queue meets or exceeds a predetermined number, or whether the largest error metric is less than or equal to a selected upper error metric bound. If the determination is negative, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting is continued until the determination is positive, forming a first multiresolution set of elements. The first multiresolution set of elements is then output as reduced-resolution view space data representing the terrain features.
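A runnable toy of the split-queue loop on a 1-D "terrain": a max-priority queue keyed on view-dependent error is force split until the element budget is reached or all errors fall below the bound. Here view_error, split, in_view and all values are hypothetical stand-ins for the patented metric conversion and subdivision.

```python
import heapq
import itertools

def view_error(elem, view):
    """Toy view-dependent error: element size scaled by inverse distance
    to the viewpoint (stand-in for the strict-metric conversion)."""
    size, x = elem
    return size / (1.0 + abs(x - view["eye"]))

def in_view(elem, view):
    _, x = elem
    return view["left"] <= x <= view["right"]

def split(elem):
    """Split one element into two half-size children (toy subdivision)."""
    size, x = elem
    return [(size / 2, x - size / 4), (size / 2, x + size / 4)]

def refine(initial, view, max_elements, error_bound):
    counter = itertools.count()  # tie-breaker so the heap never compares elements
    heap = [(-view_error(e, view), next(counter), e)
            for e in initial if in_view(e, view)]
    heapq.heapify(heap)
    while heap:
        neg_err, _, elem = heap[0]
        if len(heap) >= max_elements or -neg_err <= error_bound:
            break                          # "determination positive": stop
        heapq.heappop(heap)                # force split the worst element
        for child in split(elem):
            if in_view(child, view):
                heapq.heappush(heap, (-view_error(child, view), next(counter), child))
    return [item[-1] for item in heap]     # reduced-resolution element set

view = {"eye": 0.0, "left": -4.0, "right": 4.0}
coarse = [(2.0, x) for x in (-3.0, -1.0, 1.0, 3.0)]  # coarsest 1-D elements
print(refine(coarse, view, max_elements=32, error_bound=0.05))
```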
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakariaee, R; Brown, C J; Hamarneh, G
2014-08-15
Dosimetric parameters based on dose-volume histograms (DVH) of contoured structures are routinely used to evaluate dose delivered to target structures and organs at risk. However, the DVH provides no information on the spatial distribution of the dose in situations of repeated fractions with changes in organ shape or size. The aim of this research was to develop methods to more accurately determine geometrically localized, cumulative dose to the bladder wall in intracavitary brachytherapy for cervical cancer. The CT scans and treatment plans of 20 cervical cancer patients were used. Each patient was treated with five high-dose-rate (HDR) brachytherapy fractions of 600 cGy prescribed dose. The bladder inner and outer surfaces were delineated using MIM Maestro software (MIM Software Inc.) and were imported into MATLAB (MathWorks) as 3-dimensional point clouds constituting the "bladder wall". A point-set registration toolbox for MATLAB, Coherent Point Drift (CPD), was used to non-rigidly transform the bladder-wall points from four of the fractions to the coordinate system of the remaining (reference) fraction, which was chosen to be the emptiest bladder for each patient. The doses were accumulated on the reference fraction and new cumulative dosimetric parameters were calculated. The LENT-SOMA toxicity scores of these patients were studied against the cumulative dose parameters. Based on this study, there was no significant correlation between the toxicity scores and the determined cumulative dose parameters.
NASA Astrophysics Data System (ADS)
Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.
2017-12-01
Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin Hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube experiment approach in single-metric assessed performance. However, it is also shown that there are many merits to the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration for specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
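As a sketch of the Latin Hypercube experiment, the sampling step might look like the following; the GR4J parameter bounds and the run_gr4j model wrapper are hypothetical placeholders, and the Nash-Sutcliffe efficiency stands in for one of the six error metrics:

```python
import numpy as np
from scipy.stats import qmc

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 indicates a perfect fit."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical bounds for the four GR4J parameters (x1..x4).
lower = np.array([10.0, -5.0, 1.0, 0.5])
upper = np.array([2000.0, 5.0, 400.0, 10.0])

sampler = qmc.LatinHypercube(d=4, seed=42)
params = qmc.scale(sampler.random(n=500_000), lower, upper)

# scores = [nse(run_gr4j(p, rain, pet), obs_flow) for p in params]
# where run_gr4j is the user's hydrological model wrapper (not shown).
```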
Can we (control) Engineer the degree learning process?
NASA Astrophysics Data System (ADS)
White, A. S.; Censlive, M.; Neilsen, D.
2014-07-01
This paper investigates how control theory could be applied to learning processes in engineering education. The starting point for the analysis is White's Double Loop learning model of human automation control, modified for the education process, where a set of governing principles is chosen, probably by the course designer. After initial training the student unknowingly settles on a mental map or model. After observing how the real world is behaving, a strategy to achieve the governing variables is chosen and a set of actions selected. This may not be a conscious operation; it may be completely instinctive. These actions will cause some consequences, but not until after a certain time delay. The current model is compared with the work of Hollenbeck on goal setting, Nelson's model of self-regulation and that of Abdulwahed, Nagy and Blanchard at Loughborough, who investigated control methods applied to the learning process.
An experimental study of graph connectivity for unsupervised word sense disambiguation.
Navigli, Roberto; Lapata, Mirella
2010-04-01
Word sense disambiguation (WSD), the task of identifying the intended meanings (senses) of words in context, has been a long-standing research objective for natural language processing. In this paper, we are concerned with graph-based algorithms for large-scale WSD. Under this framework, finding the right sense for a given word amounts to identifying the most "important" node among the set of graph nodes representing its senses. We introduce a graph-based WSD algorithm which has few parameters and does not require sense-annotated data for training. Using this algorithm, we investigate several measures of graph connectivity with the aim of identifying those best suited for WSD. We also examine how the chosen lexicon and its connectivity influence WSD performance. We report results on standard data sets and show that our graph-based approach performs comparably to the state of the art.
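A minimal sketch of graph-based sense selection using networkx, with a few illustrative connectivity measures; the paper evaluates a broader set of local and global measures, and the graph construction over the sense inventory is not shown:

```python
import networkx as nx

def disambiguate(sense_graph, candidate_senses, measure="pagerank"):
    """Pick the 'most important' sense node among a word's candidates.

    sense_graph: undirected graph over sense-inventory nodes.
    candidate_senses: nodes representing the target word's senses.
    """
    if measure == "pagerank":
        scores = nx.pagerank(sense_graph)
    elif measure == "degree":
        scores = nx.degree_centrality(sense_graph)
    else:
        scores = nx.betweenness_centrality(sense_graph)
    return max(candidate_senses, key=lambda s: scores.get(s, 0.0))
```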
a Model for Brand Competition Within a Social Network
NASA Astrophysics Data System (ADS)
Huerta-Quintanilla, R.; Canto-Lugo, E.; Rodríguez-Achach, M.
An agent-based model was built representing an economic environment in which m brands compete for a product market. The agents represent companies that interact within a social network in which an agent persuades others to update or shift the brands of the products they are using. Decision rules were established that caused each agent to react according to the economic benefit it would receive; agents updated or shifted brands only if it was beneficial. Each agent can hold only one of the m possible brands, and it can interact with its two nearest neighbors and with another set of agents chosen according to a particular set of rules in the network topology. An absorbing state was always reached in which a single brand monopolized the network (a state known as condensation). The variation of the condensation time as a function of the model parameters is studied, including an analysis of brand competition on different networks.
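A toy sketch of such dynamics, assuming a ring topology with occasional random long-range contacts and a random payoff; the paper's decision rules and network construction are more specific than this stand-in:

```python
import random

def simulate(n_agents=200, m_brands=3, p_long_range=0.1, seed=1):
    """Toy brand-competition dynamics: each step a random agent adopts a
    contact's brand if a (random) economic benefit makes it worthwhile.
    Runs until one brand monopolizes the network (condensation)."""
    rng = random.Random(seed)
    brands = [rng.randrange(m_brands) for _ in range(n_agents)]
    steps = 0
    while len(set(brands)) > 1:
        i = rng.randrange(n_agents)
        if rng.random() < p_long_range:
            j = rng.randrange(n_agents)          # long-range contact
        else:
            j = (i + rng.choice((-1, 1))) % n_agents  # ring neighbor
        benefit = rng.random() - 0.5             # toy economic benefit
        if benefit > 0:
            brands[i] = brands[j]                # shift only if beneficial
        steps += 1
    return brands[0], steps                      # winning brand, time
```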
NASA Astrophysics Data System (ADS)
Chen, X.; Huang, G.
2017-12-01
In recent years, distributed hydrological models have been widely used in storm water management, water resources protection and related applications, so how to evaluate model uncertainty reasonably and efficiently has become a topic of great interest. In this paper, the Soil and Water Assessment Tool (SWAT) model is constructed for the study area of China's Feilaixia watershed, and the uncertainty of the runoff simulation is analyzed in depth with the GLUE method. Taking the initial parameter range of the GLUE method as the research core, the influence of different initial parameter ranges on model uncertainty is studied. Two sets of parameter ranges are chosen as the object of study: the first (range 1) is recommended by SWAT-CUP and the second (range 2) is calibrated by SUFI-2. The results showed that under the same number of simulations (10,000), the overall uncertainty obtained with range 2 is less than with range 1. Specifically, the number of "behavioral" parameter sets is 10,000 for range 2 and 4,448 for range 1. In the calibration and the validation, the ratio of P-factor to R-factor is 1.387 and 1.391 for range 1, and 1.405 and 1.462 for range 2, respectively. In addition, the simulation results for range 2 are better, with NS and R2 slightly higher than for range 1. It can therefore be concluded that using the parameter range calibrated by SUFI-2 as the initial parameter range for GLUE is an effective way to capture and evaluate simulation uncertainty.
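A sketch of the GLUE filtering step under common choices (a fixed likelihood threshold and likelihood-weighted quantile bounds); the likelihood measure and threshold used in the paper are not specified here:

```python
import numpy as np

def glue_bounds(param_sets, likelihoods, predictions, threshold=0.5,
                q=(0.025, 0.975)):
    """Keep 'behavioral' parameter sets (likelihood above threshold) and
    derive likelihood-weighted prediction bounds at each time step."""
    keep = likelihoods >= threshold
    w = likelihoods[keep]
    w = w / w.sum()                       # normalized weights
    behavioral = predictions[keep]        # shape (n_kept, n_times)
    lower, upper = [], []
    for t in range(behavioral.shape[1]):
        order = np.argsort(behavioral[:, t])
        cdf = np.cumsum(w[order])
        lower.append(behavioral[order, t][np.searchsorted(cdf, q[0])])
        upper.append(behavioral[order, t][np.searchsorted(cdf, q[1])])
    return param_sets[keep], np.array(lower), np.array(upper)
```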
NASA Technical Reports Server (NTRS)
Townes, C. H.
1979-01-01
Searches for extraterrestrial intelligence concentrate on attempts to receive signals in the microwave region, the argument being that communication occurs there at minimum broadcast power. Such a conclusion is shown to hold only under a restricted set of assumptions. If generalized types of detection are considered, in particular photon detection rather than linear detection alone, and if advantage is taken of the directivity of telescopes at short wavelengths, then somewhat less power is required for communication at infrared wavelengths than in the microwave region. Furthermore, a variety of parameters other than power alone can be chosen for optimization by an extraterrestrial civilization.
Towards General Evaluation of Intelligent Systems: Lessons Learned from Reproducing AIQ Test Results
NASA Astrophysics Data System (ADS)
Vadinský, Ondřej
2018-03-01
This paper attempts to replicate the results of evaluating several artificial agents using the Algorithmic Intelligence Quotient (AIQ) test originally reported by Legg and Veness. Three experiments were conducted: one using default settings, one in which the action space was varied and one in which the observation space was varied. While the performance of freq, Q0, Qλ, and HLQλ corresponded well with the original results, the resulting values differed when using MC-AIXI. Varying the observation space seems to have no qualitative impact on the results as reported, while (contrary to the original results) varying the action space seems to have some impact. An analysis of the impact of modifying parameters of MC-AIXI on its performance in the default settings was carried out with the help of data-mining techniques used to identify high-performing configurations. Overall, the Algorithmic Intelligence Quotient test seems to be reliable; however, as a general artificial intelligence evaluation method it has several limits. The test is dependent on the chosen reference machine and is also sensitive to changes to its settings. It brings out some differences among agents; however, since they are limited in size, the test setting may not yet be sufficiently complex. A demanding parameter sweep is needed to thoroughly evaluate configurable agents, which, together with the test format, further highlights the computational requirements of an agent. These and other issues are discussed in the paper along with proposals suggesting how to alleviate them. An implementation of some of the proposals is also demonstrated.
Estimation of the transmission dynamics of African swine fever virus within a swine house.
Nielsen, J P; Larsen, T S; Halasa, T; Christiansen, L E
2017-10-01
The spread of African swine fever virus (ASFV) threatens to reach further parts of Europe. In countries with a large swine production, an outbreak of ASF may result in devastating economic consequences for the swine industry. Simulation models can assist decision makers in setting up contingency plans, which creates a need for estimates of the transmission parameters. This study presents a new analysis of a previously published study. A full likelihood framework is presented, including the impact of model assumptions on the estimated transmission parameters. As animals were only tested every other day, an interpretation was introduced to cover the weighted infectiousness on unobserved days for the individual animals (WIU). Based on our model and the set of assumptions, the within- and between-pen transmission parameters were estimated to be β_w = 1.05 (95% CI 0.62-1.72) and β_b = 0.46 (95% CI 0.17-1.00), respectively, with WIU = 1.00 (95% CI 0-1). Furthermore, we simulated the spread of ASFV within a pig house using a modified SEIR model to establish the time from infection of one animal until ASFV is detected in the herd. Based on a chosen detection limit of 2.55% mortality, equivalent to 10 dead pigs out of 360, the disease would be detected 13-19 days after introduction.
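A minimal chain-binomial SEIR-with-deaths sketch of the detection-time experiment; all rates except the reported within-pen transmission parameter are illustrative placeholders, not the paper's estimates:

```python
import numpy as np

def days_to_detection(n=360, beta=1.05, sigma=1/4, gamma=1/8, mu=0.9,
                      detect_frac=0.0255, seed=0):
    """Daily-step stochastic SEIR(-D) within one pig house. Returns the
    day on which cumulative deaths reach the mortality detection limit.
    sigma, gamma, mu (latency, removal, case fatality) are placeholders."""
    rng = np.random.default_rng(seed)
    S, E, I, R, D = n - 1, 1, 0, 0, 0
    for day in range(1, 366):
        new_E = rng.binomial(S, 1 - np.exp(-beta * I / n))
        new_I = rng.binomial(E, 1 - np.exp(-sigma))
        removed = rng.binomial(I, 1 - np.exp(-gamma))
        new_D = rng.binomial(removed, mu)   # deaths among removals
        S -= new_E; E += new_E - new_I; I += new_I - removed
        R += removed - new_D; D += new_D
        if D >= detect_frac * n:
            return day
    return None  # not detected within a year
```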
Double dissociation of value computations in orbitofrontal and anterior cingulate neurons
Kennerley, Steven W.; Behrens, Timothy E. J.; Wallis, Jonathan D.
2011-01-01
Damage to prefrontal cortex (PFC) impairs decision-making, but the underlying value computations that might cause such impairments remain unclear. Here we report that value computations are doubly dissociable within PFC neurons. While many PFC neurons encoded chosen value, they used opponent encoding schemes such that averaging the neuronal population eliminated value coding. However, a special population of neurons in anterior cingulate cortex (ACC) - but not orbitofrontal cortex (OFC) - multiplexed chosen value across decision parameters using a unified encoding scheme, and encoded reward prediction errors. In contrast, neurons in OFC - but not ACC - encoded chosen value relative to the recent history of choice values. Together, these results suggest complementary valuation processes across PFC areas: OFC neurons dynamically evaluate current choices relative to recent choice values, while ACC neurons encode choice predictions and prediction errors using a common valuation currency reflecting the integration of multiple decision parameters. PMID:22037498
Impact of experimental design on PET radiomics in predicting somatic mutation status.
Yip, Stephen S F; Parmar, Chintan; Kim, John; Huynh, Elizabeth; Mak, Raymond H; Aerts, Hugo J W L
2017-12-01
PET-based radiomic features have demonstrated great promise in predicting genetic data. However, various experimental parameters can influence the feature extraction pipeline and, hence, the extracted feature values. Here, we investigated how experimental settings affect the performance of radiomic features in predicting somatic mutation status in non-small cell lung cancer (NSCLC) patients. 348 NSCLC patients with somatic mutation testing and diagnostic PET images were included in our analysis. Radiomic feature extraction was analyzed for varying voxel sizes, filters and bin widths. 66 radiomic features were evaluated. The performance of features in predicting mutation status was assessed using the area under the receiver-operating-characteristic curve (AUC). The influence of experimental parameters on feature predictability was quantified as the relative difference between the minimum and maximum AUC (δ). The large majority of features (n=56, 85%) were significantly predictive of EGFR mutation status (AUC≥0.61). 29 radiomic features significantly predicted EGFR mutations and were robust to experimental settings, with δOverall < 5%. For the remaining features, the overall influence (δOverall) of voxel size, filter and bin width ranged from 5% to 15%. For all features, no experimental design could distinguish KRAS+ from KRAS- (AUC≤0.56). The predictability of 29 radiomic features was thus robust to the choice of experimental settings; however, these settings need to be carefully chosen for all other features. The combined effect of the investigated processing methods can be substantial and must be considered. Optimized settings that maximize the predictive performance of individual radiomic features should be investigated in the future. Copyright © 2017 Elsevier B.V. All rights reserved.
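The robustness measure can be computed directly from per-setting AUCs; a sketch, assuming δ is the min-max AUC difference normalized by the maximum (the paper's exact normalization may differ):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def relative_auc_range(feature_by_setting, labels):
    """Quantify a feature's sensitivity to the extraction pipeline:
    delta = (max AUC - min AUC) / max AUC across experimental settings.

    feature_by_setting: dict mapping setting name -> per-patient values.
    labels: binary mutation status per patient.
    """
    aucs = {s: roc_auc_score(labels, v)
            for s, v in feature_by_setting.items()}
    lo, hi = min(aucs.values()), max(aucs.values())
    return (hi - lo) / hi, aucs
```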
Nuttens, V E; Nahum, A E; Lucas, S
2011-01-01
Urethral NTCP has been determined for three prostates implanted with seeds based on (125)I (145 Gy), (103)Pd (125 Gy), (131)Cs (115 Gy), (103)Pd-(125)I (145 Gy), or (103)Pd-(131)Cs (115 Gy or 130 Gy). First, DU(20), the dose such that 20% of the urethral volume receives at least DU(20), is converted into an (125)I LDR-equivalent DU(20) in order to use the urethral NTCP model. Second, the propagation of uncertainties through the steps in the NTCP calculation was assessed in order to identify the parameters responsible for large data uncertainties. Two sets of radiobiological parameters were studied. The NTCP results all fall in the 19%-23% range and are associated with large uncertainties, making the comparison difficult. Depending on the dataset chosen, the ranking of NTCP values among the six seed implants studied changes. Moreover, the large uncertainties on the fitting parameters of the urethral NTCP model result in large uncertainty on the NTCP value. In conclusion, the use of an NTCP model for permanent brachytherapy is feasible, but it is essential that the uncertainties on the parameters in the model be reduced.
Plenis, Alina; Rekowska, Natalia; Bączek, Tomasz
2016-01-01
This article focuses on correlating the column classification obtained from the method created at the Katholieke Universiteit Leuven (KUL), with the chromatographic resolution attained in biomedical separation. In the KUL system, each column is described with four parameters, which enables estimation of the FKUL value characterising similarity of those parameters to the selected reference stationary phase. Thus, a ranking list based on the FKUL value can be calculated for the chosen reference column, then correlated with the results of the column performance test. In this study, the column performance test was based on analysis of moclobemide and its two metabolites in human plasma by liquid chromatography (LC), using 18 columns. The comparative study was performed using traditional correlation of the FKUL values with the retention parameters of the analytes describing the column performance test. In order to deepen the comparative assessment of both data sets, factor analysis (FA) was also used. The obtained results indicated that the stationary phase classes, closely related according to the KUL method, yielded comparable separation for the target substances. Therefore, the column ranking system based on the FKUL-values could be considered supportive in the choice of the appropriate column for biomedical analysis. PMID:26805819
NASA Astrophysics Data System (ADS)
Subbulakshmi, N.; Kumar, M. Saravana; Sheela, K. Juliet; Krishnan, S. Radha; Shanmugam, V. M.; Subramanian, P.
2017-12-01
Electron Paramagnetic Resonance (EPR) spectroscopic studies of VO2+ ions as a paramagnetic impurity in Lithium Sodium Acid Phthalate (LiNaP) single crystals have been carried out at room temperature at X-band microwave frequency. The lattice parameter values of the chosen system were obtained from a single-crystal X-ray diffraction study. Of the many hyperfine lines in the EPR spectra, only two sets are reported from the EPR data. The principal values of the g and A tensors are evaluated for the two different VO2+ sites, I and II. The crystalline field around the VO2+ ion is orthorhombic for both. The site II VO2+ ion is identified as substitutional in place of the Na1 location, and site I is identified as interstitial. For both sites in LiNaP, VO2+ is found in octahedral coordination with tetragonal distortion, as seen from the spin Hamiltonian parameter values. The ground state of the vanadyl ion in the LiNaP single crystal is dxy. Using optical absorption data, the octahedral and tetragonal parameters are calculated. By correlating the EPR and optical data, the molecular orbital bonding parameters have been discussed for both sites.
Exploring the effects of acid mine drainage on diatom teratology using geometric morphometry.
Olenici, Adriana; Blanco, Saúl; Borrego-Ramos, María; Momeu, Laura; Baciu, Călin
2017-10-01
Metal pollution of aquatic habitats is a major and persistent environmental problem. Acid mine drainage (AMD) affects lotic systems in numerous and interactive ways. In the present work, a mining area (Roșia Montană) was chosen as the study site, and we focused on two aims: (i) to find the set of environmental predictors leading to the appearance of abnormal diatom individuals in the study area and (ii) to assess the relationship between the degree of valve outline deformation and AMD-derived pollution. In this context, morphological differences between populations of Achnanthidium minutissimum and A. macrocephalum, including normal and abnormal individuals, were evidenced by means of valve shape analysis. Geometric morphometry managed to capture and discriminate normal and abnormal individuals. Multivariate analyses (NMDS, PLS) separated the four populations of the two species and revealed the main physico-chemical parameters that influenced valve deformation in this context, namely conductivity, Zn, and Cu. An ANOSIM test evidenced statistically significant differences between normal and abnormal individuals within both chosen Achnanthidium taxa. In order to determine the relative contribution of each of the measured physico-chemical parameters to the observed valve outline deformations, a PLS was conducted, confirming the results of the NMDS. The presence of deformed individuals in the study area can be attributed to the diatom communities being strongly affected by AMD released from old mining works and waste rock deposits.
Smartphone-Based System for Learning and Inferring Hearing Aid Settings.
Aldaz, Gabriel; Puria, Sunil; Leifer, Larry J
2016-10-01
Background: Previous research has shown that hearing aid wearers can successfully self-train their instruments' gain-frequency response and compression parameters in everyday situations. Combining hearing aids with a smartphone introduces additional computing power, memory, and a graphical user interface that may enable greater setting personalization. To explore the benefits of self-training with a smartphone-based hearing system, a parameter space was chosen with four possible combinations of microphone mode (omnidirectional and directional) and noise reduction state (active and off). The baseline for comparison was the "untrained system," that is, the manufacturer's algorithm for automatically selecting microphone mode and noise reduction state based on acoustic environment. The "trained system" first learned each individual's preferences, self-entered via a smartphone in real-world situations, to build a trained model. The system then predicted the optimal setting (among available choices) using an inference engine, which considered the trained model and current context (e.g., sound environment, location, and time). Purpose: To develop a smartphone-based prototype hearing system that can be trained to learn preferred user settings, and to determine whether user study participants showed a preference for trained over untrained system settings. Research Design: An experimental within-participants study. Participants used a prototype hearing system, comprising two hearing aids, Android smartphone, and body-worn gateway device, for ~6 weeks. Study Sample: Sixteen adults with mild-to-moderate sensorineural hearing loss (HL) (ten males, six females; mean age = 55.5 yr). Fifteen had ≥6 mo of experience wearing hearing aids, and 14 had previous experience using smartphones. Intervention: Participants were fitted and instructed to perform daily comparisons of settings ("listening evaluations") through a smartphone-based software application called Hearing Aid Learning and Inference Controller (HALIC). In the four-week-long training phase, HALIC recorded individual listening preferences along with sensor data from the smartphone, including environmental sound classification, sound level, and location, to build trained models. In the subsequent two-week-long validation phase, participants performed blinded listening evaluations comparing settings predicted by the trained system ("trained settings") to those suggested by the hearing aids' untrained system ("untrained settings"). Data Collection and Analysis: We analyzed data collected on the smartphone and hearing aids during the study. We also obtained audiometric and demographic information. Results: Overall, the 15 participants with valid data significantly preferred trained settings to untrained settings (paired-samples t test). Seven participants had a significant preference for trained settings, while one had a significant preference for untrained settings (binomial test). The remaining seven participants had nonsignificant preferences. Pooling data across participants, the proportion of times that each setting was chosen in a given environmental sound class was on average very similar. However, breaking down the data by participant revealed strong and idiosyncratic individual preferences. Fourteen participants reported positive feelings of clarity, competence, and mastery when training via HALIC. Conclusions: The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids. 
Individuals who are tech savvy and have milder HL seem well suited to take advantage of the benefits offered by training with a smartphone. American Academy of Audiology
NASA Astrophysics Data System (ADS)
Goyal, M.; Goyal, R.; Bhargava, R.
2017-12-01
In this paper, triple diffusive natural convection under Darcy flow over an inclined plate embedded in a porous medium saturated with a binary base fluid containing nanoparticles and two salts is studied. The model used for the nanofluid incorporates the effects of Brownian motion and thermophoresis. In addition, the thermal energy equations include regular diffusion and cross-diffusion terms. The surface has the heat, mass and nanoparticle fluxes each prescribed as a power law function of the distance along the wall. The boundary layer equations are transformed into a set of ordinary differential equations with the help of group theory transformations. A wide range of parameter values is chosen to bring out the effects of the buoyancy ratio, regular Lewis number and modified Dufour parameters of both salts, and of the nanofluid parameters, at varying angles of inclination. The effects of these parameters on the velocity, temperature, solutal and nanoparticle volume fraction profiles, as well as on the important heat and mass transfer quantities, i.e., the reduced Nusselt, regular and nanofluid Sherwood numbers, are discussed. Such problems find application in the extrusion of metals, polymers and ceramics, the production of plastic films, the insulation of wires and liquid packaging.
NASA Astrophysics Data System (ADS)
Mejid Elsiti, Nagwa; Noordin, M. Y.; Idris, Ani; Saed Majeed, Faraj
2017-10-01
This paper presents an optimization of the process parameters of micro-electrical discharge machining (micro-EDM) with (γ-Fe2O3) nano-powder-mixed dielectric using the multi-response Grey Relational Analysis (GRA) optimization method instead of single-response optimization. The parameters were optimized based on a two-level factorial design combined with Grey Relational Analysis. The machining parameters peak current, gap voltage, and pulse-on time were chosen for experimentation. The performance characteristics chosen for this study are material removal rate (MRR), tool wear rate (TWR), taper and overcut. Experiments were conducted using electrolytic copper as the tool and CoCrMo as the workpiece. The experimental results were improved through this approach.
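A compact sketch of the grey relational computation for mixed larger-is-better (MRR) and smaller-is-better (TWR, taper, overcut) responses, assuming equal response weights and the usual distinguishing coefficient of 0.5:

```python
import numpy as np

def grey_relational_grade(responses, larger_is_better, zeta=0.5):
    """Grey relational analysis for multi-response optimization.

    responses: (n_runs, n_responses) array of measured outcomes.
    larger_is_better: one boolean per response column.
    Returns the grey relational grade per experimental run; the run
    with the highest grade is the preferred parameter combination."""
    r = np.asarray(responses, dtype=float)
    norm = np.empty_like(r)
    for j, larger in enumerate(larger_is_better):
        lo, hi = r[:, j].min(), r[:, j].max()
        norm[:, j] = (r[:, j] - lo) / (hi - lo) if larger \
            else (hi - r[:, j]) / (hi - lo)
    delta = 1.0 - norm                      # deviation from the ideal
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coeff.mean(axis=1)               # equal weights per response
```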
A Multialgorithm Approach to Land Surface Modeling of Suspended Sediment in the Colorado Front Range
Stewart, J. R.; Kasprzyk, J. R.; Rajagopalan, B.; Minear, J. T.; Raseman, W. J.
2017-01-01
A new paradigm of simulating suspended sediment load (SSL) with a Land Surface Model (LSM) is presented here. Five erosion and SSL algorithms were applied within a common LSM framework to quantify uncertainties and evaluate predictability in two steep, forested catchments (>1,000 km2). The algorithms were chosen from among widely used sediment models, including empirically based: monovariate rating curve (MRC) and the Modified Universal Soil Loss Equation (MUSLE); stochastically based: the Load Estimator (LOADEST); conceptually based: the Hydrologic Simulation Program-Fortran (HSPF); and physically based: the Distributed Hydrology Soil Vegetation Model (DHSVM). The algorithms were driven by the hydrologic fluxes and meteorological inputs generated from the Variable Infiltration Capacity (VIC) LSM. A multiobjective calibration was applied to each algorithm, and optimized parameter sets were validated over an excluded period, as well as in a transfer experiment to a nearby catchment to explore parameter robustness. Algorithm performance showed consistent decreases when parameter sets were applied to periods with greatly differing SSL variability relative to the calibration period. Of particular interest was a joint calibration of all sediment-algorithm and streamflow parameters simultaneously, which revealed trade-offs between streamflow performance and the partitioning of runoff and base flow to optimize SSL timing, decreasing the flexibility and robustness of the streamflow simulation in adapting to different time periods. Parameter transferability to another catchment was most successful for the more process-oriented algorithms, the HSPF and the DHSVM. This first-of-its-kind multialgorithm sediment scheme offers a unique capability to portray acute episodic loading while quantifying trade-offs and uncertainties across a range of algorithm structures. PMID:29399268
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. The next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large and small sample behavior of the sequential adaptive procedure.
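The posterior-probability update at the heart of such a procedure is standard Bayes' rule; a generic sketch with Gaussian predictive densities (the report's linear-model specifics are omitted):

```python
import numpy as np
from scipy.stats import norm

def model_posteriors(priors, predictive_means, predictive_sds, y_new):
    """Update posterior probabilities of competing models after one new
    observation, weighting each prior by the model's predictive density."""
    priors = np.asarray(priors, dtype=float)
    like = np.array([norm.pdf(y_new, m, s)
                     for m, s in zip(predictive_means, predictive_sds)])
    post = priors * like
    return post / post.sum()

# Example: two models, equal priors; the observation favors model 2.
print(model_posteriors([0.5, 0.5], [0.0, 1.0], [1.0, 1.0], y_new=0.9))
```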
Method for leveling the power output of an electromechanical battery as a function of speed
Post, R.F.
1999-03-16
The invention is a method of leveling the power output of an electromechanical battery during its discharge, while at the same time maximizing its power output into a given load. The method employs the concept of series resonance, employing a capacitor the parameters of which are chosen optimally to achieve the desired near-flatness of power output over any chosen charged-discharged speed ratio. Capacitors are inserted in series with each phase of the windings to introduce capacitative reactances that act to compensate the inductive reactance of these windings. This compensating effect both increases the power that can be drawn from the generator before inductive voltage drops in the windings become dominant and acts to flatten the power output over a chosen speed range. The values of the capacitors are chosen so as to optimally flatten the output of the generator over the chosen speed range.
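The basic series-resonance relation behind the method is C = 1/(ω²L); a sketch with illustrative numbers (the patent instead optimizes the capacitor values to flatten output over the whole speed range, not to resonate at a single speed):

```python
import math

def series_capacitance(L_winding, f_resonance):
    """Series capacitance C = 1 / ((2*pi*f)^2 * L) that cancels the
    winding's inductive reactance at the chosen electrical frequency."""
    w = 2 * math.pi * f_resonance
    return 1.0 / (w ** 2 * L_winding)

# Illustrative numbers (not from the patent): a 0.5 mH phase winding
# compensated near the middle of a 2:1 charged-discharged speed range.
print(series_capacitance(L_winding=0.5e-3, f_resonance=1500.0))  # farads
```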
For numerical differentiation, dimensionality can be a blessing!
NASA Astrophysics Data System (ADS)
Anderssen, Robert S.; Hegland, Markus
Finite difference methods, such as the mid-point rule, have been applied successfully to the numerical solution of ordinary and partial differential equations. If such formulas are applied to observational data in order to determine derivatives, the results can be disastrous. The reason for this is that measurement errors, and even rounding errors in computer approximations, are strongly amplified in the differentiation process, especially if small step-sizes are chosen and higher derivatives are required. A number of authors have examined the use of various forms of averaging which allow the stable computation of low order derivatives from observational data. The size of the averaging set acts like a regularization parameter and has to be chosen as a function of the grid size h. In this paper, it is initially shown how first (and higher) order single-variate numerical differentiation of higher dimensional observational data can be stabilized with a smaller loss of accuracy than occurs for the corresponding differentiation of one-dimensional data. The result is then extended to the multivariate differentiation of higher dimensional data. The nature of the trade-off between convergence and stability is explicitly characterized, and the complexity of various implementations is examined.
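A small demonstration of the instability and of averaging as regularization; the window size k plays the role of the regularization parameter discussed above, and the function and noise level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 1001)
h = x[1] - x[0]
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 1e-3, x.size)  # noisy data

# Naive central differences amplify the noise by O(noise / h).
naive = (y[2:] - y[:-2]) / (2 * h)

# Averaging the data over a window of size k before differencing acts
# as a regularizer; k must grow as the noise grows relative to h.
k = 25
smoothed = np.convolve(y, np.ones(k) / k, mode="same")
stabilized = (smoothed[2:] - smoothed[:-2]) / (2 * h)

true = 2 * np.pi * np.cos(2 * np.pi * x[1:-1])
interior = slice(k, -k)  # compare away from convolution edge effects
print(np.abs((naive - true)[interior]).max(),
      np.abs((stabilized - true)[interior]).max())
```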
NASA Technical Reports Server (NTRS)
Merka, J.; Szabo, A.; Narock, T. W.; King, J. H.; Paularena, K. I.; Richardson, J. D.
2003-01-01
The MIT portion of this project was to use the plasma data from IMP 8 to identify bow shock crossings for construction of a bow shock database. In collaboration with Goddard, we determined which shock parameters would be included in the catalog and developed a set of flags for characterizing the data. IMP 8 data from 1973-2001 were surveyed for bow shock crossings; the crossings apparent in the plasma data were compared to a list of crossings chosen from the magnetometer data by Goddard. Differences were reconciled to produce a single list. The data were then provided to the NSSDC for archiving. All the work ascribed to MIT in the proposal was completed.
Effect of microstructure on the elasto-viscoplastic deformation of dual phase titanium structures
NASA Astrophysics Data System (ADS)
Ozturk, Tugce; Rollett, Anthony D.
2018-02-01
The present study is devoted to the creation of a process-structure-property database for dual phase titanium alloys, through a synthetic microstructure generation method and a mesh-free, fast-Fourier-transform-based micromechanical model that operates on a discretized image of the microstructure. A sensitivity analysis is performed as a precursor to determine the statistically representative volume element size for creating 3D synthetic microstructures based on additively manufactured Ti-6Al-4V characteristics, which are further modified to expand the database for features of interest, e.g., lath thickness. Sets of titanium hardening parameters are extracted from the literature, and the relative effect of the chosen microstructural features is quantified through comparisons of average and local field distributions.
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1980-01-01
A method for generating two dimensional finite difference grids about airfoils and other shapes by the use of the Poisson differential equation is developed. The inhomogeneous terms are automatically chosen such that two important effects are imposed on the grid at both the inner and outer boundaries. The first effect is control of the spacing between mesh points along mesh lines intersecting the boundaries. The second effect is control of the angles with which mesh lines intersect the boundaries. A FORTRAN computer program has been written to use this method. A description of the program, a discussion of the control parameters, and a set of sample cases are included.
Modeling of Internet Influence on Group Emotion
NASA Astrophysics Data System (ADS)
Czaplicka, Agnieszka; Hołyst, Janusz A.
Long-range interactions are introduced to a two-dimensional model of agents with time-dependent internal variables e_i = 0, ±1 corresponding to the valencies of agent emotions. Effects of spontaneous emotion emergence and emotional relaxation processes are taken into account. The valence of agent i depends on the valencies of its four nearest neighbors, but it is also influenced by long-range interactions corresponding to social relations developed, for example, through Internet contacts with a randomly chosen community. Two types of such interactions are considered. In the first model the community's emotional influence depends only on the sign of its temporary emotion. When the coupling parameter approaches a critical value, a phase transition takes place, and as a result, for larger coupling constants the mean group emotion of all agents is nonzero over long time periods. In the second model the community influence is proportional to the magnitude of the community's average emotion. The ordered emotional phase is here observed for a narrow set of system parameters.
Vitrac, Olivier; Challe, Blandine; Leblanc, Jean-Charles; Feigenbaum, Alexandre
2007-01-01
The contamination risk in 12 packaged foods from substances released by the plastic contact layer has been evaluated using a novel modeling technique that predicts migration while accounting for (i) possible variations in the time of contact between foodstuffs and packaging and (ii) uncertainty in the physico-chemical parameters used to predict migration. Contamination estimates, which are subject to variability and uncertainty, are derived through a stochastic resolution of the transport equations that control migration into food. Distributions of contact times between packaging materials and foodstuffs were reconstructed from the volumes and frequencies of purchases of a given panel of 6422 households, making assumptions about household storage behaviour. The risk of contamination of the packaged foods was estimated for styrene (a monomer found in polystyrene yogurt pots) and 2,6-di-tert-butyl-4-hydroxytoluene (a representative of the widely used phenolic antioxidants). The results are analysed and discussed with regard to the sensitivity of the model to the set parameters and chosen assumptions.
Fabrication of Titania Nanotubes for Gas Sensing Applications
NASA Astrophysics Data System (ADS)
Dzilal, A. A.; Muti, M. N.; John, O. D.
2010-03-01
Detection of hydrogen is needed for industrial process control and for medical applications where the presence of hydrogen indicates different types of health problems. A titanium dioxide nanotube structure was chosen as the active component in the gas sensor because its electrical resistance is highly sensitive to hydrogen over a wide range of concentrations. The objective of the work is to fabricate good quality titania nanotubes suitable for hydrogen sensing applications. The fabrication method used is anodizing. The anodizing parameters, namely the voltage, time duration, concentration of hydrofluoric acid in water, separation between the electrodes and the ambient temperature, were varied to find the optimum anodizing conditions for the production of good quality titania nanotubes. The highly ordered porous titania nanotubes produced by this method are tubular in shape and have good uniformity and alignment over large areas. From the investigation, a certain set of anodizing parameters was found to produce good quality titania nanotubes with diameters ranging from 47 nm to 94 nm.
NASA Technical Reports Server (NTRS)
Bozyan, Elizabeth P.; Hemenway, Paul D.; Argue, A. Noel
1990-01-01
Observations of a set of 89 extragalactic objects (EGOs) will be made with the Hubble Space Telescope Fine Guidance Sensors and Planetary Camera in order to link the HIPPARCOS Instrumental System to an extragalactic coordinate system. Most of the sources chosen for observation contain compact radio sources and stellarlike nuclei; 65 percent are optical variables beyond a 0.2 mag limit. To ensure proper exposure times, accurate mean magnitudes are necessary. In many cases, the average magnitudes listed in the literature were not adequate. The literature was searched for all relevant photometric information for the EGOs, and photometric parameters were derived, including mean magnitude, maximum range, and timescale of variability. This paper presents the results of that search and the parameters derived. The results will allow exposure times to be estimated such that an observed magnitude different from the tabular magnitude by 0.5 mag in either direction will not degrade the astrometric centering ability on a Planetary Camera CCD frame.
Consistent van der Waals Radii for the Whole Main Group
Mantina, Manjeera; Chamberlin, Adam C.; Valero, Rosendo; Cramer, Christopher J.; Truhlar, Donald G.
2013-01-01
Atomic radii are not precisely defined but are nevertheless widely used parameters in modeling and understanding molecular structure and interactions. The van der Waals radii determined by Bondi from molecular crystals and noble gas crystals are the most widely used values, but Bondi recommended radius values for only 28 of the 44 main-group elements in the periodic table. In the present article we present atomic radii for the other 16; these new radii were determined in a way designed to be compatible with Bondi’s scale. The method chosen is a set of two-parameter correlations of Bondi’s radii with repulsive-wall distances calculated by relativistic coupled-cluster electronic structure calculations. The newly determined radii (in Å) are Be, 1.53; B, 1.92; Al, 1.84; Ca, 2.31; Ge, 2.11; Rb, 3.03; Sr, 2.50; Sb, 2.06; Cs, 3.43; Ba, 2.68; Bi, 2.07; Po, 1.97; At, 2.02; Rn, 2.20; Fr, 3.48; and Ra, 2.83. PMID:19382751
A new approach for the quantitative evaluation of drawings in children with learning disabilities.
Galli, Manuela; Vimercati, Sara Laura; Stella, Giacomo; Caiazzo, Giorgia; Norveti, Federica; Onnis, Francesca; Rigoldi, Chiara; Albertini, Giorgio
2011-01-01
A new method for a quantitative and objective description of drawing and for the quantification of drawing ability in children with learning disabilities (LD) is hereby presented. Twenty-four normally developing children (N) (age 10.6 ± 0.5) and 18 children with learning disabilities (LD) (age 10.3 ± 2.4) took part in the study. The drawing tasks were chosen among those already used in daily clinical experience (Denver Developmental Screening Test). Some parameters were defined in order to quantitatively describe the features of the children's drawings, introducing new objective measurements besides the subjective standard clinical evaluation. The experimental set-up proved to be valid for clinical application with LD children. The parameters highlighted the presence of differences in the drawing features of N and LD children. This paper suggests the applicability of this protocol to other fields of motor and cognitive evaluation, as well as the possibility of studying upper limb position and muscle activation during drawing. Copyright © 2011 Elsevier Ltd. All rights reserved.
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.
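A simplified sketch of output-layer noise injection during training, using a toy softmax classifier and blind (unscreened) target noise; the Noisy CNN algorithm keeps only noise on the beneficial side of the likelihood-derived hyperplane, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (a stand-in for MNIST digits).
X = np.vstack([rng.normal(-1, 1, (500, 20)), rng.normal(1, 1, (500, 20))])
t = np.repeat([0, 1], 500)
T = np.eye(2)[t]                                    # one-hot targets

W = np.zeros((20, 2))
for epoch in range(50):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)               # softmax outputs
    # Noise injection at the output layer: perturb the targets with
    # small zero-mean noise each iteration. This is a simplification;
    # the paper screens noise with the separating hyperplane condition.
    noisy_T = T + rng.normal(0.0, 0.05, T.shape)
    W -= 0.01 * X.T @ (p - noisy_T) / len(X)        # gradient step
print("training accuracy:", ((X @ W).argmax(1) == t).mean())
```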
Probabilistic images (PBIS): A concise image representation technique for multiple parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, L.C.; Yeh, S.H.; Chen, Z.
1984-01-01
Based on m parametric images (PIs) derived from a dynamic series (DS), each pixel of the DS is regarded as an m-dimensional vector. Given one set of normal samples (pixels) N and another of abnormal samples A, probability density functions (pdfs) of both sets are estimated. Any unknown sample is classified into N or A by calculating the probability of its being in the abnormal set using Bayes' theorem. Instead of estimating the multivariate pdfs, a distance ratio transformation is introduced to map the m-dimensional sample space to one-dimensional Euclidean space. Consequently, the image that localizes the regional abnormalities is characterized by the probability of being abnormal. This leads to the new representation scheme of PBIs. A Tc-99m HIDA study for detecting intrahepatic lithiasis (IL) was chosen as an example of constructing a PBI from 3 parameters derived from the DS, and this PBI was compared with those 3 PIs, namely, the retention ratio image (RRI), peak time image (TMAX) and excretion mean transit time image (EMTT). 32 normal subjects and 20 patients with proved IL were collected and analyzed. The resultant sensitivity and specificity of the PBI were 97% and 98%, respectively, superior to those of any of the 3 PIs: RRI (94/97), TMAX (86/88) and EMTT (94/97). Furthermore, the contrast of the PBI was much better than that of any other image. This new image formation technique, based on multiple parameters, shows functional abnormalities in a structural way. Its good contrast makes interpretation easy. This technique is powerful compared to the existing parametric image method.
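A sketch of the Bayes step, assuming the distance-ratio transformation has already mapped each m-dimensional pixel vector to a scalar, and using kernel density estimates for the two one-dimensional pdfs:

```python
import numpy as np
from scipy.stats import gaussian_kde

def abnormality_probability(train_normal, train_abnormal, samples,
                            prior_abnormal=0.5):
    """Per-pixel probability of abnormality via Bayes' theorem.

    train_normal, train_abnormal: 1-D arrays of transformed training
    pixels from the normal (N) and abnormal (A) sets.
    samples: 1-D array of transformed pixels to classify."""
    pdf_n = gaussian_kde(train_normal)
    pdf_a = gaussian_kde(train_abnormal)
    num = prior_abnormal * pdf_a(samples)
    den = num + (1 - prior_abnormal) * pdf_n(samples)
    return num / den
```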
Estimation of line dimensions in 3D direct laser writing lithography
NASA Astrophysics Data System (ADS)
Guney, M. G.; Fedder, G. K.
2016-10-01
Two photon polymerization (TPP) based 3D direct laser writing (3D-DLW) finds application in a wide range of research areas, ranging from photonic and mechanical metamaterials to micro-devices. Most common structures are either single lines or are formed by a set of interconnected lines, as in the case of crystals. In order to increase the fidelity of these structures and reach the ultimate resolution, the laser power and scan speed used in the writing process should be chosen carefully. However, the optimization of these writing parameters is an iterative and time-consuming process in the absence of a model for estimating line dimensions. To this end, we report a semi-empirical analytic model obtained through simulations and fitting, and demonstrate that it can be used to estimate line dimensions, mostly within one standard deviation of the average values, over a wide range of laser power and scan speed combinations. The model delimits the trends in the onset of micro-explosions in the photoresist due to over-exposure and of a low degree of conversion due to under-exposure. The model guided the setting of high-fidelity and robust writing parameters for a photonic crystal structure without iteration, in close agreement with the estimated line dimensions. The proposed methodology is generalizable by adapting the model coefficients to any 3D-DLW setup and corresponding photoresist as a means to estimate line dimensions for tuning the writing parameters.
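As a sketch of fitting such a model, the following uses one common TPP exposure scaling (dose ~ P²/v for two-photon absorption, width growing as the square root of the log of the dose); the data are synthetic and the paper's exact semi-empirical form is not reproduced:

```python
import numpy as np

# Model: w = w0 * sqrt(ln(c * P^2 / v)).  Squaring gives the linear form
# w^2 = w0^2 * ln(c) + w0^2 * ln(P^2 / v), fitted here by least squares.
P = np.array([10.0, 12.0, 14.0, 16.0, 18.0])       # laser power (a.u.)
v = np.array([100.0, 100.0, 200.0, 200.0, 400.0])  # scan speed (a.u.)
w = np.array([260.0, 275.0, 259.0, 270.0, 250.0])  # line width (nm)

slope, intercept = np.polyfit(np.log(P ** 2 / v), w ** 2, 1)
w0 = np.sqrt(slope)             # width scale parameter
c = np.exp(intercept / slope)   # effective inverse threshold dose
print(w0, c)                    # ~150 and ~20 for this synthetic data
```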
Luxton, Gary; Keall, Paul J; King, Christopher R
2008-01-07
To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
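A sketch of the LKB pieces referenced above: the Niemierko EUD and the standard Lyman probit NTCP as a function of EUD. The paper's single-parameter exponential surrogate is not reproduced here, only the formula it approximates; the parameter values below are illustrative, not the Emami-Burman fits:

```python
import numpy as np
from scipy.special import erf

def eud(dose, volume_fractions, n):
    """Niemierko generalized EUD with volume-effect parameter n."""
    d = np.asarray(dose, dtype=float)
    v = np.asarray(volume_fractions, dtype=float)
    return np.sum(v * d ** (1.0 / n)) ** n

def lyman_ntcp(eud_gy, td50, m):
    """Standard Lyman probit NTCP evaluated at a uniform dose (EUD)."""
    t = (eud_gy - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / np.sqrt(2.0)))

# Example: a three-bin dose distribution for a hypothetical OAR.
d = eud([60.0, 40.0, 20.0], [0.2, 0.3, 0.5], n=0.5)
print(d, lyman_ntcp(d, td50=55.0, m=0.14))
```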
Experimental equipment for measuring of rotary air motors parameters
NASA Astrophysics Data System (ADS)
Dvořák, Lukáš; Fojtášek, Kamil; Řeháček, Vojtěch
In this article, the construction of an experimental device for measuring the parameters of small rotary air motors is described. Further, the measurement methodology and the processing of the measured data are described. At the end of the article, characteristics of the chosen air motor are presented.
Optimization of Selective Laser Melting by Evaluation Method of Multiple Quality Characteristics
NASA Astrophysics Data System (ADS)
Khaimovich, A. I.; Stepanenko, I. S.; Smelov, V. G.
2018-01-01
This article describes the adoption of the Taguchi method in the selective laser melting of a combustion chamber sector, using numerical and natural experiments, to achieve minimum temperature deformation. The aim was to produce a quality part with a minimum number of numerical experiments. For the study, the following optimization parameters (independent factors) were chosen: the laser beam power and velocity, and two factors compensating for the effect of residual thermal stresses: the scale factor of the preliminary correction of the part geometry and the number of additional reinforcing elements. We used an orthogonal plan of 9 experiments with each factor varied at three levels (L9). As quality criteria, the distortions of 9 zones of the combustion chamber and the maximum strength of the chamber material were chosen. Since the quality parameters are multidirectional, a grey relational analysis was used to solve the optimization problem for multiple quality parameters. As a result, according to the parameters obtained, the combustion chamber segments of the gas turbine engine were manufactured.
NASA Astrophysics Data System (ADS)
POP, A. B.; ȚÎȚU, M. A.
2016-11-01
In the metal cutting process, surface quality is intrinsically related to the cutting parameters and to the cutting tool geometry. At the same time, metal cutting processes are closely related to machining costs. The purpose of this paper is to reduce manufacturing costs and processing time. A study was made, based on mathematical modelling of the arithmetic mean deviation of the roughness profile (Ra) resulting from end milling of 7136 aluminium alloy, as a function of the cutting process parameters. The novel element of this paper is the choice of the 7136 aluminium alloy for the experiments, a material developed and patented by Universal Alloy Corporation. This aluminium alloy is used in the aircraft industry to make parts from extruded profiles, and it has not previously been studied in the proposed research direction. Based on this research, a mathematical model of the surface roughness Ra was established as a function of the cutting parameters studied over a set experimental field. A regression analysis was performed, which identified the quantitative relationships between the cutting parameters and the surface roughness. Using analysis of variance (ANOVA), the degree of confidence in the results achieved by the regression equation was determined, along with the suitability of this equation at every point of the experimental field.
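A minimal sketch of the regression-plus-ANOVA workflow described here, using statsmodels; the factor names (cutting speed vc, feed f, depth of cut ap) and the data are hypothetical stand-ins, not the paper's measurements.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical end-milling data set.
df = pd.DataFrame({
    "vc": [200, 200, 300, 300, 200, 300, 250, 250],          # cutting speed
    "f":  [0.05, 0.10, 0.05, 0.10, 0.10, 0.05, 0.08, 0.08],  # feed
    "ap": [0.5, 1.0, 1.0, 0.5, 0.5, 1.0, 0.75, 0.75],        # depth of cut
    "Ra": [0.42, 0.81, 0.39, 0.66, 0.78, 0.41, 0.55, 0.57],
})

# First-order model of Ra as a function of the cutting parameters.
model = smf.ols("Ra ~ vc + f + ap", data=df).fit()
print(model.params)                      # regression coefficients
print(sm.stats.anova_lm(model, typ=2))   # significance of each factor
```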
Adapting Coastal State Indicators to end-users: the iCoast Project
NASA Astrophysics Data System (ADS)
Demarchi, Alessandro; Isotta Cristofori, Elena; Gracia, Vicente; Sairouní, Abdel; García-León, Manuel; Cámaro, Walther; Facello, Anna
2016-04-01
The extraordinary development of the built environment and of population densities in coastal areas is making coastal communities highly exposed. The sea level rise induced by climate change will worsen this coastal vulnerability scenario, and a considerable number of people are expected to be threatened by coastal flooding in the future. Due to the increasing number of catastrophic events, and the consequent increase in damages and people affected, over the last decades coastal hazard management has become a fundamental activity for improving the resilience of coastal communities. In this scenario, the iCoast (integrated COastal Alert SysTem) project has been launched to develop a tool able to address coastal risks caused by extreme waves and high sea water levels in European coastal areas. In the framework of the iCoast Project, a set of Coastal State Indicators (CSIs) has been developed in order to improve the forecasting and assessment of coastal risks. CSIs are parameters able to provide end-users with essential information about coastal hazards and related impacts. Within the iCoast Project, following a comprehensive literature review of existing indicators of coastal risk, a list of CSIs has been chosen as parameters that can be derived from the meteorological and hydrodynamic modules. They include both physical variables used as triggers for meteorological and flood warnings by the majority of operational national/regional warning systems and further essential parameters, the so-called 'storm integrated' coastal-storm indicators, able to describe the physical processes that drive coastal damages, such as erosion, accumulation, flooding and destruction. Nowadays, it is generally acknowledged that communities are not homogeneous and hence their different vulnerable groups might need different warnings. Even existing national EWS in developed countries are often ineffective at issuing targeted warnings for specific user groups, because they generate warnings whenever strong winds or high waves are expected. Once aggregated, weighted and compared with established thresholds, CSIs instead allow the production of alert messages that can be tailored to different end-users' needs. In the present study, the set of CSIs chosen in the framework of the iCoast Project is presented, along with their performance as tested for the case study of the Spanish NW Mediterranean Coast (i.e. the Catalan Coast).
Dorado, A D; Lafuente, F J; Gabriel, D; Gamisans, X
2010-02-01
In the present work, 10 packing materials commonly used as support media in biofiltration are analysed and compared to evaluate their suitability according to physical characteristics. The nature of the packing material in biofilters is an important factor for success in their construction and operation. Different packing materials have been used in biofiltration without global agreement on which are the most adequate for biofiltration success. The materials studied were chosen according to previous works in the field of biofiltration and include both organic and inorganic (or synthetic) materials. A set of nine different parameters was selected to cover well-established factors, such as specific surface area, pressure drop, nutrient supply, water retentivity, sorption capacity and purchase cost. A ranking of packing materials was established for each parameter studied in order to define a relative degree of suitability. Since biofiltration success generally depends on a combination of the ranked parameters, a procedure was defined to compare packing material suitability under common situations in biofiltration. Selected scenarios, such as biofiltration of intermittent loads of pollutants and biofiltration of waste gases with low relative humidity, were investigated. The results indicate that, of the packing materials studied, activated carbons were ranked top in several parameter rankings and were shown to be a significantly better packing material when parameters were combined to assess the selected scenarios.
Material and shape optimization for multi-layered vocal fold models using transient loadings.
Schmidt, Bastian; Leugering, Günter; Stingl, Michael; Hüttner, Björn; Agaimy, Abbas; Döllinger, Michael
2013-08-01
Commonly applied models for studying vocal fold vibrations in combination with air flow distributions are self-sustained physical models of the larynx consisting of artificial silicone vocal folds. Choosing appropriate mechanical parameters and layer geometries for these vocal fold models, while considering simplifications due to manufacturing restrictions, is difficult but crucial for achieving realistic behavior. In earlier work by Schmidt et al. [J. Acoust. Soc. Am. 129, 2168-2180 (2011)], the authors presented an approach in which material parameters of a static numerical vocal fold model were optimized to achieve agreement of the displacement field with data retrieved from hemilarynx experiments. This method is now generalized to a fully transient setting. Moreover, in addition to the material parameters, the extended approach is capable of finding optimized layer geometries. Depending on the chosen material restrictions, significant modifications of the reference geometry are predicted. The additional flexibility in the design space leads to significantly more realistic deformation behavior. At the same time, the predicted biomechanical and geometrical results remain feasible for manufacturing physical vocal fold models consisting of several silicone layers. As a consequence, the proposed combined experimental and numerical method is suited to guide the construction of physical vocal fold models.
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that, with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in both the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
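A minimal sketch of the algorithm class studied, assuming a Gaussian kernel and a Welsch-type robust loss (windowing function G(t) = exp(-t), so l(r) = (σ²/2)(1 - exp(-r²/σ²))); the step size, stopping time and data are illustrative.

```python
import numpy as np

def gaussian_kernel(X, Y, width=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def rkhs_robust_gd(X, y, sigma=1.0, eta=0.5, T=200, width=0.3):
    """Gradient descent on the empirical robust risk over an RKHS.
    l'(r) = r * exp(-r**2 / sigma**2); T (early stopping) regularizes."""
    n = len(y)
    K = gaussian_kernel(X, X, width)
    alpha = np.zeros(n)            # f(.) = sum_i alpha_i K(x_i, .)
    for _ in range(T):
        r = K @ alpha - y          # residuals f(x_i) - y_i
        alpha -= eta * r * np.exp(-r ** 2 / sigma ** 2) / n
    return alpha, K

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (80, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=80)
y[::10] += 3.0                     # heavy outliers the robust loss resists
alpha, K = rkhs_robust_gd(X, y)
print(np.mean((K @ alpha - np.sin(3 * X[:, 0])) ** 2))  # fit error
```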
NASA Astrophysics Data System (ADS)
Vu, Tuan V.; Papavassiliou, Dimitrios V.
2018-05-01
In order to investigate the interfacial region between oil and water in the presence of surfactants using coarse-grained computations, both the interactions between the different components of the system and the number of surfactant molecules present at the interface play an important role. However, in many prior studies, the amount of surfactant used was chosen rather arbitrarily. In this work, a systematic approach to developing coarse-grained models for anionic surfactants (such as sodium dodecyl sulfate) and nonionic surfactants (such as octaethylene glycol monododecyl ether) at oil-water interfaces is presented. The key is to place the theoretically calculated number of surfactant molecules on the interface at the critical micelle concentration. Based on this approach, the molecular description of surfactants and the effects of various interaction parameters on the interfacial tension are investigated. The results indicate that the interfacial tension is affected mostly by the head-water and tail-oil interactions. Even though the procedure presented herein is used with dissipative particle dynamics models, it can be applied to other coarse-grained methods to obtain the appropriate set of parameters (or force fields) to describe surfactant behavior at the oil-water interface.
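The "theoretically calculated number of surfactant molecules" idea can be illustrated with a back-of-envelope count: given a surface excess at the CMC (here an illustrative value of the order reported for common surfactants, not taken from this paper), the number of molecules to seed on a planar interface of a given simulation box follows directly.

```python
# Count of surfactant molecules to place on a planar oil-water interface.
N_AVOGADRO = 6.02214076e23

gamma_cmc = 3.0e-6   # mol/m^2, illustrative surface excess at the CMC
box_xy = 20e-9       # m, lateral edge of the simulation box
area = box_xy ** 2   # area of the planar interface

n_molecules = gamma_cmc * area * N_AVOGADRO
print(round(n_molecules))  # molecules to place at the interface
```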
A simple strategy for varying the restart parameter in GMRES(m)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A H; Jessup, E R; Kolev, T V
2007-10-02
When solving a system of linear equations with the restarted GMRES method, a fixed restart parameter is typically chosen. We present numerical experiments that demonstrate the beneficial effects of changing the value of the restart parameter in each restart cycle on the total time to solution. We propose a simple strategy for varying the restart parameter and provide some heuristic explanations for its effectiveness based on analysis of the symmetric case.
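A minimal sketch of the idea: the inner function below performs one GMRES restart cycle (Arnoldi plus a small least-squares solve, with no breakdown handling), and the outer loop changes the restart parameter m from cycle to cycle. The alternating schedule is only a stand-in; the paper's specific strategy is not reproduced here.

```python
import numpy as np

def gmres_cycle(A, b, x0, m):
    """One GMRES(m) restart cycle (no breakdown handling)."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    n = len(b)
    Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):                        # Arnoldi process
        v = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)  # minimize ||beta*e1 - H y||
    return x0 + Q[:, :m] @ y

rng = np.random.default_rng(1)
n = 200
A = np.eye(n) + 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)
b = rng.normal(size=n)
x = np.zeros(n)
for cycle in range(20):
    m = 10 if cycle % 2 == 0 else 30          # vary the restart parameter
    x = gmres_cycle(A, b, x, m)
print(np.linalg.norm(b - A @ x))              # final residual norm
```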
Energy optimization for upstream data transfer in 802.15.4 beacon-enabled star formulation
NASA Astrophysics Data System (ADS)
Liu, Hua; Krishnamachari, Bhaskar
2008-08-01
Energy saving is one of the major concerns for low-rate personal area networks. This paper models energy consumption for beacon-enabled, time-slotted medium access control combined with sleep scheduling in a star network formation for the IEEE 802.15.4 standard. We investigate two different upstream (device-to-coordinator data transfer) strategies: a) tracking strategy: the devices wake up and check status (track the beacon) in each time slot; b) non-tracking strategy: nodes only wake up upon data arrival and stay awake until the data are transmitted to the coordinator. We consider the tradeoff between energy cost and average data transmission delay for both strategies. Both scenarios are formulated as optimization problems, and the optimal solutions are discussed. Our results show that different data arrival rates and system parameters (such as the contention access period interval, upstream speed, etc.) lead to different strategies being optimal in terms of energy under maximum delay constraints. Hence, according to the application and system settings, a different strategy might be chosen by each node to achieve energy optimization from both a self-interested view and a system view. We give the relations among the tunable parameters through formulas and plots to illustrate which strategy is better under the corresponding parameters. Two main points are emphasized in our results with delay constraints: on one hand, when the system settings are fixed by the coordinator, nodes in the network can intelligently change their strategies according to the application data arrival rate; on the other hand, when the nodes' applications are known by the coordinator, the coordinator can tune the system parameters to achieve optimal system energy consumption.
Distributed modelling of hydrologic regime at three subcatchments of Kopaninský tok catchment
NASA Astrophysics Data System (ADS)
Žlábek, Pavel; Tachecí, Pavel; Kaplická, Markéta; Bystřický, Václav
2010-05-01
The Kopaninský tok catchment is situated in a crystalline area of the Bohemian-Moravian Highlands, a hilly region with cambisol cover and predominantly agricultural land use. It has been the subject of long-term observation (since the 1980s). Time series (discharge, precipitation, climatic parameters, etc.) are now available at a 10-min time step; for water quality, average daily composite samples plus event samples are available. A soil survey providing reference soil hydraulic properties for individual horizons and a vegetation cover survey including LAI measurements have been carried out. All parameters were analysed and used to establish distributed mathematical models of the P6, P52 and P53 subcatchments, using the MIKE SHE 2009 WM deterministic hydrologic modelling system. The aim is to simulate the long-term hydrologic regime as well as rainfall-runoff events, serving as the basis for modelling the nitrate regime and the influence of agricultural management in the next step. The subcatchments differ in the proportion of artificially drained area, soil types, land use and slope angle. The models are set up on a regular computational grid of 2 m cell size. The basic time step was set to 2 h, and the total simulated period covers 3 years. Runoff response and moisture regime are compared using spatially distributed simulation results. Sensitivity analysis revealed the most important parameters influencing the model response. The importance of the spatial distribution of initial conditions was underlined. Furthermore, different runoff components, in terms of their origin, flow paths and travel time, were separated using a combination of two runoff separation techniques (a digital filter and a simple conceptual model, GROUND) in 12 subcatchments of the Kopaninský tok catchment; these two methods were chosen after testing a number of methods. Ordination diagrams produced with the Canoco software were used to evaluate the influence of different catchment parameters on the different runoff components. A canonical ordination method (redundancy analysis, RDA) was used to explain one data set (runoff components, either the volumes of each runoff component or the occurrence of baseflow) with another data set (catchment parameters: proportion of arable land, proportion of forest, proportion of vulnerable zones with high infiltration capacity, average slope, topographic index and runoff coefficient). The influence was analysed both for the long-term runoff balance and for selected rainfall-runoff events. Keywords: small catchment, water balance modelling, rainfall-runoff modelling, distributed deterministic model, runoff separation, sensitivity analysis
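As an example of the digital-filter runoff separation mentioned above (the study's exact filter and the GROUND model are not reproduced here), the classic one-parameter Lyne-Hollick recursive filter can be sketched in a few lines; the discharge series and filter parameter are illustrative.

```python
import numpy as np

def lyne_hollick(q, a=0.925):
    """Single forward pass of the Lyne-Hollick recursive digital filter.
    Returns (quickflow, baseflow) for a discharge series q."""
    q = np.asarray(q, dtype=float)
    qf = np.zeros_like(q)  # filtered quickflow signal
    for i in range(1, len(q)):
        qf[i] = a * qf[i - 1] + 0.5 * (1 + a) * (q[i] - q[i - 1])
    quick = np.clip(qf, 0.0, q)  # constrain 0 <= quickflow <= q
    return quick, q - quick

# Synthetic discharge: slow recession plus two storm events.
t = np.arange(200, dtype=float)
q = 2.0 * np.exp(-t / 150.0)
q[40:60] += 5.0 * np.exp(-(t[40:60] - 40) / 5.0)
q[120:150] += 8.0 * np.exp(-(t[120:150] - 120) / 8.0)
quick, base = lyne_hollick(q)
print(base.sum() / q.sum())  # baseflow index of the series
```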
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may, however, have process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess how these fixed values restrict the model's agility during parameter estimation. We found 139 hard-coded values across all Noah-MP process options, mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which are mostly distributed spatially via the given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options; 42 standard parameters and 75 hard-coded parameters were active with the chosen options. The sensitivities of the hydrologic output fluxes, latent heat and total runoff, as well as of their component fluxes, were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of the soil surface resistance for evaporation, which has proved oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs; these parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating only a subset of parameters, for example only soil parameters, thus limits the ability to derive realistic model parameters. It is therefore recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
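The Sobol' analysis described here follows a standard sample-evaluate-analyze pattern. The sketch below uses the SALib package with a cheap surrogate function in place of a Noah-MP run; the three parameter names and bounds are invented for illustration.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical stand-ins for one standard and two hard-coded parameters.
problem = {
    "num_vars": 3,
    "names": ["soil_surface_resistance", "snow_albedo_const", "root_depth"],
    "bounds": [[0.5, 2.0], [0.4, 0.9], [0.1, 2.0]],
}

def surrogate_flux(x):
    # Cheap stand-in for a latent-heat output of a model run.
    return x[0] ** 2 + 0.5 * x[1] + 0.1 * x[0] * x[2]

X = saltelli.sample(problem, 1024)           # Saltelli sampling design
Y = np.apply_along_axis(surrogate_flux, 1, X)
Si = sobol.analyze(problem, Y)               # first-order and total indices
print(dict(zip(problem["names"], Si["ST"]))) # total-order sensitivities
```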
New method to design stellarator coils without the winding surface
NASA Astrophysics Data System (ADS)
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi
2018-01-01
Finding an easy-to-build coil set has been a critical issue in stellarator design for decades. Conventional approaches assume a toroidal ‘winding’ surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Applications to a simple stellarator configuration, W7-X and LHD vacuum fields are presented.
Multi-objective design of fuzzy logic controller in supply chain
NASA Astrophysics Data System (ADS)
Ghane, Mahdi; Tarokh, Mohammad Jafar
2012-08-01
Unlike commonly used methods, in this paper we introduce a new approach for designing fuzzy controllers. In this approach, we simultaneously optimize both objective functions of a supply chain over a two-dimensional space, obtaining a spectrum of optimized points, each of which represents a set of optimal parameters that can be chosen by the manager according to the importance of the objective functions. The supply chain model used is a member of the inventory and order-based production control system family, a generalization of the periodic review policy termed the 'Order-Up-To policy.' An automatic rule maker, based on the non-dominated sorting genetic algorithm II (NSGA-II), has been applied to the experimental initial fuzzy rules. According to the performance measurements, our results indicate the efficiency of the proposed approach.
Thiry, Justine; Lebrun, Pierre; Vinassa, Chloe; Adam, Marine; Netchacovitch, Lauranne; Ziemons, Eric; Hubert, Philippe; Krier, Fabrice; Evrard, Brigitte
2016-12-30
The purpose of this work was to increase the solubility and dissolution rate of itraconazole, chosen as the model drug, by obtaining an amorphous solid dispersion through hot melt extrusion. An initial preformulation study was conducted using differential scanning calorimetry, thermogravimetric analysis and Hansen solubility parameters in order to find polymers with the ability to form amorphous solid dispersions with itraconazole. Afterwards, the four polymers that met the set criteria, namely Kollidon® VA64, Kollidon® 12PF, Affinisol® HPMC and Soluplus®, were used in hot melt extrusion along with 25 wt.% of itraconazole. Differential scanning calorimetry confirmed that all four polymers were able to amorphize itraconazole. A stability study was then conducted to determine which polymer would keep itraconazole amorphous for the longest time. Soluplus® was chosen, and the formulation was fine-tuned by adding excipients (AcDiSol®, sodium bicarbonate and a poloxamer) during the hot melt extrusion process in order to increase the release rate of itraconazole. In parallel, the range limits of the hot melt extrusion process parameters were determined. A design of experiments was performed within the previously defined ranges in order to simultaneously optimize the formulation and the process parameters. The optimal formulation was the one containing 2.5 wt.% of AcDiSol® produced at 155°C and 100 rpm. When tested with a biphasic dissolution test, more than 80% of itraconazole was released into the organic phase after 8 h. Moreover, this formulation showed the desired thermoformability value. From these results, the design space around the optimum was determined; it corresponds to the limits within which the process gives the optimized product. It was observed that a temperature between 155 and 170°C allowed high flexibility in the screw speed, from about 75 to 130 rpm. Copyright © 2016 Elsevier B.V. All rights reserved.
Geminal embedding scheme for optimal atomic basis set construction in correlated calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorella, S., E-mail: sorella@sissa.it; Devaux, N.; Dagrada, M., E-mail: mario.dagrada@impmc.upmc.fr
2015-12-28
We introduce an efficient method to construct optimal and system adaptive basis sets for use in electronic structure and quantum Monte Carlo calculations. The method is based on an embedding scheme in which a reference atom is singled out from its environment, while the entire system (atom and environment) is described by a Slater determinant or its antisymmetrized geminal power (AGP) extension. The embedding procedure described here allows for the systematic and consistent contraction of the primitive basis set into geminal embedded orbitals (GEOs), with a dramatic reduction of the number of variational parameters necessary to represent the many-body wave function, for a chosen target accuracy. Within the variational Monte Carlo method, the Slater or AGP part is determined by a variational minimization of the energy of the whole system in the presence of a flexible and accurate Jastrow factor, representing most of the dynamical electronic correlation. The resulting GEO basis set opens the way for a fully controlled optimization of many-body wave functions in electronic structure calculations of bulk materials, namely, containing a large number of electrons and atoms. We present applications to the water molecule, the volume collapse transition in cerium, and high-pressure liquid hydrogen.
Determining the full halo coronal mass ejection characteristics
NASA Astrophysics Data System (ADS)
Fainshtein, V. G.
2009-03-01
In this paper we determined the parameters of 45 full halo coronal mass ejections (HCMEs) for various modifications of their cone forms (“ice cream cone” models). We show that the determined CME characteristics depend significantly on the chosen CME form. We also show that, regardless of the chosen form, the trajectories of practically all the considered HCMEs deviate from the radial direction toward the Sun-Earth axis at the initial stage of their motion.
Probabilistic SSME blades structural response under random pulse loading
NASA Technical Reports Server (NTRS)
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
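The random pulse model lends itself to a compact Monte Carlo sketch: Poisson arrivals, zero-mean normal amplitudes and equiprobable impact sites, as described above. The rates and standard deviations below are illustrative, and a real analysis would propagate each sampled load history through the structural model.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_impacts(t_end, rate, amp_sd, n_sites=3):
    """One realization of the random pulse model: Poisson arrivals,
    N(0, amp_sd^2) amplitudes, equiprobable sites near the blade tip."""
    n = rng.poisson(rate * t_end)                # number of impacts
    times = np.sort(rng.uniform(0.0, t_end, n))  # arrival times given n
    amps = rng.normal(0.0, amp_sd, n)            # pulse intensities
    sites = rng.integers(0, n_sites, n)          # impact locations
    return times, amps, sites

# Distribution of the peak pulse magnitude over many load histories.
peaks = [np.abs(simulate_impacts(1.0, rate=50.0, amp_sd=2.0)[1]).max(initial=0.0)
         for _ in range(2000)]
print(np.percentile(peaks, [50, 95, 99]))
```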
NASA Astrophysics Data System (ADS)
Nkuissi Tchognia, Joël Hervé; Hartiti, Bouchaib; Ridah, Abderraouf; Ndjaka, Jean-Marie; Thevenin, Philippe
2016-07-01
The present research deals with the optimal configuration of deposition parameters for the synthesis of Cu2ZnSnS4 (CZTS) thin films using the sol-gel method combined with spin coating on ordinary glass substrates, without sulfurization. A Taguchi design with an L9 (3^4) orthogonal array, a signal-to-noise (S/N) ratio and an analysis of variance (ANOVA) are used to optimize the performance characteristic (optical band gap) of the CZTS thin films. Four deposition parameters, called factors, were chosen: the annealing temperature, the annealing time, and the Cu/(Zn + Sn) and Zn/Sn ratios. To conduct the tests using the Taguchi method, three levels were chosen for each factor. The effects of the deposition parameters on the structural and optical properties are studied, and the most significant factors of the deposition process for the optical properties of the as-prepared films are determined. The results of applying the Taguchi method showed that the significant parameters are the Zn/Sn ratio and the annealing temperature.
NASA Astrophysics Data System (ADS)
Ishak, M.; Noordin, N. F. M.; Shah, L. H.
2015-12-01
Proper selection of the welding parameters can result in better joining. In this study, the effects of various welding parameters on tensile strength in joining the dissimilar aluminum alloys AA6061-T6 and AA7075-T6 were investigated. 2 mm thick samples of both base metals were welded by semi-automatic gas metal arc welding (GMAW) using filler wire ER5356. The welding current, arc voltage and welding speed were chosen as the variable parameters. The strength of each specimen was tested after the welding operations, and the effects of these parameters on tensile strength were identified using the Taguchi method. The welding current ranged from 100 to 115 A, the arc voltage from 17 to 20 V and the welding speed from 2 to 5 mm/s. An L16 orthogonal array was used to obtain 16 experimental runs. It was found that the highest tensile strength (194.34 MPa) was obtained with the combination of a welding current of 115 A, a welding voltage of 18 V and a welding speed of 4 mm/s. Through analysis of variance (ANOVA), the welding voltage was found to be the parameter with the greatest effect on tensile strength, with a percentage contribution of 41.30%.
Aliev, Abil E; Kulke, Martin; Khaneja, Harmeet S; Chudasama, Vijay; Sheppard, Tom D; Lanigan, Rachel M
2014-02-01
We propose a new approach to force field optimization which aims at reproducing dynamic characteristics using biomolecular MD simulations, in addition to improved prediction of motionally averaged structural properties available from experiment. As the source of experimental data for the dynamics fittings, we use (13)C NMR spin-lattice relaxation times T1 of backbone and sidechain carbons, which allow the determination of correlation times of both overall molecular and intramolecular motions. For the structural fittings, we use motionally averaged experimental values of NMR J couplings. The proline residue and its derivative 4-hydroxyproline, with relatively simple cyclic structures and sidechain dynamics, were chosen for the assessment of the new approach in this work. Initially, grid search and simplexed MD simulations identified a large number of parameter sets that fit the experimental J couplings equally well. Using the Arrhenius-type relationship between the force constant and the correlation time, the available MD data for a series of parameter sets were analyzed to predict the value of the force constant that best reproduces the experimental timescale of the sidechain dynamics. Verification of the new force field (termed AMBER99SB-ILDNP) against NMR J couplings and correlation times showed consistent and significant improvements over the original force field in reproducing both structural and dynamic properties. The results suggest that matching experimental timescales of motions together with motionally averaged characteristics is a valid approach for force field parameter optimization. Such a comprehensive approach is not restricted to cyclic residues and can be extended to other amino acid residues, as well as to the backbone. Copyright © 2013 Wiley Periodicals, Inc.
Continuous Glucose Monitoring Enables the Detection of Losses in Infusion Set Actuation (LISAs)
Howsmon, Daniel P.; Cameron, Faye; Baysal, Nihat; Ly, Trang T.; Forlenza, Gregory P.; Maahs, David M.; Buckingham, Bruce A.; Hahn, Juergen; Bequette, B. Wayne
2017-01-01
Reliable continuous glucose monitoring (CGM) enables a variety of advanced technology for the treatment of type 1 diabetes. In addition to artificial pancreas algorithms that use CGM to automate continuous subcutaneous insulin infusion (CSII), CGM can also inform fault detection algorithms that alert patients to problems in CGM or CSII. Losses in infusion set actuation (LISAs) can adversely affect clinical outcomes, resulting in hyperglycemia due to impaired insulin delivery. Prolonged hyperglycemia may lead to diabetic ketoacidosis—a serious metabolic complication in type 1 diabetes. Therefore, an algorithm for the detection of LISAs based on CGM and CSII signals was developed to improve patient safety. The LISA detection algorithm is trained retrospectively on data from 62 infusion set insertions from 20 patients. The algorithm collects glucose and insulin data, and computes relevant fault metrics over two different sliding windows; an alarm sounds when these fault metrics are exceeded. With the chosen algorithm parameters, the LISA detection strategy achieved a sensitivity of 71.8% and issued 0.28 false positives per day on the training data. Validation on two independent data sets confirmed that similar performance is seen on data that was not used for training. The developed algorithm is able to effectively alert patients to possible infusion set failures in open-loop scenarios, with limited evidence of its extension to closed-loop scenarios. PMID:28098839
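The abstract does not give the fault metrics themselves, so the sketch below only illustrates the generic two-window pattern it describes: compute metrics over a short and a long sliding window of CGM and CSII data and raise an alarm when both exceed thresholds. The metrics, window lengths and thresholds here are hypothetical.

```python
import numpy as np

def lisa_alarm(glucose, insulin, w_short=6, w_long=24,
               rise_thresh=40.0, insulin_thresh=1.5):
    """Alarm when glucose rose persistently over the long window
    although insulin delivery over the short window was substantial."""
    alarms = []
    for k in range(w_long, len(glucose)):
        rise = glucose[k] - glucose[k - w_long]   # long-window trend
        dosed = np.sum(insulin[k - w_short:k])    # short-window insulin
        alarms.append(rise > rise_thresh and dosed > insulin_thresh)
    return np.array(alarms)

# Synthetic 5-min samples: glucose drifts upward despite ongoing insulin.
t = np.arange(288)
glucose = 120.0 + np.where(t > 150, (t - 150) * 2.0, 0.0)
insulin = np.full(t.shape, 0.3)
alarm = lisa_alarm(glucose, insulin)
print(int(np.argmax(alarm)) + 24)  # index of the first alarm sample
```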
A NARX damper model for virtual tuning of automotive suspension systems with high-frequency loading
NASA Astrophysics Data System (ADS)
Alghafir, M. N.; Dunne, J. F.
2012-02-01
A computationally efficient NARX-type neural network model is developed to characterise highly nonlinear frequency-dependent thermally sensitive hydraulic dampers for use in the virtual tuning of passive suspension systems with high-frequency loading. Three input variables are chosen to account for high-frequency kinematics and temperature variations arising from continuous vehicle operation over non-smooth surfaces such as stone-covered streets, rough or off-road conditions. Two additional input variables are chosen to represent tuneable valve parameters. To assist in the development of the NARX model, a highly accurate but computationally excessive physical damper model [originally proposed by S. Duym and K. Reybrouck, Physical characterization of non-linear shock absorber dynamics, Eur. J. Mech. Eng. M 43(4) (1998), pp. 181-188] is extended to allow for high-frequency input kinematics. Experimental verification of this extended version uses measured damper data obtained from an industrial damper test machine under near-isothermal conditions for fixed valve settings, with input kinematics corresponding to harmonic and random road profiles. The extended model is then used only for simulating data for training and testing the NARX model with specified temperature profiles and different valve parameters, both in isolation and within quarter-car vehicle simulations. A heat generation and dissipation model is also developed and experimentally verified for use within the simulations. Virtual tuning using the quarter-car simulation model then exploits the NARX damper to achieve a compromise between ride and handling under transient thermal conditions with harmonic and random road profiles. For quarter-car simulations, the paper shows that a single tuneable NARX damper makes virtual tuning computationally very attractive.
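A NARX model is essentially a static regressor applied to lagged inputs and outputs. The sketch below shows this structure with scikit-learn on a toy dynamic system standing in for the velocity-to-force map of a damper; it is not the paper's network, and the lag orders and architecture are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def narx_features(u, y, nu=3, ny=3):
    """Regressor matrix of past exogenous inputs u and past outputs y."""
    start = max(nu, ny)
    rows = [np.concatenate([u[k - nu:k].ravel(), y[k - ny:k]])
            for k in range(start, len(y))]
    return np.array(rows), y[start:]

# Toy nonlinear dynamic map standing in for damper velocity -> force.
rng = np.random.default_rng(3)
u = rng.normal(size=(2000, 1))
y = np.zeros(2000)
for k in range(1, 2000):
    y[k] = 0.8 * y[k - 1] + np.tanh(u[k - 1, 0]) + 0.05 * rng.normal()

X, target = narx_features(u, y)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X, target)
print(model.score(X, target))  # one-step-ahead fit quality (R^2)
```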
The MIT IGSM-CAM framework for uncertainty studies in global and regional climate change
NASA Astrophysics Data System (ADS)
Monier, E.; Scott, J. R.; Sokolov, A. P.; Forest, C. E.; Schlosser, C. A.
2011-12-01
The MIT Integrated Global System Model (IGSM) version 2.3 is an intermediate-complexity, fully coupled earth system model that allows simulation of critical feedbacks among its various components, including the atmosphere, ocean, land, urban processes and human activities. A fundamental feature of the IGSM2.3 is the ability to modify its climate parameters: climate sensitivity, net aerosol forcing and ocean heat uptake rate. As such, the IGSM2.3 provides an efficient tool for generating probability distribution functions of climate parameters using optimal fingerprint diagnostics. A limitation of the IGSM2.3 is its zonal-mean atmosphere model, which does not permit regional climate studies. For this reason, the MIT IGSM2.3 was linked to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM) version 3, and new modules were developed and implemented in CAM in order to modify its climate sensitivity and net aerosol forcing to match those of the IGSM. The IGSM-CAM provides an efficient and innovative framework to study regional climate change, where climate parameters can be modified to span the range of uncertainty and various emissions scenarios can be tested. This paper presents results from the cloud radiative adjustment method used to modify CAM's climate sensitivity. We also show results from 21st-century simulations based on two emissions scenarios (a median "business as usual" scenario where no policy is implemented after 2012, and a policy scenario where greenhouse gases are stabilized at 660 ppm CO2-equivalent concentration by 2100) and three sets of climate parameters. The three values of climate sensitivity chosen are the median and the bounds of the 90% probability interval of the probability distribution obtained by comparing the observed 20th-century climate change with IGSM simulations over a wide range of climate parameter values. The associated aerosol forcing values were chosen to ensure good agreement of the simulations with the observed climate change over the 20th century. Because the concentrations of sulfate aerosols decrease significantly over the 21st century in both emissions scenarios, the climate changes obtained in these six simulations provide a good approximation of the median and the 5th and 95th percentiles of the probability distribution of 21st-century climate change.
Romm, S
1989-01-01
Beautiful faces, like clothing and body conformation, go in and out of fashion. Yet, certain women in every era are considered truly beautiful. Who, then, sets standards of facial beauty and how are women chosen as representative of an ideal? Identifying great beauties is easier than explaining why they are chosen, but answers to these elusive questions are suggested in art, literature, and a review of past events.
Accessing the molecular frame through strong-field alignment of distributions of gas phase molecules
NASA Astrophysics Data System (ADS)
Reid, Katharine L.
2018-03-01
A rationale for creating highly aligned distributions of molecules is that it enables vector properties referenced to molecule-fixed axes (the molecular frame) to be determined. In the present work, the degree of alignment that is necessary for this to be achieved in practice is explored. Alignment is commonly parametrized in experiments by a single parameter, ⟨cos²θ⟩, which is insufficient to enable predictive calculations to be performed. Here, it is shown that, if the full distribution of molecular axes takes a Gaussian form, this single parameter can be used to determine the complete set of alignment moments needed to characterize the distribution. In order to demonstrate the degree of alignment that is required to approach the molecular frame, the alignment moments corresponding to a few chosen values of ⟨cos²θ⟩ are used to project a model molecular frame photoelectron angular distribution into the laboratory frame. These calculations show that ⟨cos²θ⟩ needs to approach 0.9 in order to avoid significant blurring caused by averaging. This article is part of the theme issue `Modern theoretical chemistry'.
Automated gait and balance parameters diagnose and correlate with severity in Parkinson disease.
Dewey, D Campbell; Miocinovic, Svjetlana; Bernstein, Ira; Khemani, Pravin; Dewey, Richard B; Querry, Ross; Chitnis, Shilpa; Dewey, Richard B
2014-10-15
To assess the suitability of instrumented gait and balance measures for diagnosis and estimation of disease severity in PD. Each subject performed iTUG (instrumented Timed-Up-and-Go) and iSway (instrumented Sway) using the APDM(®) Mobility Lab. MDS-UPDRS parts II and III, a postural instability and gait disorder (PIGD) score, the mobility subscale of the PDQ-39, and Hoehn & Yahr stage were measured in the PD cohort. Two sets of gait and balance variables were defined by high correlation with diagnosis or disease severity and were evaluated using multiple linear and logistic regressions, ROC analyses, and t-tests. 135 PD subjects and 66 age-matched controls were evaluated in this prospective cohort study. We found that both iTUG and iSway variables differentiated PD subjects from controls (area under the ROC curve was 0.82 and 0.75 respectively) and correlated with all PD severity measures (R(2) ranging from 0.18 to 0.61). Objective exam-based scores correlated more strongly with iTUG than iSway. The chosen set of iTUG variables was abnormal in very mild disease. Age and gender influenced gait and balance parameters and were therefore controlled in all analyses. Our study identified sets of iTUG and iSway variables which correlate with PD severity measures and differentiate PD subjects from controls. These gait and balance measures could potentially serve as markers of PD progression and are under evaluation for this purpose in the ongoing NIH Parkinson Disease Biomarker Program. Copyright © 2014 Elsevier B.V. All rights reserved.
Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco
2016-05-01
The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This "calibration curve" was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.
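The study implemented the DPM in R; an analogous sketch in Python uses scikit-learn's truncated Dirichlet-process Gaussian mixture, which likewise avoids fixing the number of classes in advance. The concentration value, the one-dimensional uptake feature and the "highest-mean class = tumor" rule are illustrative simplifications.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def dpm_segment(roi_values, max_components=10, concentration=1.0):
    """Cluster voxel uptake with a (truncated) Dirichlet-process mixture;
    the effective number of classes is inferred from the data."""
    X = np.asarray(roi_values, dtype=float).reshape(-1, 1)
    dpm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        weight_concentration_prior=concentration,
        random_state=0).fit(X)
    labels = dpm.predict(X)
    tumor = labels == dpm.means_.ravel().argmax()  # highest-uptake class
    return tumor, dpm

# Synthetic ROI: background uptake plus a hot lesion (arbitrary units).
rng = np.random.default_rng(7)
roi = np.concatenate([rng.normal(1.0, 0.2, 900), rng.normal(6.0, 0.8, 100)])
tumor, dpm = dpm_segment(roi)
print(int(tumor.sum()))  # estimated number of lesion voxels
```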
Information processing in dendrites I. Input pattern generalisation.
Gurney, K N
2001-10-01
In this paper and its companion, we address the question as to whether there are any general principles underlying information processing in the dendritic trees of biological neurons. In order to address this question, we make two assumptions. First, the key architectural feature of dendrites responsible for many of their information processing abilities is the existence of independent sub-units performing local non-linear processing. Second, any general functional principles operate at a level of abstraction in which neurons are modelled by Boolean functions. To accommodate these assumptions, we therefore define a Boolean model neuron, the multi-cube unit (MCU), which instantiates the notion of the discrete functional sub-unit. We then use this model unit to explore two aspects of neural functionality: generalisation (in this paper) and processing complexity (in its companion). Generalisation is dealt with from a geometric viewpoint and is quantified using a new metric, the set of order parameters. These parameters are computed for threshold logic units (TLUs), a class of random Boolean functions, and MCUs. Our interpretation of the order parameters is consistent with our knowledge of generalisation in TLUs and with the lack of generalisation in randomly chosen functions. Crucially, the order parameters for MCUs imply that these functions possess a range of generalisation behaviour. We argue that this supports the general thesis that dendrites facilitate input pattern generalisation despite any local non-linear processing within functionally isolated sub-units.
Analysis of fundamental parameters for V477 Lyr
NASA Astrophysics Data System (ADS)
Shimansky, V. V.; Pozdnyakova, S. A.; Borisov, N. V.; Bikmaev, I. F.; Galeev, A. I.; Sakhibullin, N. A.; Spiridonova, O. I.
2008-06-01
We analyze the photometric and spectroscopic observations of the young pre-cataclysmic variable (pre-CV) V477 Lyr. The masses of both binary components have been corrected by analyzing their radial velocity curves. We show that agreement between the theoretical and observed light curves of the object is possible for several sets of its physical parameters corresponding to the chosen temperature of the primary component. The final parameters of V477 Lyr have been established by comparing observational data with evolutionary tracks for planetary nebula nuclei. The derived effective temperature of the O subdwarf is higher than that estimated by analyzing the object’s ultraviolet spectra by more than 10000 K. This is in agreement with the analogous results obtained previously for the young pre-CVs V664 Cas and UU Sge. The secondary component of V477 Lyr has been proven to have a more than 25-fold luminosity excess compared to main-sequence stars of similar mass. Comparison of the physical parameters for the cool stars in young pre-CVs indicates that their luminosities do not correlate with the masses of the objects. The observed luminosity excesses in such stars show a close correlation with the post-common-envelope lifetime of the systems and should be investigated within the framework of the theory of their relaxation to the state of main-sequence stars.
Maximal compression of the redshift-space galaxy power spectrum and bispectrum
NASA Astrophysics Data System (ADS)
Gualdi, Davide; Manera, Marc; Joachimi, Benjamin; Lahav, Ofer
2018-05-01
We explore two methods of compressing the redshift-space galaxy power spectrum and bispectrum with respect to a chosen set of cosmological parameters. Both methods involve reducing the dimension of the original data vector (e.g. 1000 elements) to the number of cosmological parameters considered (e.g. seven) using the Karhunen-Loève algorithm. In the first case, we run MCMC sampling on the compressed data vector in order to recover the 1D and 2D posterior distributions. The second option, approximately 2000 times faster, works by orthogonalizing the parameter space through diagonalization of the Fisher information matrix before the compression, obtaining the posterior distributions without the need of MCMC sampling. Using these methods for future spectroscopic redshift surveys like DESI, Euclid, and PFS would drastically reduce the number of simulations needed to compute accurate covariance matrices with minimal loss of constraining power. We consider a redshift bin of a DESI-like experiment. Using the power spectrum combined with the bispectrum as a data vector, both compression methods on average recover the 68 per cent credible regions to within 0.7 per cent and 2 per cent of those resulting from standard MCMC sampling, respectively. These confidence intervals are also smaller than the ones obtained using only the power spectrum by 81 per cent, 80 per cent, and 82 per cent, respectively, for the bias parameter b1, the growth rate f, and the scalar amplitude parameter As.
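The compression idea can be sketched for a linear Gaussian toy model: projecting the data onto the derivatives of the mean with respect to each parameter (weighted by the inverse covariance) reduces the full vector to one number per parameter with no loss of Fisher information. This is only a schematic stand-in for the survey statistics discussed above.

```python
import numpy as np

def kl_compress(data, mu, dmu_dtheta, cov_inv):
    """t_a = (dmu/dtheta_a)^T C^{-1} (data - mu): one number per parameter."""
    return dmu_dtheta @ cov_inv @ (data - mu)

rng = np.random.default_rng(5)
n, theta_true = 1000, np.array([1.2, 0.8])
basis = rng.normal(size=(2, n))        # dmu/dtheta for two parameters
cov = np.diag(rng.uniform(0.5, 2.0, n))
cov_inv = np.linalg.inv(cov)
mu0 = basis.T @ np.array([1.0, 1.0])   # mean at the fiducial parameters
data = basis.T @ theta_true + rng.normal(size=n) * np.sqrt(cov.diagonal())

t = kl_compress(data, mu0, basis, cov_inv)   # 1000 numbers -> 2 numbers
fisher = basis @ cov_inv @ basis.T
print(np.linalg.solve(fisher, t) + 1.0)      # recovers ~ theta_true
```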
NASA Technical Reports Server (NTRS)
Ocasio, W. C.; Rigney, D. R.; Clark, K. P.; Mark, R. G.; Goldberger, A. L. (Principal Investigator)
1993-01-01
We describe the theory and computer implementation of a newly-derived mathematical model for analyzing the shape of blood pressure waveforms. Input to the program consists of an ECG signal, plus a single continuous channel of peripheral blood pressure, which is often obtained invasively from an indwelling catheter during intensive-care monitoring or non-invasively from a tonometer. Output from the program includes a set of parameter estimates, made for every heart beat. Parameters of the model can be interpreted in terms of the capacitance of large arteries, the capacitance of peripheral arteries, the inertance of blood flow, the peripheral resistance, and arterial pressure due to basal vascular tone. Aortic flow due to contraction of the left ventricle is represented by a forcing function in the form of a descending ramp, the area under which represents the stroke volume. Differential equations describing the model are solved by the method of Laplace transforms, permitting rapid parameter estimation by the Levenberg-Marquardt algorithm. Parameter estimates and their confidence intervals are given in six examples, which are chosen to represent a variety of pressure waveforms that are observed during intensive-care monitoring. The examples demonstrate that some of the parameters may fluctuate markedly from beat to beat. Our program will find application in projects that are intended to correlate the details of the blood pressure waveform with other physiological variables, pathological conditions, and the effects of interventions.
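A hedged sketch of the beat-by-beat fitting step: scipy's Levenberg-Marquardt solver fits a parametric waveform to one beat of pressure data. The stand-in waveform below (a systolic bump plus a diastolic decay to baseline) is not the paper's Laplace-domain model, whose exact form is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def beat_model(t, params):
    """Illustrative one-beat waveform: Gaussian systolic bump plus an
    exponential diastolic decay toward a baseline pressure p0."""
    a, tau, p0, t_peak, w = params
    return (p0 + a * np.exp(-0.5 * ((t - t_peak) / w) ** 2)
            + 0.5 * a * np.exp(-t / tau))

def fit_beat(t, pressure, x0):
    res = least_squares(lambda p: beat_model(t, p) - pressure, x0,
                        method="lm")   # Levenberg-Marquardt
    return res.x

# Synthetic beat sampled at 125 Hz for 0.8 s.
t = np.arange(0, 0.8, 1 / 125)
true = np.array([40.0, 0.5, 75.0, 0.15, 0.05])
rng = np.random.default_rng(2)
pressure = beat_model(t, true) + rng.normal(0.0, 0.5, t.size)
print(fit_beat(t, pressure, x0=[30.0, 0.4, 70.0, 0.2, 0.08]))
```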
Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J
2016-01-01
A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer and with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with higher priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90 % of the obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs, by avoiding redundancy in data acquisition.
NASA Astrophysics Data System (ADS)
Balakin, Alexander B.; Bochkarev, Vladimir V.; Lemos, José P. S.
2008-04-01
Using a Lagrangian formalism, a three-parameter nonminimal Einstein-Maxwell theory is established. The three parameters q1, q2, and q3 characterize the cross-terms in the Lagrangian, between the Maxwell field and terms linear in the Ricci scalar, Ricci tensor, and Riemann tensor, respectively. Static spherically symmetric equations are set up, and the three parameters are interrelated and chosen so that effectively the system reduces to one parameter only, q. Specific black hole and other types of one-parameter solutions are studied. First, as a preparation, the Reissner-Nordström solution, with q1=q2=q3=0, is displayed. Then, we search for solutions in which the electric field is regular everywhere as well as asymptotically Coulombian, and the metric potentials are regular at the center as well as asymptotically flat. In this context, the one-parameter model with q1≡-q, q2=2q, q3=-q, called the Gauss-Bonnet model, is analyzed in detail. The study is done through the solution of the Abel equation (the key equation), and the dynamical system associated with the model. There is extra focus on an exact solution of the model and its critical properties. Finally, an exactly integrable one-parameter model, with q1≡-q, q2=q, q3=0, is considered, also in detail. A special submodel of this one-parameter model, in which the Fibonacci number appears naturally, is shown, and the corresponding exact solution is presented. Interestingly enough, it is a soliton of the theory, the Fibonacci soliton, without horizons and with a mild conical singularity at the center.
Chosen interval methods for solving linear interval systems with special type of matrix
NASA Astrophysics Data System (ADS)
Szyszka, Barbara
2013-10-01
The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; the presented linear interval systems therefore contain elements that determine the errors of the difference method. Direct algorithms were chosen for solving the linear systems because they introduce no method errors of their own. All calculations were performed in floating-point interval arithmetic.
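A self-contained sketch of the idea, assuming a tridiagonal band system solved by direct interval elimination (the Thomas algorithm) with naive interval arithmetic; proper outward rounding, which a rigorous implementation needs, is omitted for brevity, and the toy system is invented.

```python
from itertools import product

def imul(a, b):
    ps = [x * y for x, y in product(a, b)]
    return (min(ps), max(ps))

def idiv(a, b):
    assert b[0] > 0 or b[1] < 0, "divisor interval must not contain 0"
    return imul(a, (1.0 / b[1], 1.0 / b[0]))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def interval_thomas(sub, diag, sup, rhs):
    """Thomas algorithm on a tridiagonal interval system.
    sub/diag/sup: lists of (lo, hi) coefficient intervals; rhs likewise."""
    n = len(diag)
    d, r = list(diag), list(rhs)
    for i in range(1, n):                      # forward elimination
        m = idiv(sub[i - 1], d[i - 1])
        d[i] = isub(d[i], imul(m, sup[i - 1]))
        r[i] = isub(r[i], imul(m, r[i - 1]))
    x = [None] * n                             # back substitution
    x[-1] = idiv(r[-1], d[-1])
    for i in range(n - 2, -1, -1):
        x[i] = idiv(isub(r[i], imul(sup[i], x[i + 1])), d[i])
    return x

# Toy band system: -x_{i-1} + (2+p) x_i - x_{i+1} = 1, p in [0, 0.1].
n = 5
sub = [(-1.0, -1.0)] * (n - 1)
sup = [(-1.0, -1.0)] * (n - 1)
diag = [(2.0, 2.1)] * n
rhs = [(1.0, 1.0)] * n
for iv in interval_thomas(sub, diag, sup, rhs):
    print(iv)
```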
Borisov, S B; Shpykov, A S; Terent'eva, N A
2007-01-01
The paper analyzes the impact of various millimeter-range electromagnetic radiation schedules on immunological parameters in 152 patients with new-onset respiratory sarcoidosis. It shows that the immunomodulatory effect of millimeter-range therapy depends on the treatment regimen chosen. There is evidence for the advantages of millimeter-range noise electromagnetic radiation.
Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.
2015-01-01
Abstract Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
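A compact, from-scratch sketch of Morris screening: random one-at-a-time trajectories yield elementary effects, summarized by mu* (importance) and sigma (interaction/nonlinearity). The toy three-parameter model standing in for the RFA simulation, and all ranges, are invented for illustration.

```python
import numpy as np

def morris_elementary_effects(model, bounds, r=20, delta=0.25, seed=0):
    """Crude Morris screening with r random one-at-a-time trajectories.
    bounds: (k, 2) array of parameter ranges. Returns mu* and sigma of
    the elementary effects for each of the k parameters."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    effects = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, size=k)   # start point, unit cube
        y0 = model(lo + (hi - lo) * x)
        for i in rng.permutation(k):            # perturb each input once
            x2 = x.copy()
            x2[i] += delta
            y1 = model(lo + (hi - lo) * x2)
            effects[t, i] = (y1 - y0) / delta
            x, y0 = x2, y1
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# Toy model standing in for ablation-zone size vs. (perfusion,
# electrical conductivity, thermal conductivity).
f = lambda p: 1.0 / (1 + p[0]) + 2 * p[1] + 0.1 * p[1] * p[2]
b = np.array([[0.0, 1.0], [0.1, 2.0], [0.4, 0.6]])
mu_star, sigma = morris_elementary_effects(f, b)
print(mu_star, sigma)
```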
New method to design stellarator coils without the winding surface
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...
2017-11-06
Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Furthermore, applications to a simple stellarator configuration, W7-X and LHD vacuum fields are presented.
Decay of random correlation functions for unimodal maps
NASA Astrophysics Data System (ADS)
Baladi, Viviane; Benedicks, Michael; Maume-Deschamps, Véronique
2000-10-01
Since the pioneering results of Jakobson and subsequent work by Benedicks-Carleson and others, it is known that quadratic maps f_a(x) = a - x^2 admit a unique absolutely continuous invariant measure for a positive measure set of parameters a. For topologically mixing f_a, Young and Keller-Nowicki independently proved exponential decay of correlation functions for this a.c.i.m. and smooth observables. We consider random compositions of small perturbations f + ω_t, with f = f_a or another unimodal map satisfying certain nonuniform hyperbolicity axioms, and ω_t chosen independently and identically in [-ε, ε]. Baladi-Viana showed exponential mixing of the associated Markov chain, i.e., averaging over all random itineraries. We obtain stretched exponential bounds for the random correlation functions of Lipschitz observables for the sample measure μ_ω of almost every itinerary.
Two-actor conflict with time delay: A dynamical model
NASA Astrophysics Data System (ADS)
Qubbaj, Murad R.; Muneepeerakul, Rachata
2012-11-01
Recent mathematical dynamical models of the conflict between two different actors, be they nations, groups, or individuals, have been developed that are capable of predicting various outcomes depending on the chosen feedback strategies, initial conditions, and the previous states of the actors. In addition to these factors, this paper examines the effect of time delayed feedback on the conflict dynamics. Our analysis shows that under certain initial and feedback conditions, a stable neutral equilibrium of conflict may destabilize for some critical values of time delay, and the two actors may evolve to new emotional states. We investigate the results by constructing critical delay surfaces for different sets of parameters and analyzing results from numerical simulations. These results provide new insights regarding conflict and conflict resolution and may help planners in adjusting and assessing their strategic decisions.
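A minimal sketch of such a system: Euler integration of a toy two-actor model with delayed, asymmetric feedback, scanning the delay to watch the neutral equilibrium destabilize. The equations, coefficients, and the quoted critical delay are computed for this toy model only; they are illustrative assumptions, not the paper's model.

```python
import numpy as np

def simulate_conflict(tau, T=100.0, dt=0.001, c=2.0):
    """Euler integration of a toy delayed two-actor model:
        x' = -x - c*tanh(y(t - tau))
        y' = -y + c*tanh(x(t - tau))
    Without delay the neutral state (0, 0) is a stable spiral; past a
    critical tau it loses stability and sustained oscillations appear."""
    n = int(T / dt)
    lag = int(tau / dt)
    x, y = np.zeros(n), np.zeros(n)
    x[0], y[0] = 0.05, -0.05                  # small initial provocation
    for t in range(n - 1):
        xd = x[max(t - lag, 0)]               # delayed states
        yd = y[max(t - lag, 0)]               # (constant early history)
        x[t + 1] = x[t] + dt * (-x[t] - c * np.tanh(yd))
        y[t + 1] = y[t] + dt * (-y[t] + c * np.tanh(xd))
    return x, y

# For this toy model, linearization gives the critical delay
# tau_c = (pi/6)/sqrt(c^2 - 1) ~ 0.30 at c = 2.
for tau in (0.1, 0.3, 0.6):
    x, _ = simulate_conflict(tau)
    print(tau, np.abs(x[-10000:]).max())      # amplitude after transients
```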
Trojanowicz, Karol; Wójcik, Włodzimierz
2011-01-01
The article presents a case study on the calibration and verification of mathematical models of organic carbon removal kinetics in biofilm. The chosen Harremöes and Wanner & Reichert models were calibrated with a set of model parameters obtained both from dedicated studies conducted at pilot and lab scales under petrochemical wastewater conditions and from the literature. Next, the models were successfully verified through studies carried out using a pilot ASFBBR-type bioreactor installed in an oil-refinery wastewater treatment plant. During verification the pilot biofilm reactor worked under varying surface organic loading rates (SOL), dissolved oxygen concentrations and temperatures. The verification proved that the models can be applied in practice to petrochemical wastewater treatment engineering, e.g. for biofilm bioreactor dimensioning.
Nonparaxial rogue waves in optical Kerr media.
Temgoua, D D Estelle; Kofane, T C
2015-06-01
We consider the inhomogeneous nonparaxial nonlinear Schrödinger (NLS) equation with varying dispersion, nonlinearity, and nonparaxiality coefficients, which governs nonlinear wave propagation in an inhomogeneous optical fiber system. We present the similarity and Darboux transformations, and for the chosen specific set of parameters and free functions, the first- and second-order rational solutions of the nonparaxial NLS equation are generated. In particular, the features of rogue waves are analyzed through polynomial and Jacobian elliptic functions, showing the nonparaxial effects. It is shown that nonparaxiality increases the intensity of rogue waves by increasing their length and reducing their width simultaneously; in this way it increases their speed and penalizes interactions between them. These properties and the characteristic controllability of the nonparaxial rogue waves may provide another opportunity for experimental realizations and potential applications in optical fibers.
Liu, Sheng; Xie, Jun; Chen, Xiangqing; Yang, Liqiang; Su, Dan; Fang, Yan; Yu, Na; Fang, Wei
2010-02-01
The aim was to optimize the formula of Glycyrrhiza flavonoid and ferulic acid cream and to set up its quality control parameters. The reflect-line orthogonal simplex method was used to optimize the main factors, such as the amounts of Myrj52-glyceryl monostearate and dimethicone, based on the appearance, spreadability and stability of the cream. 9.0% Myrj52-glyceryl monostearate (3:2) and 2.5% dimethicone were chosen for the prescription. The prepared cream showed good stability after being kept for 24 h at 5 °C, 25 °C and 37 °C, respectively, and its spreadability suited the properties of a semi-fluid cream. The formula of Glycyrrhiza flavonoid and ferulic acid cream is thus suitable, and its quality is stable. The reflect-line orthogonal simplex method is suitable for the formula optimization of creams.
A gyrokinetic one-dimensional scrape-off layer model of an edge-localized mode heat pulse
Shi, E. L.; Hakim, A. H.; Hammett, G. W.
2015-02-03
An electrostatic gyrokinetic-based model is applied to simulate parallel plasma transport in the scrape-off layer to a divertor plate. We focus on a test problem that has been studied previously, using parameters chosen to model a heat pulse driven by an edge-localized mode in JET. Previous work has used direct particle-in-cell simulations with full dynamics, or Vlasov or fluid equations with only parallel dynamics. With the use of the gyrokinetic quasineutrality equation and logical sheath boundary conditions, spatial and temporal resolution requirements are no longer set by the electron Debye length and plasma frequency, respectively. Finally, this test problem also helps illustrate some of the physics contained in the Hamiltonian form of the gyrokinetic equations and some of the numerical challenges in developing an edge gyrokinetic code.
At what wavelengths should we search for signals from extraterrestrial intelligence?
Townes, C. H.
1983-01-01
It has often been concluded that searches for extraterrestrial intelligence (SETI) should concentrate on attempts to receive signals in the microwave region, the argument being given that communication can occur there at minimum broadcasted power. Such a conclusion is shown to result only under a restricted set of assumptions. If generalized types of detection are considered—in particular, photon detection rather than linear detection alone—and if advantage is taken of the directivity of telescopes at short wavelengths, then somewhat less power is required for communication at infrared wavelengths than in the microwave region. Furthermore, a variety of parameters other than power alone may be chosen for optimization by an extraterrestrial civilization. Hence, while partially satisfying arguments may be given about optimal wavelengths for a search for signals from extraterrestrial intelligence, considerable uncertainty must remain. PMID:16593279
A portable device for detecting fruit quality by diffuse reflectance Vis/NIR spectroscopy
NASA Astrophysics Data System (ADS)
Sun, Hongwei; Peng, Yankun; Li, Peng; Wang, Wenxiu
2017-05-01
Soluble solids content (SSC) is a major quality parameter of fruit, influencing its flavor and texture. Several studies on the on-line, non-invasive detection of fruit quality have been published; however, consumers currently want portable devices. This study aimed to develop a portable device for accurate, real-time and nondestructive determination of fruit quality factors based on diffuse reflectance Vis/NIR spectroscopy (520-950 nm). The hardware of the device consisted of four units: a light source unit, a spectral acquisition unit, a central processing unit and a display unit. A halogen lamp was chosen as the light source. In operation, the hand-held probe is placed in contact with the surface of the fruit sample, forming a dark environment that shields against interfering outside light. Diffusely reflected light was collected and measured by a spectrometer (USB4000). An ARM (Advanced RISC Machines) processor, serving as the central processing unit, controlled all parts of the device and analyzed the spectral data. A liquid crystal display (LCD) touch screen was used to interface with users. To validate its reliability and stability, 63 apples were tested in the experiment, 47 of which were chosen as the calibration set and the rest as the prediction set. Their SSC reference values were measured by refractometer. The spectral data acquired by the portable device were processed by standard normal variate (SNV) transformation and a Savitzky-Golay (S-G) filter to suppress spectral noise. Partial least squares regression (PLSR) was then applied to build prediction models, and the best prediction results were achieved with a correlation coefficient (r) of 0.855 and a standard error of 0.6033 °Brix. The results demonstrated that this device is feasible for quantitative analysis of the soluble solids content of apples.
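The processing chain (SNV, Savitzky-Golay smoothing, PLSR) is standard; a hedged sketch with SciPy/scikit-learn follows. The synthetic spectra, the fake absorption band, and all settings are placeholders for the device's 63 apple measurements.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    """Standard normal variate: center and scale each spectrum."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# Synthetic stand-in data: SSC modulates a fake band near 840 nm.
rng = np.random.default_rng(2)
y = rng.uniform(10, 16, size=63)                  # SSC references, °Brix
wl = np.linspace(520, 950, 400)
band = np.exp(-((wl - 840) / 30) ** 2)
X = 1.0 + 0.02 * y[:, None] * band + rng.normal(0, 0.01, (63, 400))

Xp = savgol_filter(snv(X), window_length=11, polyorder=2, axis=1)
cal, pred = slice(0, 47), slice(47, 63)           # 47 calibration, 16 test

pls = PLSRegression(n_components=8).fit(Xp[cal], y[cal])
y_hat = pls.predict(Xp[pred]).ravel()
r = np.corrcoef(y[pred], y_hat)[0, 1]
sep = np.sqrt(np.mean((y[pred] - y_hat) ** 2))    # standard error, °Brix
print(r, sep)
```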
The trade-off between morphology and control in the co-optimized design of robots.
Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya
2017-01-01
Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real-world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in the face of new search techniques.
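A sketch of the BO loop, assuming the scikit-optimize library. The `locomotion_trial` objective is a synthetic stand-in for a physical rollout (in the paper this is a real-world experiment, not a function), and the search-space bounds are invented.

```python
import numpy as np
from skopt import gp_minimize

def locomotion_trial(params):
    """Stand-in for one rollout: returns negative walked distance for a
    child robot built/controlled with `params` (we minimize)."""
    leg_length, gait_freq = params
    return -(np.sin(3 * leg_length) + np.cos(2 * gait_freq))

# Joint morphology-control (MC) search space: one morphological and one
# control dimension; a control-only (C) run would freeze the first.
mc_space = [(0.1, 1.0), (0.5, 3.0)]
res = gp_minimize(locomotion_trial, mc_space, n_calls=25, random_state=0)
print(res.x, -res.fun)          # best design and its walked distance
```

The 25 evaluations echo the paper's observation that MC co-optimization already wins with as few as 25 iterations.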
An Approach to Remove the Systematic Bias from the Storm Surge forecasts in the Venice Lagoon
NASA Astrophysics Data System (ADS)
Canestrelli, A.
2017-12-01
In this work a novel approach is proposed for removing the systematic bias from the storm surge forecast computed by a two-dimensional shallow-water model. The model covers both the Adriatic and Mediterranean seas and provides the forecast at the entrance of the Venice Lagoon. The wind drag coefficient at the water-air interface is treated as a calibration parameter, with a different value for each range of wind velocities and wind directions. This sums up to a total of 16-64 parameters to be calibrated, depending on the chosen resolution. The best set of parameters is determined by means of an optimization procedure that minimizes the RMS error between measured and modeled water levels in Venice for the period 2011-2015. It is shown that a bias is present, in that the peaks of wind velocity provided by the weather forecast are largely underestimated, and that the calibration procedure removes this bias. When the calibrated model is used to reproduce events not included in the calibration dataset, the forecast error is strongly reduced, confirming the quality of the procedure. The proposed approach is not site-specific and could be applied to different situations, such as storm surges caused by intense hurricanes.
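A toy sketch of the calibration idea: bin the wind speed, treat one drag coefficient per bin as a free parameter, and minimize the water-level RMSE with SciPy. The quadratic stress stand-in replaces the 2D shallow-water run, direction bins are omitted, and all numbers (including the 16 bins) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def surge_rmse(cd_bins, wind_speed, observed, surge_model):
    """RMSE of modeled vs. measured water level when the wind drag
    coefficient is looked up per wind-speed bin."""
    edges = np.linspace(0, 30, len(cd_bins) + 1)
    idx = np.clip(np.digitize(wind_speed, edges) - 1, 0, len(cd_bins) - 1)
    cd = cd_bins[idx]
    return np.sqrt(np.mean((surge_model(cd, wind_speed) - observed) ** 2))

# Toy quadratic-stress model standing in for the shallow-water run;
# synthetic "observations" have a speed-dependent drag built in.
toy_model = lambda cd, w: 1e-2 * cd * w ** 2
rng = np.random.default_rng(3)
w = rng.uniform(0, 30, 500)
obs = toy_model(2.0 + 0.05 * w, w) + rng.normal(0, 0.05, 500)

x0 = np.full(16, 2.0)                         # 16 speed bins
res = minimize(surge_rmse, x0, args=(w, obs, toy_model),
               method='Nelder-Mead', options={'maxiter': 5000})
print(res.fun)                                # calibrated RMSE
```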
Detecting microsatellites within genomes: significant variation among algorithms.
Leclercq, Sébastien; Rivals, Eric; Jarne, Philippe
2007-04-18
Microsatellites are short, tandemly-repeated DNA sequences which are widely distributed among genomes. Their structure, role and evolution can be analyzed based on exhaustive extraction from sequenced genomes. Several dedicated algorithms have been developed for this purpose. Here, we compared the detection efficiency of five of them (TRF, Mreps, Sputnik, STAR, and RepeatMasker). Our analysis was first conducted on the human X chromosome, and microsatellite distributions were characterized by microsatellite number, length, and divergence from a pure motif. The algorithms work with user-defined parameters, and we demonstrate that the parameter values chosen can strongly influence microsatellite distributions. The five algorithms were then compared by fixing parameters settings, and the analysis was extended to three other genomes (Saccharomyces cerevisiae, Neurospora crassa and Drosophila melanogaster) spanning a wide range of size and structure. Significant differences for all characteristics of microsatellites were observed among algorithms, but not among genomes, for both perfect and imperfect microsatellites. Striking differences were detected for short microsatellites (below 20 bp), regardless of motif. Since the algorithm used strongly influences empirical distributions, studies analyzing microsatellite evolution based on a comparison between empirical and theoretical size distributions should therefore be considered with caution. We also discuss why a typological definition of microsatellites limits our capacity to capture their genomic distributions.
Transition from a conservative system to a quasi-dissipative one
NASA Astrophysics Data System (ADS)
Ding, Xiao-Ling; Lu, Yun-Qing; Jiang, Yu-Mei; Chen, He-Sheng; He, Da-Ren
2002-03-01
A quasi-dissipative system can display some dissipative properties and also some conservative properties. Such a system can be realized by a discontinuous and noninvertible two-dimensional area-preserving map. The first example is a model of an electronic relaxation oscillator with over-voltage protection^1. When the system gradually changes from the state without over-voltage protection to the state with protection, it displays a transition from a conservative system to a quasi-dissipative one. First, for a chosen group of parameters, a stochastic web formed by the image set of the discontinuous borderline of the system function shows chaotic supertransients. The chaotic motion in the web escapes to some elliptic islands. Then, as the over-voltage protection increases, the image set gradually loses the characteristics of a web and looks more and more like a typical chaotic attractor in a dissipative system. Some other phenomena that happen only in dissipative systems, such as crises and intermittency, can also be observed in this case. Such a transition can also be found in a kicked rotator. ^1 J. Wang et al., Phys. Rev. E, 64 (2001) 026202.
NASA Astrophysics Data System (ADS)
Akhmedova, Sh; Semenkin, E.
2017-02-01
Previously, a meta-heuristic approach, called Co-Operation of Biology-Related Algorithms or COBRA, for solving real-parameter optimization problems was introduced and described. COBRA’s basic idea consists of a cooperative work of five well-known bionic algorithms such as Particle Swarm Optimization, the Wolf Pack Search, the Firefly Algorithm, the Cuckoo Search Algorithm and the Bat Algorithm, which were chosen due to the similarity of their schemes. The performance of this meta-heuristic was evaluated on a set of test functions and its workability was demonstrated. Thus it was established that the idea of the algorithms’ cooperative work is useful. However, it is unclear which bionic algorithms should be included in this cooperation and how many of them. Therefore, the five above-listed algorithms and additionally the Fish School Search algorithm were used for the development of five different modifications of COBRA by varying the number of component-algorithms. These modifications were tested on the same set of functions and the best of them was found. Ways of further improving the COBRA algorithm are then discussed.
Swarm formation control utilizing elliptical surfaces and limiting functions.
Barnes, Laura E; Fields, Mary Anne; Valavanis, Kimon P
2009-12-01
In this paper, we present a strategy for organizing swarms of unmanned vehicles into a formation by utilizing artificial potential fields that were generated from normal and sigmoid functions. These functions construct the surface on which swarm members travel, controlling the overall swarm geometry and the individual member spacing. Nonlinear limiting functions are defined to provide tighter swarm control by modifying and adjusting a set of control variables that force the swarm to behave according to set constraints, formation, and member spacing. The artificial potential functions and limiting functions are combined to control swarm formation, orientation, and swarm movement as a whole. Parameters are chosen based on desired formation and user-defined constraints. This approach is computationally efficient and scales well to different swarm sizes, to heterogeneous systems, and to both centralized and decentralized swarm models. Simulation results are presented for a swarm of 10 and 40 robots that follow circle, ellipse, and wedge formations. Experimental results are included to demonstrate the applicability of the approach on a swarm of four custom-built unmanned ground vehicles (UGVs).
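A minimal kinematic sketch of the idea, assuming an elliptical potential whose minimum is the ellipse itself and a tanh limiting function that caps the commanded speed; inter-member spacing terms are omitted, and all constants are illustrative, not the paper's controller.

```python
import numpy as np

def swarm_step(positions, center, a=3.0, b=1.5, gain=0.6, dt=0.1):
    """One update: agents descend the potential
    U = (x^2/a^2 + y^2/b^2 - 1)^2, minimal on the ellipse itself,
    with a sigmoid (tanh) limiter capping the commanded speed."""
    d = positions - center
    q = (d[:, 0] / a) ** 2 + (d[:, 1] / b) ** 2
    grad = (4 * (q - 1))[:, None] * (d / np.array([a ** 2, b ** 2]))
    norm = np.linalg.norm(grad, axis=1, keepdims=True)
    step = gain * np.tanh(norm) * grad / np.maximum(norm, 1e-9)
    return positions - step * dt

rng = np.random.default_rng(4)
pos = rng.uniform(-5, 5, size=(10, 2))        # swarm of 10 robots
for _ in range(500):
    pos = swarm_step(pos, center=np.zeros(2))
# Agents settle on the ellipse: q should be close to 1 for all members.
q = (pos[:, 0] / 3.0) ** 2 + (pos[:, 1] / 1.5) ** 2
print(q.round(2))
```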
NASA Astrophysics Data System (ADS)
Tiwary, PremPyari; Sharma, Swati; Sharma, Prachi; Singh, Ram Kishor; Uma, R.; Sharma, R. P.
2016-12-01
This paper presents the spatio-temporal evolution of the magnetic field due to the nonlinear coupling between a fast magnetosonic wave (FMSW) and a low-frequency slow Alfvén wave (SAW). The dynamical equations of the finite-frequency FMSW and SAW in the presence of the ponderomotive force of the FMSW (pump wave) are presented. Numerical simulation has been carried out for the nonlinear coupled equations of the finite-frequency FMSW and SAW. A systematic scan of the nonlinear behavior/evolution of the pump FMSW has been done for one of the sets of parameters chosen in this paper, using the coupled dynamical equations. Filamentation of the fast magnetosonic wave has been considered to be responsible for the magnetic turbulence during laser plasma interaction. The results show that the formation and growth of localized structures depend on the background magnetic field, but the order of amplification is not affected by the magnitude of the background magnetic field. We show the relevance of our model for two different sets of parameters used in laboratory and astrophysical phenomena. One set of parameters pertains to experimental observations in the study of fast ignition in laser fusion, and hence to turbulent structures in a stellar environment. The other set corresponds to the study of magnetic field amplification in the clumpy medium surrounding the supernova remnant Cassiopeia A. The results indicate considerable randomness in the spatial structure of the magnetic field profile in both cases, giving a sufficient indication of turbulence. The turbulent spectra have been studied, and the break point has been found around k, which is consistent with the observations in both cases. The nonlinear wave-wave interaction presented in this paper may be important for understanding turbulence in the laboratory as well as in astrophysical phenomena.
Determining the Full Halo Coronal Mass Ejection Characteristics
NASA Astrophysics Data System (ADS)
Fainshtein, V. G.
2010-11-01
Observing halo coronal mass ejections (HCMEs) in the coronagraph field of view allows one to determine only the apparent parameters in the plane of the sky. Recently, several methods have been proposed that allow one to find some true geometrical and kinematical parameters of HCMEs. In most cases, a simple cone model was used to describe the CME shape. Observations show that various modifications of the cone model ("ice cream models") are most appropriate for describing the shapes of individual CMEs. This paper uses the method of determining full HCME parameters proposed by the author earlier to determine the parameters of 45 full HCMEs with various modifications of their shapes. I show that the determined CME characteristics depend significantly on the chosen CME shape. I conclude that the absence of criteria for a preliminary evaluation of the CME shape is a major source of error in determining the true parameters of a full HCME with any of the known methods. I show that, regardless of the chosen CME form, the trajectories of practically all the HCMEs in question deviate from the radial direction towards the Sun-Earth axis at the initial stage of their movement, and their angular size, on average, significantly exceeds that of all the observable CMEs.
Determination of Watershed Lag Equation for Philippine Hydrology
NASA Astrophysics Data System (ADS)
Cipriano, F. R.; Lagmay, A. M. F. A.; Uichanco, C.; Mendoza, J.; Sabio, G.; Punay, K. N.; Oquindo, M. R.; Horritt, M.
2014-12-01
Widespread flooding is a major problem in the Philippines. The country experiences heavy rainfall throughout the year, and several areas are prone to flood hazards because of its unique topography. Human casualties and destruction of infrastructure are some of the damages caused by flooding, and the country's government has undertaken various efforts to mitigate these hazards. One of the solutions was to create flood hazard maps of different floodplains and use them to predict the possible catastrophic results of different rain scenarios. To produce these maps, different types of data were needed, part of which was the calculation of hydrological components to arrive at an accurate output. This paper presents how an important parameter, the time-to-peak of the watershed (Tp), was calculated. Time-to-peak is defined as the time at which the largest discharge of the watershed occurs. It is computed using a lag time equation that was developed specifically for the Philippine setting. The equation involves three measurable parameters, namely, watershed length (L), maximum potential retention (S), and watershed slope (Y). This approach is based on a similar method developed by CH2M Hill and Horritt for Taiwan, which has a set of meteorological and hydrological parameters similar to that of the Philippines. Data from fourteen water level sensors covering 67 storms from all the regions in the country were used to estimate the time-to-peak. These sensors were chosen by a screening process that considers the distance of the sensors from the sea, the availability of recorded data, and the catchment size. Values of Tp from the different sensors were generated from the general lag time equation in the Natural Resources Conservation Service handbook of the US Department of Agriculture. The calculated Tp values were plotted against the values obtained from the expression L^0.8 (S+1)^0.7 / Y^0.5. Regression analysis was used to obtain the final equation for calculating the time-to-peak specifically for rivers in the Philippine setting. The calculated values could then be used as a parameter for modeling different flood scenarios in the country.
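For reference, the general lag-time relation referred to above is the SCS/NRCS form; a hedged sketch follows. The 1900 denominator, the foot/percent/inch unit conventions, the curve-number formula for S, and the Tp = D/2 + lag approximation are the standard US handbook values, assumed here rather than taken from the paper.

```python
def watershed_lag_hours(L_ft, CN, Y_pct):
    """SCS/NRCS lag equation: Tlag = L^0.8 (S+1)^0.7 / (1900 Y^0.5),
    with L the hydraulic flow length (ft), Y the slope (%), and
    S = 1000/CN - 10 the maximum potential retention (inches)."""
    S = 1000.0 / CN - 10.0
    return L_ft ** 0.8 * (S + 1.0) ** 0.7 / (1900.0 * Y_pct ** 0.5)

def time_to_peak_hours(lag_h, rain_duration_h):
    # Common SCS approximation: Tp = D/2 + lag.
    return rain_duration_h / 2.0 + lag_h

lag = watershed_lag_hours(L_ft=15000, CN=75, Y_pct=3.0)
print(lag, time_to_peak_hours(lag, rain_duration_h=1.0))
```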
NASA Astrophysics Data System (ADS)
Becker, R.; Usman, M.
2017-12-01
A SWAT (Soil and Water Assessment Tool) model is applied in the semi-arid Punjab region in Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. In order to run the model successfully, detailed attention is paid to the calibration procedure. The study deals with the following calibration issues: (i) lack of reliable calibration/validation data, (ii) the difficulty of accurately modeling a highly managed system with a physically based hydrological model, and (iii) the use of alternative and spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g. runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. From evapotranspiration (ET), however, principal hydrological processes can still be inferred. Usman et al. (2015) derived satellite-based monthly ET data for our study area with SEBAL (Surface Energy Balance Algorithm) and created a reliable ET data set, which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of spatially uniform calibration data. A sensitivity analysis reveals the parameters most sensitive to changes in ET, which are then selected for the calibration process. Using the SEBAL ET product, we calibrate the SWAT model for the period 2005-2006 with a dynamically dimensioned global search algorithm that minimizes RMSE. The model improvement after the calibration procedure is finally evaluated, based on the previously chosen evaluation criteria, for the period 2007-2008. The study reveals the sensitivity of SWAT model parameters to changes in ET in a semi-arid, human-controlled system and the potential of calibrating those parameters using satellite-derived ET data.
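One of the evaluation criteria named above, the Nash-Sutcliffe efficiency, is simple to compute; a short sketch follows, with invented ET values standing in for the SEBAL and SWAT series.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2);
    1 is a perfect fit, 0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

et_sebal = np.array([92.0, 110.0, 135.0, 128.0, 101.0])   # mm/month
et_swat  = np.array([85.0, 118.0, 130.0, 120.0, 95.0])
print(nash_sutcliffe(et_sebal, et_swat))
```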
Chang, Yingju; Lai, Juin-Yih; Lee, Duu-Jong
2016-12-01
The standard Gibbs free energy, enthalpy and entropy change data for adsorption equilibrium reported in the biosorption literature during January 2013-May 2016 are listed. Since the studied biosorption systems are all near-equilibrium processes, the enthalpy and entropy change data evaluated by fitting temperature-dependent free energy data with the van 't Hoff equation reveal a compensation artifact. Additional confusion is introduced by the arbitrarily chosen adsorbate concentration unit in the bulk solution, which adds the free energy change of mixing to the reported free energy and enthalpy change data. Different standard states may be chosen to properly describe biosorption processes; however, this makes general comparison between data from different systems inappropriate. No conclusion should be drawn from unjustified thermodynamic parameters reported in biosorption studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
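For reference, the van 't Hoff construction being critiqued, and a sketch of why the compensation artifact follows (our summary of the argument, not the paper's notation):

```latex
\Delta G^{\circ} = -RT\ln K, \qquad
\ln K = -\frac{\Delta H^{\circ}}{R}\,\frac{1}{T} + \frac{\Delta S^{\circ}}{R}
```

If ΔG°(T) is nearly constant over the narrow temperature range of a near-equilibrium experiment, every fitted pair satisfies ΔH° ≈ ΔG° + T̄ΔS°, a straight line in the (ΔS°, ΔH°) plane with slope equal to the mean experimental temperature T̄: apparent enthalpy-entropy compensation produced by the fitting procedure rather than by the sorption chemistry. Likewise, rescaling the concentration unit multiplies K by a constant c, shifting ΔS° by R ln c while leaving the fitted ΔH° untouched, which is the mixing-term confusion noted above.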
Varadharajan, Venkatramanan; Vadivel, Sudhan Shanmuga; Ramaswamy, Arulvel; Sundharamurthy, Venkatesaprabhu; Chandrasekar, Priyadharshini
2017-01-01
Tannase production by Aspergillus oryzae using various agro-wastes as substrates in submerged fermentation was studied in this research. The microbe was isolated from degrading corn kernels obtained from the corn fields at Tiruchengode, India. The microbial identification was done using 18S rRNA gene analysis. The agro-wastes chosen for the study were pomegranate rind, Cassia auriculata flower, black gram husk, and tea dust. The process parameters chosen for the optimization study were substrate concentration, pH, temperature, and incubation period. During one-variable-at-a-time optimization, the pomegranate rind extract produced a maximum tannase activity of 138.12 IU/mL, and it was chosen as the best substrate for further experiments. The quadratic model was found to be the effective model for prediction of tannase production by A. oryzae. The optimized conditions predicted by response surface methodology (RSM) with a genetic algorithm (GA) were 1.996% substrate concentration, pH of 4.89, temperature of 34.91 °C, and an incubation time of 70.65 h, with a maximum tannase activity of 138.363 IU/mL. The confirmatory experiment under optimized conditions showed a tannase activity of 139.22 IU/mL. Hence, the RSM-GA pair was successfully used in this study to optimize the process parameters required for the production of tannase using pomegranate rind. © 2015 International Union of Biochemistry and Molecular Biology, Inc.
NASA Astrophysics Data System (ADS)
Abedini, M. J.; Nasseri, M.; Burn, D. H.
2012-04-01
In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.
Rauch, Phillip; Lin, Pei-Jan Paul; Balter, Stephen; Fukuda, Atsushi; Goode, Allen; Hartwell, Gary; LaFrance, Terry; Nickoloff, Edward; Shepard, Jeff; Strauss, Keith
2012-05-01
Task Group 125 (TG 125) was charged with investigating the functionality of fluoroscopic automatic dose rate and image quality control logic in modern angiographic systems, paying specific attention to the spectral shaping filters and variations in the selected radiologic imaging parameters. The task group was also charged with describing the operational aspects of the imaging equipment for the purpose of assisting the clinical medical physicist with clinical set-up and performance evaluation. Although there are clear distinctions between the fluoroscopic operation of an angiographic system and its acquisition modes (digital cine, digital angiography, digital subtraction angiography, etc.), the scope of this work was limited to the fluoroscopic operation of the systems studied. The use of spectral shaping filters in cardiovascular and interventional angiography equipment has been shown to reduce patient dose. If the imaging control algorithm were programmed to work in conjunction with the selected spectral filter, and if the generator parameters were optimized for the selected filter, then image quality could also be improved. Although assessment of image quality was not included as part of this report, it was recognized that for fluoroscopic imaging the parameters that influence radiation output, differential absorption, and patient dose are also the same parameters that influence image quality. Therefore, this report will utilize the terminology "automatic dose rate and image quality" (ADRIQ) when describing the control logic in modern interventional angiographic systems and, where relevant, will describe the influence of controlled parameters on the subsequent image quality. A total of 22 angiography units were investigated by the task group and of these one each was chosen as representative of the equipment manufactured by GE Healthcare, Philips Medical Systems, Shimadzu Medical USA, and Siemens Medical Systems. All equipment, for which measurement data were included in this report, was manufactured within the three year period from 2006 to 2008. Using polymethylmethacrylate (PMMA) plastic to simulate patient attenuation, each angiographic imaging system was evaluated by recording the following parameters: tube potential in units of kilovolts peak (kVp), tube current in units of milliamperes (mA), pulse width (PW) in units of milliseconds (ms), spectral filtration setting, and patient air kerma rate (PAKR) as a function of the attenuator thickness. Data were graphically plotted to reveal the manner in which the ADRIQ control logic responded to changes in object attenuation. There were similarities in the manner in which the ADRIQ control logic operated that allowed the four chosen devices to be divided into two groups, with two of the systems in each group. There were also unique approaches to the ADRIQ control logic that were associated with some of the systems, and these are described in the report. The evaluation revealed relevant information about the testing procedure and also about the manner in which different manufacturers approach the utilization of spectral filtration, pulsed fluoroscopy, and maximum PAKR limitation. This information should be particularly valuable to the clinical medical physicist charged with acceptance testing and performance evaluation of modern angiographic systems.
Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images
NASA Astrophysics Data System (ADS)
Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk
2007-02-01
The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allow for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
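A hedged sketch of the layering workflow with Astropy: each data set is intensity-scaled independently, then combined into a color composite (here via the Lupton asinh scheme, one common choice rather than the authors' specific recipe). The synthetic arrays stand in for registered FITS frames.

```python
import numpy as np
from astropy.io import fits            # real data: fits.getdata("ha.fits")
from astropy.visualization import make_lupton_rgb
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
# Synthetic stand-ins for three registered narrowband exposures; in
# practice each layer would come from fits.getdata on a calibrated frame.
layers = [rng.gamma(2.0, 1.0, size=(256, 256)) for _ in range(3)]

def rescale(img, lo_pct=0.5, hi_pct=99.5):
    """Independent intensity scaling per data set (percentile clip),
    the per-layer step described in the text."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

r, g, b = (rescale(l) for l in layers)
rgb = make_lupton_rgb(r, g, b, stretch=0.5, Q=8)   # asinh color composite
plt.imsave("composite.png", rgb)
```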
NASA Astrophysics Data System (ADS)
Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.
2012-10-01
We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.
An efficient sampling technique for sums of bandpass functions
NASA Technical Reports Server (NTRS)
Lawton, W. M.
1982-01-01
A well known sampling theorem states that a bandlimited function can be completely determined by its values at a uniformly placed set of points whose density is at least twice the highest frequency component of the function (Nyquist rate). A less familiar but important sampling theorem states that a bandlimited narrowband function can be completely determined by its values at a properly chosen, nonuniformly placed set of points whose density is at least twice the passband width. This allows for efficient digital demodulation of narrowband signals, which are common in sonar, radar and radio interferometry, without the side effect of signal group delay from an analog demodulator. This theorem was extended by developing a technique which allows a finite sum of bandlimited narrowband functions to be determined by its values at a properly chosen, nonuniformly placed set of points whose density can be made arbitrarily close to the sum of the passband widths.
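The uniform special case of this idea is easy to demonstrate numerically. The sketch below illustrates the classical bandpass undersampling theorem stated above, not the paper's nonuniform technique: a 10 Hz-wide band centered near 100 Hz is sampled at only 21 Hz (about twice the bandwidth rather than twice the highest frequency), and the band folds intact to baseband. The band edges, tone frequencies, and sampling rate are chosen purely for the demonstration.

```python
# Hedged sketch: classical bandpass undersampling with NumPy.
# The band [95, 105] Hz (width B = 10 Hz) is sampled at fs = 21 Hz,
# just above 2B; the integer band position guarantees the band folds
# to baseband without overlap (here f -> |f - 105|).
import numpy as np

fs = 21.0                     # sampling rate, ~2B instead of >210 Hz
N = 2100                      # multiple of 21 so aliases land on FFT bins
t = np.arange(N) / fs

# Two in-band tones at 97 Hz and 102 Hz.
x = np.cos(2 * np.pi * 97 * t) + 0.5 * np.cos(2 * np.pi * 102 * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(np.round(peaks, 1)))   # [3.0, 8.0]: 102 -> 3 Hz, 97 -> 8 Hz
```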
Aliev, Abil E; Kulke, Martin; Khaneja, Harmeet S; Chudasama, Vijay; Sheppard, Tom D; Lanigan, Rachel M
2014-01-01
We propose a new approach to force field optimization that aims at reproducing dynamics characteristics in biomolecular MD simulations, in addition to improved prediction of motionally averaged structural properties available from experiment. As the source of experimental data for the dynamics fittings, we use 13C NMR spin-lattice relaxation times T1 of backbone and sidechain carbons, which allow the correlation times of both overall molecular and intramolecular motions to be determined. For the structural fittings, we use motionally averaged experimental values of NMR J couplings. The proline residue and its derivative 4-hydroxyproline, with their relatively simple cyclic structure and sidechain dynamics, were chosen for the assessment of the new approach in this work. Initially, grid search and simplexed MD simulations identified a large number of parameter sets that fit the experimental J couplings equally well. Using the Arrhenius-type relationship between the force constant and the correlation time, the available MD data for a series of parameter sets were analyzed to predict the value of the force constant that best reproduces the experimental timescale of the sidechain dynamics. Verification of the new force field (termed AMBER99SB-ILDNP) against NMR J couplings and correlation times showed consistent and significant improvements over the original force field in reproducing both structural and dynamics properties. The results suggest that matching experimental timescales of motions together with motionally averaged characteristics is a valid approach to force field parameter optimization. Such a comprehensive approach is not restricted to cyclic residues and can be extended to other amino acid residues, as well as to the backbone. Proteins 2014; 82:195–215. © 2013 Wiley Periodicals, Inc. PMID:23818175
Semianalytic Satellite Theory (SST): Mathematical Algorithms
1994-01-01
... orbital state of a satellite with an equinoctial element set (a1, ..., a6) ... applied to a wide variety of orbit element sets. The equinoctial elements were chosen for SST because the variational equations for the equinoctial ... [Shaver, 1980]. 2.1.1 Definition of the Equinoctial Elements: There are six elements in the equinoctial element set: a1 = a (semimajor axis), a2 = h, a3 = ...
Weinreich, D M; Rand, D M
2000-01-01
We report that patterns of nonneutral DNA sequence evolution among published nuclear and mitochondrially encoded protein-coding loci differ significantly in animals. Whereas an apparent excess of amino acid polymorphism is seen in most (25/31) mitochondrial genes, this pattern is seen in fewer than half (15/36) of the nuclear data sets. This differentiation is even greater among data sets with significant departures from neutrality (14/15 vs. 1/6). Using forward simulations, we examined patterns of nonneutral evolution using parameters chosen to mimic the differences between mitochondrial and nuclear genetics (we varied recombination rate, population size, mutation rate, selective dominance, and intensity of germ line bottleneck). Patterns of evolution were correlated only with effective population size and strength of selection, and no single genetic factor explains the empirical contrast in patterns. We further report that in Arabidopsis thaliana, a highly self-fertilizing plant with effectively low recombination, five of six published nuclear data sets also exhibit an excess of amino acid polymorphism. We suggest that the contrast between nuclear and mitochondrial nonneutrality in animals stems from differences in rates of recombination in conjunction with a distribution of selective effects. If the majority of mutations segregating in populations are deleterious, high linkage may hinder the spread of the occasional beneficial mutation. PMID:10978302
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.
2016-04-15
Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.
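The feasibility subproblem at the heart of this formulation (does a given capacity vector admit a feasible flow for a scenario?) reduces to a max-flow computation via the standard super-source/super-sink construction. The sketch below illustrates that reduction with networkx; the paper's cut-set separation routine and greedy algorithm are not reproduced, and the toy network is an assumption.

```python
# Minimal sketch of the per-scenario feasibility check: a feasible
# flow exists iff the max flow from a super-source S to a super-sink
# T saturates the total demand.
import networkx as nx

def scenario_feasible(arcs, capacity, balance):
    """arcs: list of (u, v); capacity: dict arc -> capacity;
    balance: dict node -> supply (+) or demand (-)."""
    G = nx.DiGraph()
    for (u, v) in arcs:
        G.add_edge(u, v, capacity=capacity[(u, v)])
    total_demand = 0.0
    for node, b in balance.items():
        if b > 0:                      # supply node
            G.add_edge("S", node, capacity=b)
        elif b < 0:                    # demand node
            G.add_edge(node, "T", capacity=-b)
            total_demand += -b
    flow_value, _ = nx.maximum_flow(G, "S", "T")
    return flow_value >= total_demand - 1e-9

arcs = [("a", "b"), ("b", "c")]
caps = {("a", "b"): 5, ("b", "c"): 3}
print(scenario_feasible(arcs, caps, {"a": 4, "c": -4}))  # False: b->c caps at 3
```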
Bread board float zone experiment system for high purity silicon
NASA Technical Reports Server (NTRS)
Kern, E. L.; Gill, G. L., Jr.
1982-01-01
A breadboard float zone experimental system has been established at Westech Systems for use by NASA in the float zone experimental area. A used zoner of suitable size and flexibility was acquired and installed with the necessary utilities. Repairs, alignments and modifications were made to provide for dislocation-free zoning of silicon. The zoner is capable of studying process parameters used in growing silicon in gravity and is flexible enough to allow trying new features that will test concepts of zoning in microgravity. Characterizing the state-of-the-art molten zones of a growing silicon crystal will establish the data base against which improvements of zoning in gravity or growing in microgravity can be compared. A 25 mm diameter was chosen as the reference size, since growth in microgravity will be at that diameter or smaller for about the next 6 years. Dislocation-free crystals were grown in the ⟨100⟩ and ⟨111⟩ orientations, using a wide set of growth conditions. The zone shape at one set of conditions was measured by simultaneously aluminum doping and freezing the zone, lengthwise slabbing, and delineating by etching. The whole set of crystals, grown under various conditions, was slabbed, polished and striation etched, revealing the growth interface shape and the periodic and aperiodic natures of the striations.
The non-contact heart rate measurement system for monitoring HRV.
Huang, Ji-Jer; Yu, Sheng-I; Syu, Hao-Yi; See, Aaron Raymond
2013-01-01
A noncontact ECG monitoring and analysis system was developed using a capacitively coupled device integrated into a home sofa. Electrodes were placed on the backrest of the sofa, separated from the body by only the chair covering and the user's clothing. The study also incorporated measurements using different fabric materials, and a pure cotton material was chosen to cover the chair's backrest to improve the signal-to-noise ratio. The system was initially implemented on a home sofa and is able to measure non-contact ECG through thin cotton clothing and perform heart rate analysis to calculate the heart rate variability (HRV) parameters. It was also tested under different conditions, and results from reading and sleeping exhibited a stable ECG. Subsequently, the calculated HRV results were found to be identical to those of a commercially available HRV analyzer. However, HRV parameters are easily affected by motion artifacts generated during drinking or eating, with the latter producing a more severe disturbance. Lastly, the measured parameters are saved in a cloud database, providing users with long-term monitoring and recording of physiological information.
Oetjen, Janina; Lachmund, Delf; Palmer, Andrew; Alexandrov, Theodore; Becker, Michael; Boskamp, Tobias; Maass, Peter
2016-09-01
A standardized workflow for matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI imaging MS) is a prerequisite for the routine use of this promising technology in clinical applications. We present an approach to develop standard operating procedures for MALDI imaging MS sample preparation of formalin-fixed and paraffin-embedded (FFPE) tissue sections based on a novel quantitative measure of dataset quality. To cover many parts of the complex workflow and simultaneously test several parameters, experiments were planned according to a fractional factorial design of experiments (DoE). The effect of ten different experiment parameters was investigated in two distinct DoE sets, each consisting of eight experiments. FFPE rat brain sections were used as standard material because of their low biological variance. The mean peak intensity and a recently proposed spatial complexity measure were calculated for a list of 26 predefined peptides obtained by in silico digestion of five different proteins and served as quality criteria. A five-way analysis of variance (ANOVA) was applied to the final scores to retrieve a ranking of experiment parameters with increasing impact on data variance. Graphical abstract: MALDI imaging experiments were planned according to a fractional factorial design of experiments for the parameters under study. Selected peptide images were evaluated by the chosen quality metric (structure and intensity for a given peak list), and the calculated values were used as input for the ANOVA. The parameters with the highest impact on quality were deduced and SOPs recommended.
NASA Astrophysics Data System (ADS)
Pan, Wenyong; Geng, Yu; Innanen, Kristopher A.
2018-05-01
The problem of inverting for multiple physical parameters in the subsurface using seismic full-waveform inversion (FWI) is complicated by interparameter trade-off arising from inherent ambiguities between different physical parameters. Parameter resolution is often characterized using scattering radiation patterns, but these neglect some important aspects of interparameter trade-off. More general analysis and mitigation of interparameter trade-off in isotropic-elastic FWI is possible through judiciously chosen multiparameter Hessian matrix-vector products. We show that products of multiparameter Hessian off-diagonal blocks with model perturbation vectors, referred to as interparameter contamination kernels, are central to the approach. We apply the multiparameter Hessian to various vectors designed to provide information regarding the strengths and characteristics of interparameter contamination, both locally and within the whole volume. With numerical experiments, we observe that S-wave velocity perturbations introduce strong contaminations into density and phase-reversed contaminations into P-wave velocity, but themselves experience only limited contaminations from other parameters. Based on these findings, we introduce a novel strategy to mitigate the influence of interparameter trade-off with approximate contamination kernels. Furthermore, we recommend that the local spatial and interparameter trade-offs of the inverted models be quantified using extended multiparameter point spread functions (EMPSFs) obtained with a preconditioned conjugate-gradient algorithm. Compared to traditional point spread functions, the EMPSFs appear to provide more accurate measurements for resolution analysis, by de-blurring the estimations, scaling magnitudes and mitigating interparameter contamination. Approximate eigenvalue volumes constructed with a stochastic probing approach are proposed to evaluate the resolution of the inverted models across the whole model. With a synthetic Marmousi model example and a land seismic field data set from Hussar, Alberta, Canada, we confirm that the new inversion strategy suppresses the interparameter contamination effectively and provides more reliable density estimations in isotropic-elastic FWI compared to the standard simultaneous inversion approach.
Di Molfetta, A; Santini, L; Forleo, G B; Minni, V; Mafhouz, K; Della Rocca, D G; Fresiello, L; Romeo, F; Ferrari, G
2012-01-01
In spite of cardiac resynchronization therapy (CRT) benefits, 25-30% of patients are still non-responders. One of the possible reasons could be non-optimal atrioventricular (AV) and interventricular (VV) interval settings. Our aim was to exploit a numerical model of the cardiovascular system for AV and VV interval optimization in CRT. A CRT-dedicated numerical model of the cardiovascular system was previously developed. Echocardiographic parameters, systemic aortic pressure and ECG were collected in 20 consecutive patients before and after CRT. Patient data were simulated by the model, which was used to optimize the intervals and set them into the device at baseline and at follow-up. The optimal AV and VV intervals were chosen to optimize the simulated selected variable(s) on the basis of both echocardiographic and electrocardiographic parameters. Intervals were different for each patient and, in most cases, they changed at follow-up. The model can reproduce clinical data well, as verified with Bland-Altman analysis and a t-test (p > 0.05). Left ventricular remodeling was 38.7% and the left ventricular ejection fraction increase was 11%, against the 15% and 6% reported in the literature, respectively. The developed numerical model could reproduce patients' conditions at baseline and at follow-up, including the CRT effects. The model could be used to optimize AV and VV intervals at baseline and at follow-up, realizing a personalized and dynamic CRT. A patient-tailored CRT could improve patient outcomes in comparison to literature data.
Decision theory applied to image quality control in radiology.
Lessa, Patrícia S; Caous, Cristofer A; Arantes, Paula R; Amaro, Edson; de Souza, Fernando M Campello
2008-11-13
The present work aims at the application of decision theory to radiological image quality control (QC) in the diagnostic routine. The main problem addressed in the framework of decision theory is to accept or reject a film lot of a radiology service. The probability of each decision for a given set of variables was obtained from the selected films. Based on a radiology service routine, a decision probability function was determined for each considered group of combined characteristics. These characteristics were related to the film quality control. The parameters were framed in a set of 8 possibilities, resulting in 256 possible decision rules. In order to determine a general utility function to assess the decision risk, we used a single parameter called r. The payoffs chosen were: diagnostic result (correct/incorrect), cost (high/low), and patient satisfaction (yes/no), resulting in eight possible combinations. Depending on the value of r, more or less risk will be associated with the decision-making. The utility function was evaluated in order to determine the probability of a decision. The decision was made with patients' or administrators' opinions from a radiology service center. The model is a formal quantitative approach to making a decision related to medical imaging quality, providing an instrument to discriminate what is really necessary to accept or reject a film or a film lot. The method presented herein can help to assess the risk level of an incorrect radiological diagnosis decision.
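The combinatorics described above are small enough to enumerate directly. The following sketch counts the 8 payoff combinations and 256 decision rules and scores rules by expected utility; the outcome probabilities and the r-weighted utility used here are illustrative stand-ins for the paper's fitted quantities.

```python
# Hedged sketch of the paper's combinatorics: three binary payoff
# attributes give 2^3 = 8 outcome combinations, and a decision rule
# assigns accept/reject to each, giving 2^8 = 256 rules.
import itertools

outcomes = list(itertools.product([0, 1], repeat=3))  # (diagnosis, cost, satisfaction)
rules = list(itertools.product([0, 1], repeat=8))     # 256 accept/reject rules
assert len(rules) == 256

p = {o: 1 / 8 for o in outcomes}   # outcome probabilities (dummy values)

def utility(o, r=0.5):
    # Illustrative single-parameter utility, not the paper's exact form.
    diag_ok, low_cost, satisfied = o
    return r * diag_ok + (1 - r) / 2 * (low_cost + satisfied)

def expected_utility(rule):
    # Rejecting (rule value 0) is taken here as zero utility.
    return sum(p[o] * utility(o) * rule[i] for i, o in enumerate(outcomes))

best_rule = max(rules, key=expected_utility)
```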
Calculation of the Initial Magnetic Field for Mercury's Magnetosphere Hybrid Model
NASA Astrophysics Data System (ADS)
Alexeev, Igor; Parunakian, David; Dyadechkin, Sergey; Belenkaya, Elena; Khodachenko, Maxim; Kallio, Esa; Alho, Markku
2018-03-01
Several types of numerical models are used to analyze the interactions of the solar wind flow with Mercury's magnetosphere, including kinetic models that determine magnetic and electric fields based on the spatial distribution of charges and currents, magnetohydrodynamic models that describe plasma as a conductive liquid, and hybrid models that describe ions kinetically in collisionless mode and represent electrons as a massless neutralizing liquid. The structure of the resulting solutions is determined not only by the chosen set of equations that govern the behavior of the plasma, but also by the initial and boundary conditions; i.e., their effects are not limited to the amount of computational work required to achieve a quasi-stationary solution. In this work, we have proposed using the magnetic field computed by the paraboloid model of Mercury's magnetosphere as the initial condition for subsequent hybrid modeling. The results of the model have been compared to measurements performed by the Messenger spacecraft during a single crossing of the magnetosheath and the magnetosphere. The selected orbit lies in the terminator plane, which allows us to observe two crossings of the bow shock and the magnetopause. In our calculations, we have defined the initial parameters of the global magnetospheric current systems in a way that minimizes the deviation of the paraboloid magnetic field from the experimental data along the Messenger trajectory. We have shown that the optimal initial field parameters include partial penetration of the interplanetary magnetic field into the magnetosphere, with a penetration coefficient of 0.2.
Designing a podiatry service to meet the needs of the population: a service simulation.
Campbell, Jackie A
2007-02-01
A model of a podiatry service has been developed which takes into consideration the effect of changing access criteria, skill mix and staffing levels (among others) given fixed local staffing budgets and the foot-health characteristics of the local community. A spreadsheet-based deterministic model was chosen to allow maximum transparency of programming. This work models a podiatry service in England, but could be adapted for other settings and, with some modification, for other community-based services. This model enables individual services to see the effect on outcome parameters such as number of patients treated, number discharged and size of waiting lists of various service configurations, given their individual local data profile. The process of designing the model has also had spin-off benefits for the participants in making explicit many of the implicit rules used in managing their services.
Image smoothing and enhancement via min/max curvature flow
NASA Astrophysics Data System (ADS)
Malladi, Ravikanth; Sethian, James A.
1996-03-01
We present a class of PDE-based algorithms suitable for a wide range of image processing applications. The techniques are applicable to both salt-and-pepper gray-scale noise and full-image continuous noise present in black and white images, gray-scale images, texture images and color images. At the core, the techniques rely on a level set formulation of evolving curves and surfaces and the viscosity in profile evolution. Essentially, the method consists of moving the isointensity contours in an image under curvature dependent speed laws to achieve enhancement. Compared to existing techniques, our approach has several distinct advantages. First, it contains only one enhancement parameter, which in most cases is automatically chosen. Second, the scheme automatically stops smoothing at some optimal point; continued application of the scheme produces no further change. Third, the method is one of the fastest possible schemes based on a curvature-controlled approach.
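A minimal NumPy/SciPy sketch of one explicit step of such a flow is given below. It implements I_t = F|∇I| with the speed F switched between max(κ, 0) and min(κ, 0) by a local average; the finite-difference stencil, neighborhood size, and threshold convention are assumptions rather than the authors' exact scheme.

```python
# Illustrative min/max curvature-flow step, not the paper's exact scheme.
import numpy as np
from scipy.ndimage import uniform_filter

def minmax_curvature_step(I, dt=0.1, thresh=None, eps=1e-8):
    Iy, Ix = np.gradient(I)              # axis 0 = y, axis 1 = x
    Ixy, Ixx = np.gradient(Ix)
    Iyy, _ = np.gradient(Iy)
    grad = np.sqrt(Ix**2 + Iy**2)
    # Curvature of the iso-intensity contours.
    kappa = (Ixx * Iy**2 - 2 * Ix * Iy * Ixy + Iyy * Ix**2) / (grad**3 + eps)
    if thresh is None:
        thresh = I.mean()                # simple automatic threshold
    local_avg = uniform_filter(I, size=3)
    # Switch between max(kappa, 0) and min(kappa, 0) by the local average.
    F = np.where(local_avg < thresh, np.maximum(kappa, 0), np.minimum(kappa, 0))
    return I + dt * F * grad

# Repeated application smooths noise while preserving edges; the
# scheme settles toward a steady state instead of oversmoothing.
img = np.random.rand(64, 64)
for _ in range(50):
    img = minmax_curvature_step(img)
```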
Light nuclei of even mass number in the Skyrme model
NASA Astrophysics Data System (ADS)
Battye, R. A.; Manton, N. S.; Sutcliffe, P. M.; Wood, S. W.
2009-09-01
We consider the semiclassical rigid-body quantization of Skyrmion solutions of mass numbers B=4,6,8,10, and 12. We determine the allowed quantum states for each Skyrmion and find that they often match the observed states of nuclei. The spin and isospin inertia tensors of these Skyrmions are accurately calculated for the first time and are used to determine the excitation energies of the quantum states. We calculate the energy level splittings, using a suitably chosen parameter set for each mass number. We find good qualitative and encouraging quantitative agreement with experiment. In particular, the rotational bands of beryllium-8 and carbon-12, along with isospin 1 triplets and isospin 2 quintets, are especially well reproduced. We also predict the existence of states that have not yet been observed and make predictions for the unknown quantum numbers of some observed states.
The genetic code as a periodic table: algebraic aspects.
Bashford, J D; Jarvis, P D
2000-01-01
The systematics of indices of physico-chemical properties of codons and amino acids across the genetic code are examined. Using a simple numerical labelling scheme for nucleic acid bases, A=(-1,0), C=(0,-1), G=(0,1), U=(1,0), data can be fitted as low order polynomials of the six coordinates in the 64-dimensional codon weight space. The work confirms and extends the recent studies by Siemion et al. (1995. BioSystems 36, 231-238) of the conformational parameters. Fundamental patterns in the data such as codon periodicities, and related harmonics and reflection symmetries, are here associated with the structure of the set of basis monomials chosen for fitting. Results are plotted using the Siemion one-step mutation ring scheme, and variants thereof. The connections between the present work, and recent studies of the genetic code structure using dynamical symmetry algebras, are pointed out.
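The labelling scheme translates directly into code: each codon becomes a point in the 6-dimensional weight space, and a codon-indexed property can be fitted with low-order monomials by least squares. In the sketch below the property values are random placeholders, not the conformational parameters of the paper.

```python
# Hedged sketch of the labelling idea: each base maps to two
# coordinates, so a codon is a point in 6-dimensional weight space.
import itertools
import numpy as np

BASE = {"A": (-1, 0), "C": (0, -1), "G": (0, 1), "U": (1, 0)}

def codon_coords(codon):
    return [c for base in codon for c in BASE[base]]

codons = ["".join(p) for p in itertools.product("ACGU", repeat=3)]
X = np.array([codon_coords(c) for c in codons])        # 64 x 6

# Degree-2 monomial basis: 1, x_i, x_i * x_j (i <= j).
cols = [np.ones(len(X))]
cols += [X[:, i] for i in range(6)]
cols += [X[:, i] * X[:, j] for i in range(6) for j in range(i, 6)]
A = np.column_stack(cols)

prop = np.random.rand(64)          # placeholder for a codon property
coef, *_ = np.linalg.lstsq(A, prop, rcond=None)
```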
Quantum autoencoders for efficient compression of quantum data
NASA Astrophysics Data System (ADS)
Romero, Jonathan; Olson, Jonathan P.; Aspuru-Guzik, Alan
2017-12-01
Classical autoencoders are neural networks that can learn efficient low-dimensional representations of data in higher-dimensional space. The task of an autoencoder is, given an input x, to map x to a lower dimensional point y such that x can likely be recovered from y. The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input. Inspired by this idea, we introduce the model of a quantum autoencoder to perform similar tasks on quantum data. The quantum autoencoder is trained to compress a particular data set of quantum states, where a classical compression algorithm cannot be employed. The parameters of the quantum autoencoder are trained using classical optimization algorithms. We show an example of a simple programmable circuit that can be trained as an efficient autoencoder. We apply our model in the context of quantum simulation to compress ground states of the Hubbard model and molecular Hamiltonians.
Li, Kun; Yu, Zhuang
2008-01-01
Urban heat islands are one of the most critical urban environment heat problems. Landsat ETM+ satellite data were used to investigate the land surface temperature and underlying surface indices such as NDVI and NDBI. A comparative study of the urban heat environment at different scales, times and locations was done to verify the heat island characteristics. Since remote sensing technology has limitations for dynamic flow analysis in the study of urban spaces, a CFD simulation was used to validate the improvement of the heat environment in a city by means of wind. CFD technology has its own shortcomings in parameter setting and verification, while RS technology is helpful to remedy this. The city of Wuhan and its climatological condition of being hot in summer and cold in winter were chosen to verify the comparative and combinative application of RS with CFD in studying the urban heat island. PMID:27873893
A Fatigue Measuring Protocol for Wireless Body Area Sensor Networks.
Akram, Sana; Javaid, Nadeem; Ahmad, Ashfaq; Khan, Zahoor Ali; Imran, Muhammad; Guizani, Mohsen; Hayat, Amir; Ilahi, Manzoor
2015-12-01
As players and soldiers perform strenuous exercises and carry out difficult and tiring duties, they are common victims of muscular fatigue. With this in mind, we propose the FAtigue MEasurement (FAME) protocol for soccer players and soldiers using in-vivo sensors in Wireless Body Area Sensor Networks (WBASNs). In FAME, we introduce a composite parameter for fatigue measurement by setting a threshold level for each sensor. Whenever any sensed data value exceeds its threshold level, the player or soldier is declared to be in a state of fatigue. Moreover, we use a vibration pad for the relaxation of fatigued muscles and then utilize the vibrational energy, by means of a vibration detection circuit, to recharge the in-vivo sensors. The induction circuit achieves about 68% link efficiency. Simulation results show better performance of the proposed FAME protocol, in the chosen scenarios, compared to the existing Wireless Soccer Team Monitoring (WSTM) protocol in terms of the selected metrics.
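The composite fatigue test as described reduces to a per-sensor threshold check, sketched below; the sensor names and threshold values are illustrative assumptions.

```python
# Minimal sketch of FAME's fatigue decision: fatigue is declared
# whenever any sensed value exceeds its per-sensor threshold.
from typing import Dict

THRESHOLDS: Dict[str, float] = {
    "emg_rms": 0.8,      # illustrative units and values
    "lactate": 4.0,
    "heart_rate": 180.0,
}

def is_fatigued(readings: Dict[str, float]) -> bool:
    return any(readings[s] > THRESHOLDS[s] for s in THRESHOLDS if s in readings)

print(is_fatigued({"emg_rms": 0.9, "lactate": 2.1}))  # True: EMG over threshold
```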
Nuclear Electric Vehicle Optimization Toolset (NEVOT)
NASA Technical Reports Server (NTRS)
Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Kos, Larry D.; Qualls, A. Lou; Greene, Sherrell
2004-01-01
The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major nuclear electric propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a genetic algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be considered through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
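A generic version of the GA loop described here, with low-fitness designs replaced by crossover and mutation of fitter ones, is sketched below; the design encoding and fitness function are placeholders, not NEVOT's subsystem models.

```python
# Illustrative GA loop of the kind NEVOT is described as using.
import random

def evolve(fitness, n_genes=8, pop_size=30, generations=100, mut_rate=0.1):
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # low-fitness designs eliminated
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_genes)
            child = a[:cut] + b[cut:]              # one-point crossover
            children.append([g + random.gauss(0, mut_rate)
                             if random.random() < mut_rate else g
                             for g in child])
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: peak at all genes = 0.5 (a stand-in for a mission score).
best = evolve(lambda d: -sum((g - 0.5) ** 2 for g in d))
```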
Ground Vibration Generated by a Load Moving Along a Railway Track
NASA Astrophysics Data System (ADS)
SHENG, X.; JONES, C. J. C.; PETYT, M.
1999-11-01
The propagation of vibration generated by a harmonic or a constant load moving along a layered beam resting on the layered half-space is investigated theoretically in this paper. The solution to this problem can be used to study the ground vibration generated by the motion of a train axle load on a railway track. In this application, the ground is modelled as a number of parallel viscoelastic layers overlying an elastic half-space or a rigid foundation. The track, including the rails, rail pad, sleepers and ballast, is modelled as an infinite, layered beam structure. The modal nature of propagation in the ground for a chosen set of ground parameters is discussed and the results of the model are presented showing the characteristics of the vibration generated by a constant load and an oscillatory load at speeds below, near to, and above the lowest ground wave speed.
Optimization of supersonic axisymmetric nozzles with a center body for aerospace propulsion
NASA Astrophysics Data System (ADS)
Davidenko, D. M.; Eude, Y.; Falempin, F.
2011-10-01
This study is aimed at optimization of axisymmetric nozzles with a center body, which are suitable for thrust engines having an annular duct. To determine the flow conditions and nozzle dimensions, the Vinci rocket engine is chosen as a prototype. The nozzle contours are described by 2nd and 3rd order analytical functions and specified by a set of geometrical parameters. A direct optimization method is used to design maximum thrust nozzle contours. During optimization, the flow of multispecies reactive gas is simulated by an Euler code. Several optimized contours have been obtained for the center body diameter ranging from 0.2 to 0.4 m. For these contours, Navier-Stokes (NS) simulations have been performed to take into account viscous effects assuming adiabatic and cooled wall conditions. The paper presents an analysis of factors influencing the nozzle thrust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitelaw, R.W.
1987-01-01
The market research techniques available now to the electric utility industry have evolved over the last thirty years into a set of sophisticated tools that permit complex behavioral analyses that earlier had been impossible. The marketing questions facing the electric utility industry now are commensurately more complex than ever before. This document was undertaken to present the tools and techniques needed to start or improve the usefulness of market research activities within electric utilities. It describes proven planning and management techniques as well as decision criteria for structuring effective market research functions for each utility's particular needs. The monograph establishes the parameters of sound utility market research given trade-offs between highly centralized or decentralized organizations, research focus, involvement in decision making, and personnel and management skills necessary to maximize the effectiveness of the structure chosen.
Research on output feedback control
NASA Technical Reports Server (NTRS)
Calise, A. J.; Kramer, F. S.
1985-01-01
In designing fixed order compensators, an output feedback formulation has been adopted by suitably augmenting the system description to include the compensator states. However, the minimization of the performance index over the range of possible compensator descriptions was impeded due to the nonuniqueness of the compensator transfer function. A controller canonical form of the compensator was chosen to reduce the number of free parameters to its minimal number in the optimization. In the MIMO case, the controller form requires a prespecified set of ascending controllability indices. This constraint on the compensator structure is rather innocuous in relation to the increase in convergence rate of the optimization. Moreover, the controller form is easily relatable to a unique controller transfer function description. This structure of the compensator does not require penalizing the compensator states for a nonzero or coupled solution, a problem that occurs when following a standard output feedback synthesis formulation.
Monitoring methods and predictive models for water status in Jonathan apples.
Trincă, Lucia Carmen; Căpraru, Adina-Mirela; Arotăriţei, Dragoş; Volf, Irina; Chiruţă, Ciprian
2014-02-01
Evaluation of water status in Jonathan apples was performed for 20 days. Loss of moisture content (LMC) was determined by slow drying of whole apples, and moisture content (MC) was determined by oven drying and lyophilisation of apple samples (chunks, crushed, and juice). We developed a non-destructive method to evaluate the LMC and MC of apples using image processing and a multilayer neural network (NN) predictor. We propose a new, simple algorithm that selects texture descriptors from an initial, heuristically chosen set. Both the structure and the weights of the NN are optimised by a genetic algorithm with a variable-length genotype, which led to a high precision of the predictive model (R(2)=0.9534). In our opinion, the development of this non-destructive method for the assessment of LMC and MC (and of other chemical parameters) seems very promising for online inspection of food quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
Modelling of current loads on aquaculture net cages
NASA Astrophysics Data System (ADS)
Kristiansen, Trygve; Faltinsen, Odd M.
2012-10-01
In this paper we propose and discuss a screen type of force model for the viscous hydrodynamic load on nets. The screen model assumes that the net is divided into a number of flat net panels, or screens. It may thus be applied to any kind of net geometry. In this paper we focus on circular net cages for fish farms. The net structure itself is modelled by an existing truss model. The net shape is solved for in a time-stepping procedure that involves solving a linear system of equations for the unknown tensions at each time step. We present comparisons to experiments with circular net cages in steady current, and discuss the sensitivity of the numerical results to a set of chosen parameters. Satisfactory agreement between experimental and numerical prediction of drag and lift as function of the solidity ratio of the net and the current velocity is documented.
Optimization of operator and physical parameters for laser welding of dental materials.
Bertrand, C; le Petitcorps, Y; Albingre, L; Dupuis, V
2004-04-10
Interactions between lasers and materials are very complex phenomena. The success of laser welding procedures on dental metals depends on the operator's control of many parameters. The aims of this study were to evaluate factors relating to the operator's dexterity and the choice of the welding parameters (power, pulse duration, and therefore energy), which are recognized determinants of weld quality. In vitro laboratory study. FeNiCr dental drawn wires were chosen for these experiments because their properties are well known. Different diameters of wire were laser welded, then tested in tension and compared to the control material as extruded, in order to evaluate the quality of the welding. Scanning electron microscopy of the fractured zone and micrograph observations perpendicular and parallel to the wire axis were also conducted in order to analyse the penetration depth and the quality of the microstructure. Additionally, the micro-hardness (Vickers type) was measured in both the welded and the heat-affected zones and then compared to the non-welded alloy. An adequate combination of energy and pulse duration, with the power set in the range of 0.8 to 1 kW, appears to improve the penetration depth of the laser beam and the success of the welding procedure. Operator skill is also an important variable. The variation in laser weld quality in dental FeNiCr wires attributable to operator skill can be minimized by optimization of the physical welding parameters.
Development of a Thermodynamic Model for the Hanford Tank Waste Operations Simulator - 12193
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, Robert; Seniow, Kendra
The Hanford Tank Waste Operations Simulator (HTWOS) is the current tool used by the Hanford Tank Operations Contractor for system planning and assessment of different operational strategies. Activities such as waste retrievals in the Hanford tank farms and washing and leaching of waste in the Waste Treatment and Immobilization Plant (WTP) are currently modeled in HTWOS. To predict phase compositions during these activities, HTWOS currently uses simple wash and leach factors that were developed many years ago. To improve these predictions, a rigorous thermodynamic framework has been developed based on the multi-component Pitzer ion interaction model for use with several important chemical species in Hanford tank waste. These chemical species are those with the greatest impact on high-level waste glass production in the WTP and whose solubility depends on the processing conditions. Starting with Pitzer parameter coefficients and species chemical potential coefficients collated from open literature sources, reconciliation with published experimental data led to a self-consistent set of coefficients known as the HTWOS Pitzer database. Using Gibbs energy minimization with the Pitzer ion interaction equations in Microsoft Excel, a number of successful predictions were made for the solubility of simple mixtures of the chosen species. Currently, this thermodynamic framework is being programmed into HTWOS as the mechanism for determining the solid-liquid phase distributions for the chosen species, replacing their simple wash and leach factors. Starting from a variety of open literature sources, a collection of Pitzer parameters and species chemical potentials, as functions of temperature, was tested for consistency and accuracy by comparison with available experimental thermodynamic data (e.g., osmotic coefficients and solubility). Reconciliation of the initial set of parameter coefficients with the experimental data led to the development of the self-consistent set known as the HTWOS Pitzer database. Using Microsoft Excel to formulate the Gibbs energy minimization method and the multi-component Pitzer ion interaction equations, several predictions of the solubility of solute mixtures at various temperatures were made using the HTWOS Pitzer database coefficients. A listing of the entire HTWOS Pitzer database can be found in RPP-RPT-50703. Currently, work is underway to install the Pitzer ion interaction model in HTWOS as the mechanism for determining the solid-liquid phase distributions of select waste constituents during tank retrievals and subsequent washing and leaching of the waste. Validation of the Pitzer ion interaction model in HTWOS will be performed with analytical laboratory data of actual tank waste. This change in HTWOS is expected to elicit shifts in mission criteria, such as mission end date and quantity of high-level waste glass produced by WTP, as predicted by HTWOS. These improvements to the speciation calculations in HTWOS, however, will establish a better planning basis and facilitate more effective and efficient future operations of the WTP. (authors)
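The core calculation described, Gibbs energy minimization subject to element balance, can be sketched compactly. The version below substitutes ideal-solution activities for the Pitzer ion-interaction activity coefficients and uses invented species data, so it illustrates only the structure of the computation, not the HTWOS chemistry.

```python
# Simplified Gibbs minimization sketch: minimize G(n) = sum n_i * mu_i
# over species amounts n subject to element balance A n = b. Ideal
# mole-fraction activities stand in for the Pitzer model.
import numpy as np
from scipy.optimize import minimize

mu0 = np.array([-10.0, -12.0, -9.0])     # dimensionless standard potentials (dummy)
A = np.array([[1, 0, 1],                 # element-balance matrix:
              [0, 1, 1]])                # rows = elements, cols = species
b = np.array([1.0, 1.0])                 # total moles of each element

def gibbs(n):
    n = np.clip(n, 1e-12, None)          # keep logarithms finite
    x = n / n.sum()                      # ideal mole fractions
    return float(n @ (mu0 + np.log(x)))

res = minimize(gibbs, x0=np.array([0.4, 0.4, 0.2]),
               bounds=[(1e-12, None)] * 3,
               constraints={"type": "eq", "fun": lambda n: A @ n - b},
               method="SLSQP")
print(res.x)                             # equilibrium species amounts
```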
Crashworthiness studies of locomotive wide nose short hood designs
DOT National Transportation Integrated Search
1999-11-01
This paper investigates the parameters that influence the structural response of typical wide nose locomotive short hoods involved in offset collisions. This accident scenario was chosen based upon the railway collision that occurred in Selma, North ...
Fatemi, Mohammad Hossein; Ghorbanzad'e, Mehdi
2009-11-01
Quantitative structure-property relationship models for the prediction of the nematic transition temperature (T(N)) were developed by using multilinear regression analysis and a feedforward artificial neural network (ANN). A collection of 42 thermotropic liquid crystals was chosen as the data set. The data set was divided into three sets: a training set and internal and external test sets. The training and internal test sets were used for ANN model development, and the external test set was used for evaluation of the predictive power of the model. In order to build the models, a set of six descriptors was selected by the best multilinear regression procedure of the CODESSA program. These descriptors were: atomic charge weighted partial negatively charged surface area, relative negatively charged surface area, polarity parameter/square distance, minimum most negative atomic partial charge, molecular volume, and the A component of the moment of inertia, which encode geometrical and electronic characteristics of molecules. These descriptors were used as inputs to the ANN. The optimized ANN model had a 6:6:1 topology. The standard errors in the calculation of T(N) for the training, internal, and external test sets using the ANN model were 1.012, 4.910, and 4.070, respectively. To further evaluate the ANN model, a cross-validation test was performed, which produced the statistic Q(2) = 0.9796 and a standard deviation of 2.67 based on the predicted residual sum of squares. Also, a diversity test was performed to ensure the model's stability and prove its predictive capability. The obtained results reveal the suitability of the ANN for the prediction of T(N) for liquid crystals using molecular structural descriptors.
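The 6:6:1 architecture is straightforward to reproduce with a modern library. The sketch below uses scikit-learn's MLPRegressor in place of the original ANN implementation, with random placeholder data rather than the 42-compound descriptor set.

```python
# Hedged sketch of a 6:6:1 feedforward model: six descriptors in,
# one hidden layer of six units, one output (T(N)).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(42, 6))        # 6 descriptors per liquid crystal (dummy)
y = rng.normal(size=42)             # nematic transition temperatures (dummy)

Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(Xs, y)
print(model.predict(Xs[:3]))
```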
Image Restoration for Fluorescence Planar Imaging with Diffusion Model
Gong, Yuzhu; Li, Yang
2017-01-01
Fluorescence planar imaging (FPI) fails to capture high-resolution images of deep fluorochromes due to photon diffusion. This paper presents an image restoration method to deal with this kind of blurring. The scheme of this method is based on a reconstruction method in fluorescence molecular tomography (FMT) with a diffusion model. A new unknown parameter is defined by introducing the first mean value theorem for definite integrals. A system matrix converting this unknown parameter to the blurry image is constructed from the elements of depth conversion matrices related to a chosen plane, named the focal plane. Results of phantom and mouse experiments show that the proposed method is capable of reducing the blurring of the FPI image caused by photon diffusion when the depth of the focal plane is chosen within a proper interval around the true depth of the fluorochrome. This method will be helpful for estimating the size of deep fluorochromes. PMID:29279843
Optimisation of confinement in a fusion reactor using a nonlinear turbulence model
NASA Astrophysics Data System (ADS)
Highcock, E. G.; Mandell, N. R.; Barnes, M.
2018-04-01
The confinement of heat in the core of a magnetic fusion reactor is optimised using a multidimensional optimisation algorithm. For the first time in such a study, the loss of heat due to turbulence is modelled at every stage using first-principles nonlinear simulations which accurately capture the turbulent cascade and large-scale zonal flows. The simulations utilise a novel approach, with gyrofluid treatment of the small-scale drift waves and gyrokinetic treatment of the large-scale zonal flows. A simple near-circular equilibrium with standard parameters is chosen as the initial condition. The figure of merit, fusion power per unit volume, is calculated, and then two control parameters, the elongation and triangularity of the outer flux surface, are varied, with the algorithm seeking to optimise the chosen figure of merit. A twofold increase in the plasma power per unit volume is achieved by moving to higher elongation and strongly negative triangularity.
Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem
NASA Astrophysics Data System (ADS)
Skakov, E. S.; Malysh, V. N.
2018-03-01
The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the meta-optimization task. Thus, the approach proposed in this work can be called "meta-metaheuristic". A computational experiment demonstrating the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
How to Select the most Relevant Roughness Parameters of a Surface: Methodology Research Strategy
NASA Astrophysics Data System (ADS)
Bobrovskij, I. N.
2018-01-01
This paper lays the foundations of a new methodology intended to resolve the conflict between the huge number of surface texture parameters introduced by new standards and the much smaller number of parameters that can actually be measured in practice, a restriction tied to reducing measurement complexity. At the moment, there is no single accepted assessment of the importance of a parameter. Applying the presented methodology to the surfaces of aerospace-cluster components creates the necessary foundation for a scientific assessment of surface texture parameters and provides material for investigators of the chosen technological procedure. The methods necessary for further work are selected, creating a fundamental basis for developing the assessment of the significance of microgeometry parameters as a scientific direction.
Dynamic analysis of I cross beam section dissimilar plate joined by TIG welding
NASA Astrophysics Data System (ADS)
Sani, M. S. M.; Nazri, N. A.; Rani, M. N. Abdul; Yunus, M. A.
2018-04-01
In this paper, a finite element (FE) joint modelling technique for the prediction of the dynamic properties of sheet metal joined by tungsten inert gas (TIG) welding is presented. Dissimilar flat plates with an I cross-section, made from two series of aluminium alloy (AA7075 and AA6061) and joined by TIG welding, are used. In order to find the optimum model of the TIG-welded dissimilar plate, finite element models with three types of joint modelling were employed in this study: bar elements (CBAR), beam elements, and spot weld element connectors (CWELD). Experimental modal analysis (EMA) was carried out by impact hammer excitation on the dissimilar plates welded by the TIG method. Modal properties of the FE models with joints were compared and validated against the modal testing. The CWELD element was chosen to represent the weld model for the TIG joints because of its accurate prediction of the mode shapes and because it contains an updating parameter for weld modelling, in contrast to the other weld models. Model updating was performed to improve the correlation between EMA and FEA; before proceeding to updating, a sensitivity analysis was done to select the most sensitive updating parameters. After model updating, the average percentage error of the natural frequencies for the CWELD model improved significantly.
NASA Astrophysics Data System (ADS)
El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.
2007-11-01
We examine the effect of varying the temperature points on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing the root-mean-square random drift error as a function of averaging time. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers in the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely the MotionPakII and the Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points, and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA, and that the sensors' stochastic model parameters are temperature dependent. Also, a Kaiser-window FIR low-pass filter is used to investigate the effect of the de-noising stage on the stochastic model. It is shown that the stochastic model is also dependent on the chosen cut-off frequency.
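The non-overlapping Allan variance used for this kind of characterization is a few lines of NumPy, as sketched below; the cluster sizes and the white-noise test signal are illustrative.

```python
# Minimal non-overlapping Allan variance: cluster the static record
# into bins of averaging time tau and take half the mean squared
# difference of successive bin averages.
import numpy as np

def allan_variance(omega, fs, m_list):
    """omega: 1-D rate samples; fs: sample rate; m_list: cluster sizes."""
    taus, avars = [], []
    for m in m_list:
        K = len(omega) // m              # number of clusters
        if K < 2:
            break
        means = omega[: K * m].reshape(K, m).mean(axis=1)
        avars.append(0.5 * np.mean(np.diff(means) ** 2))
        taus.append(m / fs)
    return np.array(taus), np.array(avars)

# White noise shows the expected slope of -1/2 on a log-log plot of
# Allan deviation versus tau.
x = np.random.default_rng(1).normal(size=200_000)
taus, avars = allan_variance(x, fs=100.0, m_list=2 ** np.arange(1, 12))
```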
Challenges of Developing Design Discharge Estimates with Uncertain Data and Information
NASA Astrophysics Data System (ADS)
Senarath, S. U. S.
2016-12-01
This study focuses on design discharge estimates obtained for gauged basins through flood flow frequency analysis. Bulletin 17B (B17B) guidelines are widely used in the USA for developing these design estimates, which are required for many water resources engineering design applications. The guidelines include a set of options for outlier treatment, historical data, and distribution parameter selection. These options are provided as a means of accounting for uncertain data and information, primarily in the flow record. The individual as well as the cumulative effects of these options on design discharge estimates are evaluated in this study by using data from several gauges that are part of the United States Geological Survey's Hydro-Climatic Data Network. The results of this study show that despite the availability of rigorous and detailed guidelines for flood frequency analysis, design discharge estimates can still vary substantially from user to user, based on the data and model parameter selection options chosen by each user. Thus, the findings of this study have strong implications for water resources engineers and other professionals who use B17B-based design discharge estimates in their work.
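The core B17B computation, fitting a log-Pearson Type III distribution to annual peaks by moments of the log-transformed record, can be sketched as follows. The weighted-skew, outlier, and historical-data adjustments whose options this study varies are deliberately omitted, and the flow record is synthetic.

```python
# Hedged sketch of the basic log-Pearson III fit behind B17B-style
# design discharges (station skew only, no B17B adjustments).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
peaks = rng.lognormal(mean=6.0, sigma=0.5, size=60)   # synthetic annual peaks

logq = np.log10(peaks)
skew = stats.skew(logq, bias=False)                   # station skew

# 100-year design flood: non-exceedance probability 0.99.
q100_log = stats.pearson3.ppf(0.99, skew,
                              loc=logq.mean(), scale=logq.std(ddof=1))
print(10 ** q100_log)
```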
Adaptive model reduction for continuous systems via recursive rational interpolation
NASA Technical Reports Server (NTRS)
Lilly, John H.
1994-01-01
A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
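A moving DFT bin of the kind the algorithm monitors can be updated in O(1) per sample with the textbook sliding-DFT recursion, sketched below; this illustrates the mechanism and is not necessarily the authors' exact implementation.

```python
# Sliding (moving) DFT: each new sample updates the k-th DFT
# coefficient of the most recent N samples in constant time, so
# selected frequency points can be monitored continuously.
import numpy as np

def sliding_dft_bin(x, N, k):
    """Yield the k-th N-point DFT coefficient of each length-N window."""
    w = np.exp(2j * np.pi * k / N)
    X = np.sum(x[:N] * np.exp(-2j * np.pi * k * np.arange(N) / N))
    yield X
    for n in range(N, len(x)):
        X = (X + x[n] - x[n - N]) * w    # textbook SDFT recursion
        yield X

x = np.cos(2 * np.pi * 5 * np.arange(256) / 64)   # bin-5 tone for N = 64
bins = list(sliding_dft_bin(x, N=64, k=5))
print(abs(bins[0]), abs(bins[-1]))                 # both ~32 (= N/2)
```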
A Study on the Requirements for Fast Active Turbine Tip Clearance Control Systems
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan A.; Melcher, Kevin J.
2004-01-01
This paper addresses the requirements of a control system for active turbine tip clearance control in a generic commercial turbofan engine through design and analysis. The control objective is to articulate the shroud in the high pressure turbine section in order to maintain a certain clearance set point given several possible engine transient events. The system must also exhibit reasonable robustness to modeling uncertainties and reasonable noise rejection properties. Two actuators were chosen to fulfill such a requirement, both of which possess different levels of technological readiness: electrohydraulic servovalves and piezoelectric stacks. Identification of design constraints, desired actuator parameters, and actuator limitations are addressed in depth; all of which are intimately tied with the hardware and controller design process. Analytical demonstrations of the performance and robustness characteristics of the two axisymmetric LQG clearance control systems are presented. Takeoff simulation results show that both actuators are capable of maintaining the clearance within acceptable bounds and demonstrate robustness to parameter uncertainty. The present model-based control strategy was employed to demonstrate the tradeoff between performance, control effort, and robustness and to implement optimal state estimation in a noisy engine environment with intent to eliminate ad hoc methods for designing reliable control systems.
The Dipole Segment Model for Axisymmetrical Elongated Asteroids
NASA Astrophysics Data System (ADS)
Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong
2018-02-01
Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
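The model's potential has a compact closed form: the classical logarithmic potential of a homogeneous segment plus two point-mass terms. The sketch below evaluates it; the mass and length values are placeholders, not the parameters fitted to (8567) 1996 HW1.

```python
# Hedged sketch of the dipole-segment potential: a homogeneous
# straight segment of mass ms and length L along the x-axis plus
# point masses m1, m2 at its endpoints.
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def potential(r, L=1000.0, ms=1e12, m1=5e11, m2=5e11):
    p1 = np.array([-L / 2, 0.0, 0.0])          # segment endpoints
    p2 = np.array([L / 2, 0.0, 0.0])
    d1 = np.linalg.norm(r - p1)
    d2 = np.linalg.norm(r - p2)
    # Classical closed-form potential of a homogeneous segment.
    U_seg = -G * ms / L * np.log((d1 + d2 + L) / (d1 + d2 - L))
    U_pts = -G * (m1 / d1 + m2 / d2)
    return U_seg + U_pts

print(potential(np.array([0.0, 2000.0, 0.0])))
```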
Quantum Discord Determines the Interferometric Power of Quantum States
NASA Astrophysics Data System (ADS)
Girolami, Davide; Souza, Alexandre M.; Giovannetti, Vittorio; Tufarelli, Tommaso; Filgueiras, Jefferson G.; Sarthour, Roberto S.; Soares-Pinto, Diogo O.; Oliveira, Ivan S.; Adesso, Gerardo
2014-05-01
Quantum metrology exploits quantum mechanical laws to improve the precision in estimating technologically relevant parameters such as phase, frequency, or magnetic fields. Probe states are usually tailored to the particular dynamics whose parameters are being estimated. Here we consider a novel framework where quantum estimation is performed in an interferometric configuration, using bipartite probe states prepared when only the spectrum of the generating Hamiltonian is known. We introduce a figure of merit for the scheme, given by the worst-case precision over all suitable Hamiltonians, and prove that it amounts exactly to a computable measure of discord-type quantum correlations for the input probe. We complement our theoretical results with a metrology experiment, realized in a highly controllable room-temperature nuclear magnetic resonance setup, which provides a proof-of-concept demonstration for the usefulness of discord in sensing applications. Discordant probes are shown to guarantee a nonzero phase sensitivity for all the chosen generating Hamiltonians, while classically correlated probes are unable to accomplish the estimation in a worst-case setting. This work establishes a rigorous and direct operational interpretation for general quantum correlations, shedding light on their potential for quantum technology.
Performance assessment of solid state actuators through a common procedure and comparison criteria
NASA Astrophysics Data System (ADS)
Reithler, Livier; Guedra-Degeorges, Didier
1998-07-01
The design of systems based on smart structure technologies for active shape and vibration control and high-precision positioning requires a good knowledge of the behavior of the active materials (electrostrictive and piezoelectric ceramics and polymers, magnetostrictive and shape memory alloys...) and of commercially available actuators. Extensive theoretical studies have been made of the behavior of active materials during the past decades, but there are only a few experimental comparisons between different kinds of commercially available actuators. The purpose of this study is to identify the pertinent parameters for the design of such systems, to set up a common static test procedure for all types of actuators, and to define comparison criteria in terms of output force and displacement, mechanical and electrical energy, and mass and dimensions. After defining the pertinent parameters of the characterization and describing the resulting testing procedure, test results are presented for different types of actuators based on piezoceramics and magnetostrictive alloys. The performances of each actuator are compared through both the test results and the announced characteristics; to perform this comparison, absolute and relative criteria are chosen with aeronautical and space applications in mind.
To image analysis in computed tomography
NASA Astrophysics Data System (ADS)
Chukalina, Marina; Nikolaev, Dmitry; Ingacheva, Anastasia; Buzmakov, Alexey; Yakimchuk, Ivan; Asadchikov, Victor
2017-03-01
The presence of errors in a tomographic image may lead to misdiagnosis when computed tomography (CT) is used in medicine, or to wrong decisions about the parameters of technological processes when CT is used in industrial applications. Two main sources produce these errors. First, errors arise at the measurement step, e.g. from incorrect calibration and estimation of the geometric parameters of the set-up. The second source lies in the nature of the tomographic reconstruction step: at this stage a mathematical model for calculating the projection data is created, and the chosen optimization and regularization methods, along with their numerical implementations, introduce their own specific errors. Nowadays, many research teams try to analyze these errors and establish the relations between the error sources. In this paper, we do not analyze the nature of the final error, but present a new approach for calculating its distribution in the reconstructed volume. We hope that visualization of the error distribution will allow experts to clarify the medical report impression or expert summary they give after analyzing CT results. To illustrate the efficiency of the proposed approach we present both simulation and real data processing results.
Liang, Jiafeng; Lin, Huiyan; Xiang, Jing; Wu, Hao; Li, Xu; Liang, Hongyu; Zheng, Xue
2015-04-01
Existing literature on the mini-ultimatum game indicates that counterfactual comparison between chosen and unchosen alternatives is of great importance for individuals' fairness considerations. However, it is still unclear how counterfactual comparison influences the electrophysiological responses to unfair chosen offers. Using the event-related potential (ERP) technique, the current study explored this issue with a modified version of the mini-ultimatum game in which a fixed set of two alternatives (unfair offer vs. fair alternative, unfair offer vs. hyperfair alternative, unfair offer vs. hyperunfair alternative) was presented before the chosen offer. The behavioral results showed that participants were more likely to accept unfair chosen offers when the unchosen alternative was hyperunfair than when it was fair or hyperfair. The ERP results showed that the feedback-related negativity (FRN) elicited by unfair chosen offers was insensitive to the type of unchosen alternative when correcting for possible overlap with other components. In contrast, unfair chosen offers elicited larger P300 amplitudes when the unchosen alternative was hyperunfair than when it was fair or hyperfair. These findings suggest that counterfactual comparison may take effect at later stages of fairness consideration, as reflected by the P300. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Time-varying parameter models for catchments with land use change: the importance of model structure
NASA Astrophysics Data System (ADS)
Pathiraja, Sahani; Anghileri, Daniela; Burlando, Paolo; Sharma, Ashish; Marshall, Lucy; Moradkhani, Hamid
2018-05-01
Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km2) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.
2017-01-01
This paper describes FORTRAN program dParFit, which performs least-squares fits of diatomic molecule spectroscopic data involving one or more electronic states and one or more isotopologues, to parameterized expressions for the level energies. The data may consist of any combination of microwave, infrared or electronic vibrotational bands, fluorescence series or binding energies (from photo-association spectroscopy). The level energies for each electronic state may be described by one of: (i) band constants {Gv, Bv, Dv, …} for each vibrational level, (ii) generalized Dunham expansions, (iii) pure near-dissociation expansions (NDEs), (iv) mixed Dunham/NDE expressions, or (v) individual term values for each distinct level of each isotopologue. Different representations may be used for different electronic states and/or for different types of constants in a given fit (e.g., Gv and Bv may be represented one way and centrifugal distortion constants another). The effect of Λ-doubling or 2Σ splittings may be represented either by band constants (qvB or γvB, qvD or γvD, etc.) for each vibrational level of each isotopologue, or by using power series expansions in (v + 1/2) to represent those constants. Fits to Dunham or NDE expressions automatically incorporate normal first-order semiclassical mass scaling to allow combined analyses of multi-isotopologue data. In addition, dParFit may fit to determine atomic-mass-dependent terms required to account for breakdown of the Born-Oppenheimer and first-order semiclassical approximations. In any of these types of fits, one or more subsets of the parameters for one or more of the electronic states may be held fixed while a limited parameter set is varied. The program can also use a set of read-in constants to make predictions and calculate deviations [ycalc - yobs] for any chosen input data set, or to generate predictions of arbitrary data sets.
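As a hedged illustration of representation (ii) above, a Dunham expansion writes each level energy as a double power series in (v + 1/2) and J(J + 1); the coefficients below are rough, HCl-like values chosen purely for illustration, not output from dParFit:

    import numpy as np

    def dunham_energy(v, J, Y):
        """E(v, J) = sum_{l,m} Y[l, m] * (v + 1/2)**l * (J*(J+1))**m,
        with Y a 2-D array of Dunham coefficients in cm^-1."""
        E = 0.0
        for l in range(Y.shape[0]):
            for m in range(Y.shape[1]):
                E += Y[l, m] * (v + 0.5) ** l * (J * (J + 1)) ** m
        return E

    # Rows l = 0..2, columns m = 0..1: Y01 ~ Be, Y10 ~ we, Y11 ~ -alpha_e,
    # Y20 ~ -we*xe (rough HCl-like magnitudes, illustrative only).
    Y = np.array([[0.0, 10.59],
                  [2990.9, -0.31],
                  [-52.8, 0.0]])
    print(dunham_energy(v=0, J=1, Y=Y))  # energy of the (v=0, J=1) level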
NASA Astrophysics Data System (ADS)
Cho, Hyun-chong; Hadjiiski, Lubomir; Sahiner, Berkman; Chan, Heang-Ping; Paramagul, Chintana; Helvie, Mark; Nees, Alexis V.
2012-03-01
We designed a Content-Based Image Retrieval (CBIR) Computer-Aided Diagnosis (CADx) system to assist radiologists in characterizing masses on ultrasound images. The CADx system retrieves masses that are similar to a query mass from a reference library based on computer-extracted features that describe texture, width-to-height ratio, and posterior shadowing of a mass. Retrieval is performed with k nearest neighbor (k-NN) method using Euclidean distance similarity measure and Rocchio relevance feedback algorithm (RRF). In this study, we evaluated the similarity between the query and the retrieved masses with relevance feedback using our interactive CBIR CADx system. The similarity assessment and feedback were provided by experienced radiologists' visual judgment. For training the RRF parameters, similarities of 1891 image pairs obtained from 62 masses were rated by 3 MQSA radiologists using a 9-point scale (9=most similar). A leave-one-out method was used in training. For each query mass, 5 most similar masses were retrieved from the reference library using radiologists' similarity ratings, which were then used by RRF to retrieve another 5 masses for the same query. The best RRF parameters were chosen based on three simulated observer experiments, each of which used one of the radiologists' ratings for retrieval and relevance feedback. For testing, 100 independent query masses on 100 images and 121 reference masses on 230 images were collected. Three radiologists rated the similarity between the query and the computer-retrieved masses. Average similarity ratings without and with RRF were 5.39 and 5.64 on the training set and 5.78 and 6.02 on the test set, respectively. The average Az values without and with RRF were 0.86+/-0.03 and 0.87+/-0.03 on the training set and 0.91+/-0.03 and 0.90+/-0.03 on the test set, respectively. This study demonstrated that RRF improved the similarity of the retrieved masses.
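A minimal sketch of the two retrieval ingredients named above, Euclidean k-NN retrieval and a Rocchio-style query update, is given below; the feature vectors and the alpha/beta/gamma weights are illustrative assumptions, since the trained parameter values are not reproduced here:

    import numpy as np

    def knn_retrieve(query, library, k=5):
        """Return indices of the k library masses closest to the query
        in Euclidean feature distance."""
        d = np.linalg.norm(library - query, axis=1)
        return np.argsort(d)[:k]

    def rocchio_update(query, relevant, nonrelevant,
                       alpha=1.0, beta=0.75, gamma=0.25):
        """Classic Rocchio update: move the query feature vector toward the
        centroid of retrievals judged relevant and away from those judged
        non-relevant."""
        q = alpha * query
        if len(relevant):
            q = q + beta * np.mean(relevant, axis=0)
        if len(nonrelevant):
            q = q - gamma * np.mean(nonrelevant, axis=0)
        return q

    rng = np.random.default_rng(0)
    library = rng.normal(size=(121, 3))          # 121 reference masses
    query = rng.normal(size=3)
    first5 = knn_retrieve(query, library)
    q2 = rocchio_update(query, library[first5[:3]], library[first5[3:]])
    print(knn_retrieve(q2, library))             # second-round retrieval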
Analyses of scattering characteristics of chosen anthropogenic aerosols
NASA Astrophysics Data System (ADS)
Kaszczuk, Miroslawa; Mierczyk, Zygmunt; Muzal, Michal
2008-10-01
In this work, analyses of the scattering profiles of chosen anthropogenic aerosols were made for two wavelengths (λ1 = 1064 nm and λ2 = 532 nm). Three different pyrotechnic mixtures (DM11, M2, M16) were taken as examples of anthropogenic aerosol. The main parameters of the smoke particles were first analyzed and described, with particular attention to particle shape and size. Particle shape was analyzed on the basis of SEM pictures, and particle size was measured. The share of particles in each fixed size fraction was analyzed, and the parameters of smoke particles of characteristic sizes, together with the function describing the aerosol size distribution (ASD), were determined. Analyses of the scattering profiles were carried out using models of scattering by both spherical and nonspherical particles. For spherical particles the Rayleigh-Mie model was used; for nonspherical particles a spheroid model was applied first, followed by the Rayleigh-Mie one. For each characteristic particle, four parameters were calculated (effective scattering cross section σSCA, effective backscattering cross section σBSCA, scattering efficiency QSCA, backscattering efficiency QBSCA), together with the backscattering coefficient β for the whole particle population. The obtained results were compared with the same parameters calculated for a natural aerosol (cirrus cloud).
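For reference, the quoted efficiencies and cross sections are conventionally related through the geometric cross section of a particle of radius a, and the backscattering coefficient sums per-particle contributions over the size distribution; in standard notation (an assumption, since the paper's exact definitions are not reproduced here):

    Q_{SCA} = \frac{\sigma_{SCA}}{\pi a^2}, \qquad
    Q_{BSCA} = \frac{\sigma_{BSCA}}{\pi a^2}, \qquad
    \beta = \sum_i N_i \, \sigma_{BSCA,i},

where N_i is the number concentration of particles in size fraction i.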
NASA Astrophysics Data System (ADS)
Ramadhani, T.; Hertono, G. F.; Handari, B. D.
2017-07-01
The Multiple Traveling Salesman Problem (MTSP) is the extension of the Traveling Salesman Problem (TSP) in which the shortest routes of m salesmen, all of which start and finish in a single city (depot), are determined. If there is more than one depot and salesmen start from and return to the same depot, the problem is called the Fixed Destination Multi-depot Multiple Traveling Salesman Problem (MMTSP). In this paper, the MMTSP is solved using the Ant Colony Optimization (ACO) algorithm. ACO is a metaheuristic optimization algorithm derived from the behavior of ants finding the shortest route(s) from the anthill to a food source. In solving the MMTSP, the algorithm is studied with respect to different chosen cities as depots and to three non-randomly chosen MMTSP parameters: m, K, and L, which represent the number of salesmen, the fewest cities that must be visited by a salesman, and the most cities that can be visited by a salesman, respectively. The implementation is tested on four datasets from TSPLIB. The results show that the chosen depot cities and the three MMTSP parameters, of which m is the most important, affect the solution.
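A hedged sketch of the core ACO transition rule referred to above (the parameter values and names are illustrative, not those used in the paper):

    import numpy as np

    def next_city(current, unvisited, tau, eta, alpha=1.0, beta=2.0,
                  rng=np.random.default_rng()):
        """An ant at `current` picks city j with probability proportional to
        tau[current, j]**alpha * eta[current, j]**beta, where tau is the
        pheromone matrix and eta = 1/distance is the heuristic visibility."""
        u = np.fromiter(unvisited, dtype=int)
        w = tau[current, u] ** alpha * eta[current, u] ** beta
        return rng.choice(u, p=w / w.sum())

    # Toy instance: 5 cities, uniform pheromone, random distances.
    rng = np.random.default_rng(1)
    dist = rng.uniform(1, 10, size=(5, 5))
    eta = 1.0 / dist
    tau = np.ones((5, 5))
    print(next_city(0, {1, 2, 3, 4}, tau, eta))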
The Art and Science of Climate Model Tuning
Hourdin, Frederic; Mauritsen, Thorsten; Gettelman, Andrew; ...
2017-03-31
The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually called tuning in the climate modeling community. In climate models, the variety and complexity of physical processes involved, and their interplay through a wide range of spatial and temporal scales, must be summarized in a series of approximate submodels. Most submodels depend on uncertain parameters. Tuning consists of adjusting the values of these parameters to bring the solution as a whole into line with aspects of the observed climate. Tuning is an essential aspect of climate modeling with its own scientific issues, which is probably not advertised enough outside the community of model developers. Optimization of climate models raises important questions about whether tuning methods a priori constrain the model results in unintended ways that would affect our confidence in climate projections. Here, we present the definition and rationale behind model tuning, review specific methodological aspects, and survey the diversity of tuning approaches used in current climate models. We also discuss the challenges and opportunities in applying so-called objective methods in climate model tuning, and how tuning methodologies may affect fundamental results of climate models, such as climate sensitivity. The article concludes with a series of recommendations to make the process of climate model tuning more transparent.
Edenharter, Günther M; Gartner, Daniel; Pförringer, Dominik
2017-06-01
Increasing costs of material resources challenge hospitals to stay profitable. Particularly in anesthesia departments and intensive care units, bronchoscopes are used for various indications. Inefficient management of single- and multiple-use systems can substantially influence a hospital's material costs. Using mathematical modeling, we developed a strategic decision support tool to determine the optimum mix of disposable and reusable bronchoscopy devices in the setting of an intensive care unit. A mathematical model with the objective of minimizing costs subject to demand constraints for bronchoscopy devices was formulated. The stochastic model decides whether single-use, multi-use, or a strategically chosen mix of both device types should be used. A decision support tool was developed in which parameters for uncertain demand, such as mean, standard deviation, and a reliability parameter, can be inserted. Furthermore, reprocessing costs per procedure and procurement and maintenance costs for devices can be parameterized. Our experiments show for which demand patterns and reliability measures it is efficient to use only reusable or only disposable devices, and under which circumstances a combination of both device types is beneficial. To determine the optimum mix of single-use and reusable bronchoscopy devices effectively and efficiently, managers can enter their hospital-specific parameters, such as demand and prices, into the decision support tool. The software can be downloaded at: https://github.com/drdanielgartner/bronchomix/.
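A minimal sketch of the kind of cost trade-off such a tool evaluates, assuming normally distributed demand and illustrative prices (a simplified stand-in, not the paper's stochastic model):

    import numpy as np

    def expected_cost(n_reusable, mu, sigma, fixed, reproc, disposable,
                      n_samples=100_000, seed=0):
        """Monte Carlo per-period cost of owning n_reusable multi-use scopes:
        amortized procurement/maintenance per device, reprocessing cost per
        reusable procedure, and single-use devices bought to cover overflow."""
        rng = np.random.default_rng(seed)
        demand = np.maximum(rng.normal(mu, sigma, n_samples), 0).round()
        served = np.minimum(demand, n_reusable)
        overflow = demand - served
        return (n_reusable * fixed + reproc * served.mean()
                + disposable * overflow.mean())

    # Illustrative numbers: mean demand 6 procedures/period, sd 2.
    costs = {n: expected_cost(n, mu=6, sigma=2, fixed=40, reproc=25,
                              disposable=180) for n in range(15)}
    best = min(costs, key=costs.get)
    print(best, round(costs[best], 1))   # cheapest reusable/disposable mix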
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated-function-expansion-based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior, based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space RM) or infinite-dimensional, as in the function space CM[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially, rather than exponentially, with the number of variables. This rests on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs of most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions are used, and the results are validated against direct Monte Carlo simulations.
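For context, the HDMR hierarchy referred to above expands a model output as a sum of component functions of increasing interaction order, usually truncated at second order for high-dimensional systems:

    f(\mathbf{x}) = f_0 + \sum_{i} f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j)
                  + \cdots + f_{12\ldots M}(x_1, \ldots, x_M),

where f_0 is the mean response and the low-order terms capture the dominant cooperative effects among the M input variables.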
NASA Technical Reports Server (NTRS)
Kavaya, Michael J.
2008-01-01
Over 20 years of investigation by NASA and NOAA scientists and Doppler lidar technologists into a global wind profiling mission from Earth orbit have led to the current favored concept of an instrument with both coherent- and direct-detection pulsed Doppler lidars (i.e., a hybrid Doppler lidar) and a step-stare beam scanning approach covering several azimuth angles with a fixed nadir angle. The nominal lidar wavelengths are 2 microns for coherent detection and 0.355 microns for direct detection. The two agencies have also generated two sets of sophisticated wind measurement requirements for a space mission: science demonstration requirements and operational requirements. The requirements contain the necessary details to permit mission design and optimization by lidar technologists. Simulations have been developed that connect the science requirements to the wind measurement requirements, and that connect the wind measurement requirements to the Doppler lidar parameters. The simulations also permit trade studies within the multi-parameter space. These tools, combined with knowledge of the state of Doppler lidar technology, have been used to conduct space instrument and mission design activities to validate the feasibility of the chosen mission and lidar parameters. Recently, the NRC Earth Science Decadal Survey recommended the wind mission to NASA as one of 15 recommended missions. A full description of the wind measurement product from these notional missions and the possible trades available are presented in this paper.
Eisenstecken, Daniela; Panarese, Alessia; Robatscher, Peter; Huck, Christian W; Zanella, Angelo; Oberhuber, Michael
2015-07-24
The potential of near infrared spectroscopy (NIRS) in the wavelength range of 1000-2500 nm for predicting quality parameters such as total soluble solids (TSS), acidity (TA), firmness, and individual sugars (glucose, fructose, sucrose, and xylose) for two cultivars of apples ("Braeburn" and "Cripps Pink") was studied during the pre- and post-storage periods. Simultaneously, a qualitative investigation on the capability of NIRS to discriminate varieties, harvest dates, storage periods and fruit inhomogeneity was carried out. In order to generate a sample set with high variability within the most relevant apple quality traits, three different harvest time points in combination with five different storage periods were chosen, and the evolution of important quality parameters was followed both with NIRS and wet chemical methods. By applying a principal component analysis (PCA) a differentiation between the two cultivars, freshly harvested vs. long-term stored apples and, notably, between the sun-exposed vs. shaded side of apples could be found. For the determination of quality parameters effective prediction models for titratable acid (TA) and individual sugars such as fructose, glucose and sucrose by using partial least square (PLS) regression have been developed. Our results complement earlier reports, highlighting the versatility of NIRS as a fast, non-invasive method for quantitative and qualitative studies on apples.
Linear Transceiver Design for Interference Alignment: Complexity and Computation
2010-07-01
… restriction on the choice of beamforming vector of node b. Thus, for any fixed transmit node b in H, there are multiple restriction sets, each … signal space can be chosen. The receive nodes in H can achieve interference alignment if and only if these restricted sets of one-dimensional signal … total number of restriction sets is at most linear in the number of edges in H and each restriction set contains at most two one-dimensional …
Theory of a Traveling Wave Feed for a Planar Slot Array Antenna
NASA Technical Reports Server (NTRS)
Rengarajan, Sembiam
2012-01-01
Planar arrays of waveguide-fed slots have been employed in many radar and remote sensing applications. Such arrays are designed in the standing wave configuration because of high efficiency. Traveling wave arrays can produce greater bandwidth at the expense of efficiency due to power loss in the load or loads. Traveling wave planar slot arrays may be designed with a long feed waveguide consisting of centered-inclined coupling slots. The feed waveguide is terminated in a matched load, and the element spacing in the feed waveguide is chosen to produce a beam squinted from the broadside. The traveling wave planar slot array consists of a long feed waveguide containing resonant centered-inclined coupling slots in the broad wall, coupling power into an array of stacked radiating waveguides orthogonal to it. The radiating waveguides consist of longitudinal offset radiating slots in a standing wave configuration. For the traveling wave feed of a planar slot array, one has to design the tilt angle and length of each coupling slot such that the amplitude and phase of excitation of each radiating waveguide are close to the desired values. The coupling slot spacing is chosen for an appropriate beam squint. Scattering matrix parameters of resonant coupling slots are used in the design process to produce appropriate excitations of radiating waveguides, with constraints placed only on amplitudes. Since the radiating slots in each radiating waveguide are designed to produce a certain total admittance, the scattering (S) matrix of each coupling slot is reduced to a 2x2 matrix. Elements of each 2x2 S-matrix and the amount of coupling into the corresponding radiating waveguide are expressed in terms of the element S11. S matrices are converted into transmission (T) matrices, and the T matrices are multiplied to cascade the coupling slots and waveguide sections, starting from the load end and proceeding towards the source. While the use of non-resonant coupling slots may provide an additional degree of freedom in the design, resonant coupling slots simplify the design process. The amplitude of the wave going to the load is set at unity. The S11 parameter, r, of the coupling slot closest to the load is assigned an arbitrary value. A larger value of r will reduce the power dissipated in the load while increasing the reflection coefficient at the input port. It is now possible to obtain the excitation of the radiating waveguide closest to the load and the coefficients of the wave incident and reflected at the input port of this coupling slot. The next coupling slot parameter, r, is chosen to realize the excitation of that radiating waveguide. One continues this process, moving towards the source, until the parameter r, and hence the S11 element, is known for each coupling slot. The goal is to produce the desired array aperture distribution in the feed direction. From an interpolation of the computed moment method data for the slot parameters, all the coupling slot tilt angles and lengths are obtained. From the excitations of the radiating waveguides computed from the coupling values, radiating slot parameters may be obtained so as to attain the desired total normalized slot admittances. This process yields the radiating slot parameters, offsets, and lengths. The design is repeated by choosing different values of r for the last coupling slot until the percentage of power dissipated in the load and the input reflection coefficient values are satisfactory.
Numerical results computed for the radiation pattern, the tilt angles and lengths of coupling slots, and excitation phases of the radiating waveguides, are presented for an array with uniform amplitude excitation. The design process has been validated using computer simulations. This design procedure is valid for non-uniform amplitude excitations as well.
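The cascading step described in this abstract can be sketched compactly. The S-to-T conversion below uses one common two-port convention, [b1, a1] = T [a2, b2]; the convention and function names are assumptions, since the paper's port ordering is not reproduced here:

    import numpy as np

    def s_to_t(S):
        """2x2 S-matrix -> transfer (T) matrix with [b1, a1] = T @ [a2, b2],
        derived from b1 = S11*a1 + S12*a2 and b2 = S21*a1 + S22*a2."""
        S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
        return np.array([[S12 - S11 * S22 / S21, S11 / S21],
                         [-S22 / S21,            1.0 / S21]])

    def cascade(s_matrices):
        """Multiply T matrices in order along the feed; the chain may be
        accumulated from either end as long as the order is consistent
        (the paper works from the load toward the source)."""
        T = np.eye(2, dtype=complex)
        for S in s_matrices:
            T = T @ s_to_t(S)
        return T

    # Two illustrative, nearly matched coupler sections:
    S = np.array([[0.1, 0.9], [0.9, 0.1]], dtype=complex)
    print(cascade([S, S]))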
NASA Astrophysics Data System (ADS)
Cipriano, F. R.; Lagmay, A. M. A.; Horritt, M.; Mendoza, J.; Sabio, G.; Punay, K. N.; Taniza, H. J.; Uichanco, C.
2015-12-01
Widespread flooding is a major problem in the Philippines. The country experiences heavy rainfall throughout the year, and several areas are prone to flood hazards because of its unique topography. Human casualties and destruction of infrastructure are just some of the damages caused by flooding, and the Philippine government has undertaken various efforts to mitigate these hazards. One of the solutions was to create flood hazard maps of different floodplains and use them to predict the possible catastrophic results of different rain scenarios. To produce these maps with accurate output, different input parameters are needed, one of which is the calculation of hydrological components from topographical data. This paper presents how a calibrated lag time (TL) equation was obtained using measurable catchment parameters. Lag time is an essential input in flood mapping and is defined as the duration between the peak rainfall and the peak discharge of the watershed. The lag time equation involves three measurable parameters, namely, watershed length (L), maximum potential retention (S) derived from the curve number, and watershed slope (Y), all of which were available from RADARSAT Digital Elevation Models (DEMs). This approach was based on a similar method developed by CH2M Hill and Horritt for Taiwan, which has a set of meteorological and hydrological parameters similar to the Philippines'. Rainfall data from fourteen water level sensors covering 67 storms from all the regions in the country were used to estimate the actual lag time. These sensors were chosen using a screening process that considers the distance of the sensors from the sea, the availability of recorded data, and the catchment size. The actual lag time values were plotted against the values obtained from the Natural Resource Conservation Management handbook lag time equation, and regression analysis was used to obtain the final calibrated equation for calculating lag time specifically for rivers in the Philippine setting. The calculated lag time values can then be used as a parameter for modeling different flood scenarios in the country.
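The uncalibrated starting point can be sketched directly; the formula below is the standard NRCS (SCS) lag equation in its customary units (L in feet, S in inches, Y in percent, lag in hours), and `scale` is a hypothetical stand-in for the regression calibration described in the paper:

    def nrcs_lag_hours(L_ft, CN, Y_pct, scale=1.0):
        """T_lag = L**0.8 * (S + 1)**0.7 / (1900 * Y**0.5), with
        S = 1000/CN - 10 the maximum potential retention (inches)."""
        S = 1000.0 / CN - 10.0
        return scale * L_ft ** 0.8 * (S + 1.0) ** 0.7 / (1900.0 * Y_pct ** 0.5)

    # Illustrative catchment: 20 km hydraulic length, CN 75, 3% slope.
    print(nrcs_lag_hours(L_ft=20000 * 3.281, CN=75, Y_pct=3.0))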
NASA Technical Reports Server (NTRS)
Graf, Wiley E.
1991-01-01
A mixed formulation is chosen to overcome deficiencies of the standard displacement-based shell model. Element development is traced from the incremental variational principle through to the final set of equilibrium equations. Particular attention is paid to developing specific guidelines for selecting the optimal set of strain parameters. A discussion of constraint index concepts and their capability to predict locking is included. Performance characteristics of the elements are assessed in a wide variety of linear and nonlinear plate/shell problems. Despite limiting the study to geometrically nonlinear analysis, a substantial amount of additional insight concerning the finite element modeling of thin plate/shell structures is provided. For example, in nonlinear analysis, given the same mesh and load step size, mixed elements converge in fewer iterations than equivalent displacement-based models. It is also demonstrated that, in mixed formulations, lower-order elements are preferred. Additionally, meshes used to obtain accurate linear solutions do not necessarily converge to the correct nonlinear solution. Finally, a new form of locking was identified, associated with employing elements designed for biaxial bending in uniaxial bending applications.
COACH: profile-profile alignment of protein families using hidden Markov models.
Edgar, Robert C; Sjölander, Kimmen
2004-05-22
Alignments of two multiple-sequence alignments, or statistical models of such alignments (profiles), have important applications in computational biology. The increased amount of information in a profile versus a single sequence can lead to more accurate alignments and more sensitive homolog detection in database searches. Several profile-profile alignment methods have been proposed and have been shown to improve sensitivity and alignment quality compared with sequence-sequence methods (such as BLAST) and profile-sequence methods (e.g. PSI-BLAST). Here we present a new approach to profile-profile alignment we call Comparison of Alignments by Constructing Hidden Markov Models (HMMs) (COACH). COACH aligns two multiple sequence alignments by constructing a profile HMM from one alignment and aligning the other to that HMM. We compare the alignment accuracy of COACH with two recently published methods: Yona and Levitt's prof_sim and Sadreyev and Grishin's COMPASS. On two sets of reference alignments selected from the FSSP database, we find that COACH is able, on average, to produce alignments giving the best coverage or the fewest errors, depending on the chosen parameter settings. COACH is freely available from www.drive5.com/lobster
Intersubjective decision-making for computer-aided forging technology design
NASA Astrophysics Data System (ADS)
Kanyukov, S. I.; Konovalov, A. V.; Muizemnek, O. Yu.
2017-12-01
We propose a concept of intersubjective decision-making for problems of open-die forging technology design. The intersubjective decisions are chosen from a set of feasible decisions using the fundamentals of decision-making theory in a fuzzy environment, according to the Bellman-Zadeh scheme. We consider the formalization of subjective goals and the choice of membership functions for the decisions depending on those goals, and we study the combination of these functions into an intersubjective membership function. This function is constructed for the resulting decision, which is chosen from the set of feasible decisions. The choice of the final intersubjective decision is discussed. All the issues are exemplified by a specific technological problem. The proposed concept of solving technological problems under fuzzy goals allows one to choose the most efficient decisions, corresponding to the stated goals, from a set of feasible ones, and it reduces human participation in automated design. The concept can be used to develop algorithms and design programs for forging numerous types of forged parts.
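A minimal sketch of the Bellman-Zadeh max-min rule invoked above (the candidate names and membership values are illustrative):

    import numpy as np

    def bellman_zadeh_choice(candidates, memberships):
        """The intersubjective membership of each feasible decision is the
        minimum of its memberships across all fuzzy goals (intersection);
        the chosen decision maximizes that minimum."""
        mu = np.asarray(memberships)     # shape: (n_goals, n_candidates)
        mu_decision = mu.min(axis=0)
        return candidates[int(np.argmax(mu_decision))], mu_decision

    # Three feasible forging schedules scored against two subjective goals.
    options = ["schedule A", "schedule B", "schedule C"]
    print(bellman_zadeh_choice(options, [[0.9, 0.6, 0.4],
                                         [0.3, 0.7, 0.8]]))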
Bufacchi, Antonella; Nardiello, Barbara; Capparella, Roberto; Begnozzi, Luisa
2013-07-04
Retrospective analysis of 3D clinical treatment plans to investigate the qualitative, possible clinical consequences of the use of PBC versus AAA. The 3D dose distributions of 80 treatment plans at four different tumour sites, produced using the PBC algorithm, were recalculated using AAA with the same number of monitor units provided by PBC and clinically delivered to each patient; the consequences of the difference for the dose-effect relations for normal tissue injury were studied by comparing different NTCP models/parameters extracted from a review of published studies. In this study the AAA dose calculation is considered the benchmark. The paired Student t-test was used for statistical comparison of all results obtained from the two algorithms. In the prostate plans, AAA predicted a lower NTCP value (NTCPAAA) for the risk of late rectal bleeding for each of the seven combinations of NTCP parameters; the maximum mean decrease was 2.2%. In the head-and-neck treatments, each combination of parameters used for the risk of xerostomia from irradiation of the parotid glands yielded a lower NTCPAAA, which varied from 12.8% (sd=3.0%) to 57.5% (sd=4.0%), while the NTCPPBC ranged from 15.2% (sd=2.7%) to 63.8% (sd=3.8%), according to the combination of parameters used; the differences were statistically significant. NTCPAAA for the risk of radiation pneumonitis in the lung treatments was also found to be lower than NTCPPBC for each of the eight sets of NTCP parameters; the maximum mean decrease was 4.5%. A mean increase of 4.3% was found when NTCPAAA was calculated with parameters evaluated from dose distributions calculated by a convolution-superposition (CS) algorithm. A markedly different pattern was observed for the risk of pneumonitis following breast treatments: AAA predicted a higher NTCP value. The mean NTCPAAA varied from 0.2% (sd = 0.1%) to 2.1% (sd = 0.3%), while the mean NTCPPBC varied from 0.1% (sd = 0.0%) to 1.8% (sd = 0.2%), depending on the chosen parameter set. When the original PBC treatment plans were recalculated using AAA with the same number of monitor units provided by PBC, NTCPAAA was lower than NTCPPBC, except for the breast treatments. The NTCP is strongly affected by the wide-ranging values of the radiobiological parameters.
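For reference, the LKB model commonly used for lung NTCP (one of the model families referred to above) evaluates a normal-distribution integral of a generalized equivalent uniform dose; in standard notation (the parameter symbols are assumed, not taken from the paper):

    NTCP = \Phi\!\left( \frac{gEUD - TD_{50}}{m \, TD_{50}} \right), \qquad
    gEUD = \left( \sum_i v_i D_i^{1/n} \right)^{n},

where \Phi is the standard normal CDF, v_i and D_i are the dose-volume histogram bins, and n, m, TD_{50} are the fitted parameter sets whose choice drives the spread in NTCP values reported above.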
On Universal Elements, and Conversion Procedures to and from Position and Velocity
1989-07-01
Abstract. An element set is advocated that is familiar (in traditional terms), and yet applicable to all types of orbit without loss of accuracy. … We seek to define a set of universally applicable elements for motion in unperturbed orbits about a centre … If a particular element set can be chosen that covers every type of orbit, then in principle we regard these elements …
Localized basis sets for unbound electrons in nanoelectronics.
Soriano, D; Jacob, D; Palacios, J J
2008-02-21
It is shown how unbound electron wave functions can be expanded in suitably chosen localized basis sets for any desired range of energies. In particular, we focus on the use of Gaussian basis sets, commonly used in first-principles codes. The possible usefulness of these basis sets in a first-principles description of field emission or scanning tunneling microscopy at large bias is illustrated by studying a simpler related phenomenon: the lifetime of an electron in a H atom subjected to a strong electric field.
JPRS Report, Science & Technology, Japan.
1988-05-04
360 tons of HAP (hydroxyapatite) for medical applications, as food additives, and for use in toothpastes is imported. There are plenty of raw … A dynamic pressure ratio of the jet to the main flow of q_r = \rho_j u_j^2 / (\rho_m u_m^2) \approx 1.5 is chosen for the operating parameters of the jet. Quartz glass for … intermediate flow. The parameters which affect self-ignition and flame maintenance in an actual supersonic burner are the size of the recirculation …
Simulation of a Radio-Frequency Photogun for the Generation of Ultrashort Beams
NASA Astrophysics Data System (ADS)
Nikiforov, D. A.; Levichev, A. E.; Barnyakov, A. M.; Andrianov, A. V.; Samoilov, S. L.
2018-04-01
A radio-frequency photogun for the generation of ultrashort electron beams to be used in fast electron diffractoscopy, wakefield acceleration experiments, and the design of accelerating structures of the millimeter range is modeled. The beam parameters at the photogun output needed for each type of experiment are determined. The general outline of the photogun is given, its electrodynamic parameters are calculated, and the accelerating field distribution is obtained. The particle dynamics is analyzed in the context of the required output beam parameters. The optimal initial beam characteristics and field amplitudes are chosen. A conclusion is made regarding the obtained beam parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenzie, IV, George Espy; Goda, Joetta Marie; Grove, Travis Justin
This paper examines MCNP® code's capability to calculate kinetics parameters effectively for a thermal system containing highly enriched uranium (HEU). The Rossi-α parameter was chosen for this examination because it is relatively easy to measure as well as easy to calculate using MCNP®'s kopts card. The Rossi-α also incorporates many other parameters of interest in nuclear kinetics, most of which are more difficult to measure precisely. Two different nuclear data libraries, ENDF/B-VI (.66c) and ENDF/B-VII (.80c), are compared against the experimental data.
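For orientation, in one common point-kinetics convention (an assumption, not quoted from the paper), the prompt-neutron decay constant measured by the Rossi-α technique is

    \alpha = \frac{\rho - \beta_{eff}}{\Lambda},

so that at delayed critical (\rho = 0) it reduces to \alpha = -\beta_{eff} / \Lambda, tying the measured quantity to the effective delayed-neutron fraction and the neutron generation time, both of which are otherwise difficult to measure directly.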
Preliminary structural design of a lunar transfer vehicle aerobrake. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bush, Lance B.
1992-01-01
An aerobrake concept for a lunar transfer vehicle was weight-optimized through the use of the Taguchi design method, structural finite element analyses, and structural sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration: honeycomb core thickness, diameter-to-depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The minimum-weight aerobrake configuration resulting from the study was approximately half the weight of the average of all twenty-seven experimental configurations. The parameters having the most significant impact on the aerobrake structural weight were identified.
Rincent, R; Laloë, D; Nicolas, S; Altmann, T; Brunel, D; Revilla, P; Rodríguez, V M; Moreno-Gonzalez, J; Melchinger, A; Bauer, E; Schoen, C-C; Meyer, N; Giauffret, C; Bauland, C; Jamin, P; Laborde, J; Monod, H; Flament, P; Charcosset, A; Moreau, L
2012-10-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix-best linear unbiased predictions model (RA-BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request.
USDA-ARS?s Scientific Manuscript database
A standardized set of 12 microsatellite markers, previously agreed upon following an ECP/GR workshop in 2006, was used to screen accessions from the UK National Pear Collection at Brogdale and from the US National Pear Germplasm Repository (NCGR), Corvallis. Eight standard varieties were chosen from...
ERIC Educational Resources Information Center
Boyer, Leanna; Roth, Wolff-Michael
2006-01-01
Around the world, many people concerned with the state of the environment participate in environmental action groups. Much of their learning occurs informally, simply by participating in the everyday, ongoing collective life of the chosen group. Such settings provide unique opportunities for studying how people learn science in complex settings…
The Use of the Ambulatory Setting for Patient Self-Education.
ERIC Educational Resources Information Center
Newkirk, Gary; And Others
1979-01-01
A self-instructional health education program that utilizes a slide-tape device was studied to determine whether it could be educationally effective in an ambulatory clinical setting without being an inconvenience to patients. Infant and child nutrition was chosen as the topic to be used in the waiting room of a pediatric clinic. (JMD)
NASA Astrophysics Data System (ADS)
Affendi, I. H. H.; Sarah, M. S. P.; Alrokayan, Salman A. H.; Khan, Haseeb A.; Rusop, M.
2018-05-01
The sol-gel spin coating method was used to produce nanostructured TiO2 thin films. The surface topology and morphology were observed using Atomic Force Microscopy (AFM) and Field Emission Scanning Electron Microscopy (FESEM). The electrical properties were investigated using two-probe current-voltage (I-V) measurements to study the electrical resistivity, and hence the conductivity, of the thin films. The solution concentration was varied from 14.0 to 0.01wt% in 0.02wt% intervals (with a 0.01wt% interval between the last two concentrations, 0.02 and 0.01wt%) to find which concentration gives the highest conductivity; the sample with the optimized concentration was then used for the thickness study, based on layer-by-layer deposition from 1 to 6 layers. The results show that at the lowest TiO2 concentration the surface becomes more uniform and the conductivity increases. The 0.01wt% sample, with a conductivity of 1.77E-10 S/m, was therefore carried forward to the thickness study, in which the 3-layer deposition was chosen, as its conductivity was the highest at 3.9098E9 S/m.
Bayesian Inference for Generalized Linear Models for Spiking Neurons
Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias
2010-01-01
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean, as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
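For orientation, a compact stand-in for GLM fitting is sketched below; unlike the paper, which approximates the full posterior by Expectation Propagation under a Laplace prior, this sketch computes only a MAP point estimate under a Gaussian prior, purely to keep the example short:

    import numpy as np
    from scipy.optimize import minimize

    def fit_glm_map(X, y, prior_var=1.0):
        """MAP weights of a Poisson GLM with exponential nonlinearity and an
        independent Gaussian prior (variance prior_var) on the weights."""
        def neg_log_post(w):
            eta = X @ w
            # negative Poisson log-likelihood (up to a constant) + log-prior
            return -(y @ eta - np.exp(eta).sum()) + w @ w / (2 * prior_var)
        return minimize(neg_log_post, np.zeros(X.shape[1]), method="BFGS").x

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5))                 # stimulus features
    w_true = np.array([0.5, -0.3, 0.0, 0.2, 0.1])
    y = rng.poisson(np.exp(X @ w_true))           # simulated spike counts
    print(fit_glm_map(X, y).round(2))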
NASA Technical Reports Server (NTRS)
Theofylaktos, Onoufrios; Warner, Joseph D.; Sheehe, Charles J.
2012-01-01
An experiment was performed to determine the degradation in the bit-error rate (BER) of the high-data-rate cables chosen for the Orion Service Module due to extreme launch conditions of vibrations with a magnitude of 60g. The cable type chosen for the Orion Service Module was no. 8 quadrax cable. The increase in electrical noise induced on these no. 8 quadrax cables was measured at the NASA Glenn vibration facility in the Structural Dynamics Laboratory. The intensity of the vibrations was set at 32g, which was the maximum available level at the facility. The cable lengths used during measurements were 1, 4, and 8 m. The noise measurements were made in an analog fashion using a performance network analyzer (PNA) by recording the standard deviation of the transmission scattering parameter S(sub 21) over the frequency range of 100 to 900 MHz. The standard deviation of S(sub 21) was measured before, during, and after the vibration of the cables at the vibration facility. We observed an increase in noise by a factor of 2 to 6. From these measurements we estimated the increase expected in the BER for a cable length of 25 m and concluded that the noise increase due to vibration is large enough that it must be taken into account in the design of the communication system for a BER of 10(exp -8).
Investigations of Section Speed on Rural Roads in Podlaskie Voivodeship
NASA Astrophysics Data System (ADS)
Ziolkowski, Robert
2017-10-01
Excessive speed is one of the most important factors in road safety: it not only affects the severity of a crash but is also related to the risk of being involved in one. In Poland the problem of speeding drivers is widespread. Properly recognizing and characterizing driver behaviour is the basis for any effective road safety improvement. Effective enforcement of speed limits, especially on rural roads, plays an important role, but speed investigations to date have focused mainly on spot speed, omitting travel speed over longer road sections, which better reflects driver behaviour. Possible solutions for rural roads are limited to administrative speed limits, installation of speed cameras, and police enforcement; given their limited proven effectiveness, however, new solutions are still being sought. High expectations are associated with the section speed control system that has recently been introduced in Poland and covers a number of national road sections. The aim of this paper is to investigate section speed on chosen regional and district roads located in Podlaskie Voivodeship. The test sections included 19 road segments varying in functional and geometric characteristics. Speed measurements on regional and district roads were performed with a set of two ANPR (Automatic Number Plate Recognition) cameras. The research allowed drivers' behaviour to be compared in terms of travel speed depending on the road's functional classification, and the influence of chosen geometric parameters on average section speed to be evaluated.
A designed screening study with prespecified combinations of factor settings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson-cook, Christine M; Robinson, Timothy J
2009-01-01
In many applications, the experimenter has limited options about what factor combinations can be chosen for a designed study. Consider a screening study for a production process involving five input factors whose levels have been previously established. The goal of the study is to understand the effect of each factor on the response, a variable that is expensive to measure and results in destruction of the part. From an inventory of available parts with known factor values, we wish to identify a best collection of factor combinations with which to estimate the factor effects. Though the observational nature of the study cannot establish a causal relationship involving the response and the factors, the study can increase understanding of the underlying process. The study can also help determine where investment should be made to control input factors during production that will maximally influence the response. Because the factor combinations are observational, the chosen model matrix will be nonorthogonal and will not allow independent estimation of factor effects. In this manuscript we borrow principles from design of experiments to suggest an 'optimal' selection of factor combinations. Specifically, we consider precision of model parameter estimates, the issue of replication, and abilities to detect lack of fit and to estimate two-factor interactions. Through an example, we present strategies for selecting a subset of factor combinations that simultaneously balance multiple objectives, conduct a limited sensitivity analysis, and provide practical guidance for implementing our techniques across a variety of quality engineering disciplines.
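As a sketch of one ingredient of such a selection (the paper balances several criteria, not only this one), a greedy search maximizing det(X'X), the usual D-optimality surrogate for precise effect estimates, could look like:

    import numpy as np

    def greedy_d_optimal(candidates, n_runs):
        """Greedily add candidate rows (available parts) that maximize
        det(X'X) of the growing model matrix; a tiny ridge keeps the
        determinant defined before the matrix reaches full rank."""
        chosen, remaining = [], list(range(len(candidates)))
        for _ in range(n_runs):
            scores = [(np.linalg.det(candidates[chosen + [i]].T
                                     @ candidates[chosen + [i]]
                                     + 1e-9 * np.eye(candidates.shape[1])), i)
                      for i in remaining]
            _, best = max(scores)
            chosen.append(best)
            remaining.remove(best)
        return chosen

    # 30 available parts: intercept column + 5 previously set factor levels.
    rng = np.random.default_rng(0)
    cands = np.hstack([np.ones((30, 1)), rng.choice([-1.0, 1.0], (30, 5))])
    print(greedy_d_optimal(cands, n_runs=12))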
Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten
2011-05-01
To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone, and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 on the same accelerator were included, and the accelerator used for the treatments was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose-volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for the heart and lungs were calculated using the relative seriality model and the LKB model, respectively; NTCP for the lungs was calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences were found between the calculated doses to the heart, lungs, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to a change between the investigated dose calculation algorithms. However, the dose levels for the PTV, averaged over the patient population, vary by up to 11% across the algorithms. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or between 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of the use of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set; for fixed values of these, the changes in NTCP can be up to 45%. Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important, when working with NTCP planning, to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
Algorithm Sorts Groups Of Data
NASA Technical Reports Server (NTRS)
Evans, J. D.
1987-01-01
For efficient sorting, algorithm finds set containing minimum or maximum most significant data. Sets of data sorted as desired. Sorting process simplified by reduction of each multielement set of data to single representative number. First, each set of data expressed as polynomial with suitably chosen base, using elements of set as coefficients. Most significant element placed in term containing largest exponent. Base selected by examining range in value of data elements. Resulting series summed to yield single representative number. Numbers easily sorted, and each such number converted back to original set of data by successive division. Program written in BASIC.
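A minimal sketch of the encoding just described, in Python rather than the original BASIC (the base and data values are illustrative; the base must exceed the largest element value):

    def encode(data_set, base):
        """Reduce a multi-element set of data to one representative number:
        elements become polynomial coefficients in the chosen base, with the
        most significant element in the highest-order term."""
        value = 0
        for element in data_set:
            value = value * base + element
        return value

    def decode(value, base, length):
        """Recover the original elements by successive division."""
        out = []
        for _ in range(length):
            value, r = divmod(value, base)
            out.append(r)
        return out[::-1]

    groups = [[3, 1, 4], [2, 7, 1], [3, 0, 9]]
    for g in sorted(groups, key=lambda s: encode(s, base=10)):
        print(g, "->", encode(g, base=10))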
NASA Astrophysics Data System (ADS)
Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.
2015-03-01
During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the according reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.
Inferring the gravitational potential of the Milky Way with a few precisely measured stars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price-Whelan, Adrian M.; Johnston, Kathryn V.; Hendel, David
2014-10-10
The dark matter halo of the Milky Way is expected to be triaxial and filled with substructure. It is hoped that streams or shells of stars produced by tidal disruption of stellar systems will provide precise measures of the gravitational potential to test these predictions. We develop a method for inferring the Galactic potential with tidal streams based on the idea that the stream stars were once close in phase space. Our method can flexibly adapt to any form for the Galactic potential: it works in phase-space rather than action-space and hence relies neither on our ability to derive actions nor on the integrability of the potential. Our model is probabilistic, with a likelihood function and priors on the parameters. The method can properly account for finite observational uncertainties and missing data dimensions. We test our method on synthetic data sets generated from N-body simulations of satellite disruption in a static, multi-component Milky Way, including a triaxial dark matter halo, with observational uncertainties chosen to mimic current and near-future surveys of various stars. We find that with just eight well-measured stream stars, we can infer properties of a triaxial potential with precisions of the order of 5%-7%. Without proper motions, we obtain 10% constraints on most potential parameters and precisions around 5%-10% for recovering missing phase-space coordinates. These results are encouraging for the goal of using flexible, time-dependent potential models combined with larger data sets to unravel the detailed shape of the dark matter distribution around the Milky Way.
NASA Astrophysics Data System (ADS)
Audebert, M.; Clément, R.; Touze-Foltz, N.; Günther, T.; Moreau, S.; Duquennoi, C.
2014-12-01
Leachate recirculation is a key process in municipal waste landfills functioning as bioreactors. To quantify the water content and to assess the leachate injection system, in-situ methods are required to obtain spatially distributed information, usually electrical resistivity tomography (ERT). This geophysical method is based on the inversion process, which presents two major problems in terms of delimiting the infiltration area. First, it is difficult for ERT users to choose an appropriate inversion parameter set. Indeed, it might not be sufficient to interpret only the optimum model (i.e. the model with the chosen regularisation strength) because it is not necessarily the model which best represents the physical process studied. Second, it is difficult to delineate the infiltration front based on resistivity models because of the smoothness of the inversion results. This paper proposes a new methodology called MICS (multiple inversions and clustering strategy), which allows ERT users to improve the delimitation of the infiltration area in leachate injection monitoring. The MICS methodology is based on (i) a multiple inversion step by varying the inversion parameter values to take a wide range of resistivity models into account and (ii) a clustering strategy to improve the delineation of the infiltration front. In this paper, MICS was assessed on two types of data. First, a numerical assessment allows us to optimise and test MICS for different infiltration area sizes, contrasts and shapes. Second, MICS was applied to a field data set gathered during leachate recirculation on a bioreactor.
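A schematic sketch of the two MICS steps under broad assumptions (the paper's actual inversion engine and clustering choices are not specified here; `run_inversion` is a hypothetical stand-in for an ERT inversion call):

```python
import numpy as np
from sklearn.cluster import KMeans

def mics(data, inversion_params, run_inversion, n_clusters=2):
    """Multiple Inversions and Clustering Strategy (schematic).

    Step 1: invert the same ERT data set under many inversion parameter
    values, so no single regularisation choice dominates the interpretation.
    Step 2: cluster the model cells using their resistivity across all
    inversions, turning the smooth models into a crisp infiltrated /
    non-infiltrated delineation.
    """
    # Each inversion returns one resistivity value per model cell.
    models = np.array([run_inversion(data, p) for p in inversion_params])
    features = np.log10(models).T      # shape: (n_cells, n_inversions)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return labels                      # cluster index per model cell
```

Two clusters are the natural choice when the goal is simply to separate the infiltration area from the background.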
Solar Prominence Modelling and Plasma Diagnostics at ALMA Wavelengths
NASA Astrophysics Data System (ADS)
Rodger, Andrew; Labrosse, Nicolas
2017-09-01
Our aim is to test potential solar prominence plasma diagnostics as obtained with the new solar capability of the Atacama Large Millimeter/submillimeter Array (ALMA). We investigate the thermal and plasma diagnostic potential of ALMA for solar prominences through the computation of brightness temperatures at ALMA wavelengths. The brightness temperature, for a chosen line of sight, is calculated using the densities of electrons, hydrogen, and helium obtained from a radiative transfer code under non-local thermodynamic equilibrium (non-LTE) conditions, as well as the input internal parameters of the prominence model in consideration. Two distinct sets of prominence models were used: isothermal-isobaric fine-structure threads, and large-scale structures with radially increasing temperature distributions representing the prominence-to-corona transition region. We compute brightness temperatures over the range of wavelengths in which ALMA is capable of observing (0.32 - 9.6 mm); however, we particularly focus on the bands available to solar observers in ALMA cycles 4 and 5, namely 2.6 - 3.6 mm (Band 3) and 1.1 - 1.4 mm (Band 6). We show how the computed brightness temperatures and optical thicknesses in our models vary with the plasma parameters (temperature and pressure) and the wavelength of observation. We then study how ALMA observables such as the ratio of brightness temperatures at two frequencies can be used to estimate the optical thickness and the emission measure for isothermal and non-isothermal prominences. From this study we conclude that for both sets of models, ALMA presents a strong thermal diagnostic capability, provided that the interpretation of observations is supported by the use of non-LTE simulation results.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell-Maupin, Kathryn; Oden, J. T.
2017-08-01
This study describes versions of OPAL, the Occam-Plausibility Algorithm, in which the use of Bayesian model plausibilities is replaced with information-theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
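As a reminder of what the two information-theoretic criteria compute, a generic sketch (the candidate models and numbers are hypothetical, not OPAL's): for a model with maximized log-likelihood ln L, k parameters, and n observations,

```python
import numpy as np

def aic(log_likelihood, k):
    """Akaike Information Criterion: AIC = 2k - 2 ln L."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayes (Schwarz) Information Criterion: BIC = k ln n - 2 ln L."""
    return k * np.log(n) - 2 * log_likelihood

# Hypothetical candidates: (name, maximized log-likelihood, number of parameters)
candidates = [("CG-1", -1043.2, 3), ("CG-2", -1021.7, 6), ("CG-3", -1019.9, 11)]
n_obs = 500
print(min(candidates, key=lambda m: aic(m[1], m[2]))[0])         # AIC choice
print(min(candidates, key=lambda m: bic(m[1], m[2], n_obs))[0])  # BIC choice
```

The lowest criterion value wins; BIC penalizes extra parameters more heavily than AIC as n grows.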
Borrok, D.; Turner, B.F.; Fein, J.B.
2005-01-01
Adsorption onto bacterial cell walls can significantly affect the speciation and mobility of aqueous metal cations in many geologic settings. However, a unified thermodynamic framework for describing bacterial adsorption reactions does not exist. This problem originates from the numerous approaches that have been chosen for modeling bacterial surface protonation reactions. In this study, we compile all currently available potentiometric titration datasets for individual bacterial species, bacterial consortia, and bacterial cell wall components. Using a consistent, four discrete site, non-electrostatic surface complexation model, we determine total functional group site densities for all suitable datasets, and present an averaged set of 'universal' thermodynamic proton binding and site density parameters for modeling bacterial adsorption reactions in geologic systems. Modeling results demonstrate that the total concentrations of proton-active functional group sites for the 36 bacterial species and consortia tested are remarkably similar, averaging 3.2 ± 1.0 (1σ) × 10-4 moles/wet gram. Examination of the uncertainties involved in the development of proton-binding modeling parameters suggests that ignoring factors such as bacterial species, ionic strength, temperature, and growth conditions introduces relatively small error compared to the unavoidable uncertainty associated with the determination of cell abundances in realistic geologic systems. Hence, we propose that reasonable estimates of the extent of bacterial cell wall deprotonation can be made using averaged thermodynamic modeling parameters from all of the experiments that are considered in this study, regardless of bacterial species used, ionic strength, temperature, or growth condition of the experiment. The average site densities for the four discrete sites are 1.1 ± 0.7 × 10-4, 9.1 ± 3.8 × 10-5, 5.3 ± 2.1 × 10-5, and 6.6 ± 3.0 × 10-5 moles/wet gram bacteria for the sites with pKa values of 3.1, 4.7, 6.6, and 9.0, respectively. It is our hope that this thermodynamic framework for modeling bacteria-proton binding reactions will also provide the basis for the development of an internally consistent set of bacteria-metal binding constants. 'Universal' constants for bacteria-metal binding reactions can then be used in conjunction with equilibrium constants for other important metal adsorption and complexation reactions to calculate the overall distribution of metals in realistic geologic systems.
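Given the averaged 'universal' parameters above, the extent of cell-wall deprotonation at a given pH can be sketched by treating each discrete site as an independent monoprotic acid (consistent with the non-electrostatic model used; the pH values below are illustrative):

```python
import numpy as np

# Averaged 'universal' parameters from the study: site concentrations
# (moles per wet gram bacteria) and their pKa values.
site_conc = np.array([1.1e-4, 9.1e-5, 5.3e-5, 6.6e-5])
site_pka = np.array([3.1, 4.7, 6.6, 9.0])

def deprotonated_sites(ph):
    """Concentration of deprotonated sites at a given pH.

    In this non-electrostatic model each discrete site behaves as an
    independent monoprotic acid, so the deprotonated fraction of site i
    is 1 / (1 + 10**(pKa_i - pH)).
    """
    fraction = 1.0 / (1.0 + 10.0 ** (site_pka - ph))
    return float(np.sum(site_conc * fraction))

for ph in (4.0, 6.0, 8.0):
    print(f"pH {ph}: {deprotonated_sites(ph):.2e} mol deprotonated sites / wet g")
```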
Skewness and kurtosis analysis for non-Gaussian distributions
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2018-06-01
In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however it fails for sufficiently large data sets, if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size, N, of the data set for which the standard kurtosis saturates to a fixed value, depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for finite fourth moment distributions. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range of 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets, unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
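A quick numerical illustration of the size dependence discussed above, using the standard (excess) kurtosis only (the q-kurtosis machinery is not sketched here; distributions and seed are illustrative):

```python
import numpy as np
from scipy.stats import kurtosis  # Fisher (excess) kurtosis; Gaussian -> 0

rng = np.random.default_rng(0)
for n in (10**2, 10**4, 10**6):
    laplace = rng.laplace(size=n)         # finite 4th moment: saturates near 3
    cauchy = rng.standard_cauchy(size=n)  # infinite 4th moment: never settles
    print(n, round(kurtosis(laplace), 2), round(kurtosis(cauchy), 2))
```

The Laplace estimate converges to its theoretical value of 3 as n grows, while the Cauchy estimate wanders without bound, exactly the contrast the paper warns about.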
NASA Astrophysics Data System (ADS)
Kovacs, S.; Beier, T.; Woestmann, S.
2017-09-01
The demands on materials for automotive applications are steadily increasing. For chassis components, the trend is towards thinner and higher strength materials for weight and cost reduction. In view of attainable strengths of up to 1200 MPa for hot rolled materials, certain aspects need to be analysed and evaluated in advance in the development process using these materials. Collars in particular, for example in control arms, have been a focus of part and process design. Edge and surface cracks are observed when geometry and process layout are improper. The hole expansion capability of the chosen material grade has a direct influence on the achievable collar height. In general, shear cutting reduces the residual formability of blank edges and the hole expansion capability. In this paper, using the example of the complex phase steel CP-W® 800 of thyssenkrupp, it is shown how a suitable collar geometry and optimum shear cutting parameters can be chosen.
A Comparison of the Forecast Skills among Three Numerical Models
NASA Astrophysics Data System (ADS)
Lu, D.; Reddy, S. R.; White, L. J.
2003-12-01
Three numerical weather forecast models, MM5, COAMPS and WRF, operating with a joint effort of NOAA HU-NCAS and Jackson State University (JSU) during summer 2003, have been chosen to study their forecast skills against observations. The models forecast over the same region with the same initialization, boundary condition, forecast length and spatial resolution. The AVN global dataset has been ingested as initial conditions. A grid resolution of 27 km is chosen to represent the current mesoscale model. Forecasts of 36-h length are performed, with output at 12-h intervals. The key parameters used to evaluate the forecast skill include 12-h accumulated precipitation, sea level pressure, wind, surface temperature and dew point. Precipitation is evaluated statistically using conventional skill scores, Threat Score (TS) and Bias Score (BS), for different threshold values based on 12-h rainfall observations, whereas other statistical measures such as Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are applied to the other forecast parameters.
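For reference, the two categorical precipitation scores reduce to simple 2x2 contingency counts; a sketch (the verification values and threshold are illustrative):

```python
import numpy as np

def precipitation_skill(forecast, observed, threshold):
    """Threat Score and Bias Score from contingency-table counts.

    forecast, observed: 12-h accumulated precipitation at verification
    points (arrays); threshold: rainfall amount defining an 'event'.
    """
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    ts = hits / (hits + misses + false_alarms)    # 1 = perfect, 0 = no skill
    bs = (hits + false_alarms) / (hits + misses)  # >1 over-, <1 under-forecast
    return ts, bs

# Illustrative 12-h totals (mm) at five verification points, 5 mm threshold.
print(precipitation_skill([0, 7, 12, 2, 6], [1, 5, 15, 6, 0], 5.0))
```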
Online SAXS investigations of polymeric hollow fibre membranes.
Pranzas, P Klaus; Knöchel, Arndt; Kneifel, Klemens; Kamusewitz, Helmut; Weigel, Thomas; Gehrke, Rainer; Funari, Sérgio S; Willumeit, Regine
2003-07-01
Polymeric membranes are used in industrial and analytical separation techniques. In this study small-angle X-ray scattering (SAXS) with synchrotron radiation has been applied for in-situ characterisation during formation of polymeric membranes. The spinning of a polyetherimide (PEI) hollow fibre membrane was chosen for investigation of dynamic aggregation processes during membrane formation, because it allows the measurement of the dynamic equilibrium at different distances from the spinning nozzle. With this system it is possible to resolve structural changes in the nm-size range which occur during membrane formation on the time-scale of milliseconds. Integral structural parameters, like radius of gyration and pair-distance distribution, were determined. Depending on the chosen spinning parameters, e.g. the flow ratio between polymer solution and coagulant water, significant changes in the scattering curves have been observed. The data are correlated with the distance from the spinning nozzle in order to get information about the kinetics of membrane formation which has fundamental influence on structure and properties of the membrane.
NASA Astrophysics Data System (ADS)
Wisniewski, H.; Gourdain, P.-A.
2017-10-01
APOLLO is an online, Linux based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver where a FastCGI protocol computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by JAVA-based plugins. The FastCGI also speeds up calculations over PHP based systems. APOLLO is built upon the WT library, which turns any web browser into a versatile, fast graphic user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift to using SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the proper equations used to calculate the plasma parameters. This system is intended to be used by undergraduates taking plasma courses as well as graduate students and researchers who need a quick reference calculation.
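Two of the parameters such a calculator computes, sketched in SI units with temperature in electron-volts to match the convention described (the density and temperature values are illustrative):

```python
import numpy as np

# Physical constants (SI)
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E = 1.602176634e-19       # elementary charge, C
ME = 9.1093837015e-31     # electron mass, kg

def plasma_frequency(n_e):
    """Electron plasma (angular) frequency, rad/s, for density n_e in m^-3."""
    return np.sqrt(n_e * E**2 / (EPS0 * ME))

def debye_length(n_e, t_e_ev):
    """Electron Debye length, m, with temperature in eV (k_B*T = t_e_ev * E)."""
    return np.sqrt(EPS0 * t_e_ev * E / (n_e * E**2))

n_e = 1e19   # m^-3
t_e = 10.0   # eV
print(f"omega_pe = {plasma_frequency(n_e):.3e} rad/s")
print(f"lambda_D = {debye_length(n_e, t_e):.3e} m")
```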
2011-02-01
[Fragment; table and figure residue removed. Recoverable content:] A design-of-experiments (DOE) study of plasma synthesis, with run parameters plasma power, feed rate, system pressure and quench rate. Particle size was chosen as the measured response due to its predominant effect on material properties. The results of the DOE showed that feed rate and quench rate have the largest effect on particle size. All synthesized powders were characterized by thermogravimetric …
NASA Astrophysics Data System (ADS)
George, N. J.; Akpan, A. E.; Akpan, F. S.
2017-12-01
An integrated study exploring information deduced from an extensive surface resistivity survey in three Local Government Areas of Akwa Ibom State, Nigeria, together with data from hydrogeological sources obtained from water boreholes, has been used to economically estimate porosity and the coefficient of permeability/hydraulic conductivity in parts of the clastic Tertiary-Quaternary sediments of the Niger Delta region. Generally, these parameters are predominantly estimated in the laboratory from empirical analysis of core samples and pumping test data generated from boreholes. However, this analysis is not only costly and time consuming, but also limited in areal coverage. The chosen technique employs surface resistivity data, core samples and pumping test data in order to estimate porosity and aquifer hydraulic parameters (transverse resistance, hydraulic conductivity and transmissivity). In correlating the two sets of results, porosity and hydraulic conductivity were observed to be more elevated near the riverbanks. Empirical models utilising the Archie, Waxman-Smits and Kozeny-Carman (Bear) relations were employed to characterise the formation parameters, with good fits obtained. The effect of surface conduction occasioned by clay, usually disregarded or ignored in Archie's model, was estimated to be 2.58 × 10-5 Siemens. This conductance can be used as a corrective factor to the conduction values obtained from Archie's equation. Interpretation aids such as graphs, mathematical models and maps, geared towards realistic conclusions about the interrelationship between porosity and the other aquifer parameters, were generated. The hydraulic conductivity estimated from the Waxman-Smits model was approximately 9.6 × 10-5 m/s everywhere. This indicates that there is no pronounced change in the quality of the saturating fluid or in the geological formations that serve as aquifers, even though the porosities vary. The derived parameter relations can be used to estimate geohydraulic parameters in other locations with little or no borehole data.
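As an illustration of the kind of relation used, Archie's law for a clean, fully saturated formation links bulk resistivity to porosity; the sketch below ignores the clay surface-conduction correction estimated in the study, and the a and m defaults and sample resistivities are generic assumptions:

```python
def archie_porosity(rho_bulk, rho_water, a=1.0, m=2.0):
    """Estimate porosity from Archie's law for a clean, saturated sand.

    Formation factor F = rho_bulk / rho_water = a * phi**(-m),
    hence phi = (a / F)**(1/m). The tortuosity factor a and cementation
    exponent m are normally calibrated against core or pumping-test data.
    """
    formation_factor = rho_bulk / rho_water
    return (a / formation_factor) ** (1.0 / m)

# Illustrative values: 120 ohm-m aquifer resistivity, 20 ohm-m pore water.
print(f"phi = {archie_porosity(120.0, 20.0):.2f}")
```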
Petruzielo, F R; Toulouse, Julien; Umrigar, C J
2011-02-14
A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.
A method of evaluating quantitative magnetospheric field models by an angular parameter alpha
NASA Technical Reports Server (NTRS)
Sugiura, M.; Poros, D. J.
1979-01-01
The paper introduces an angular parameter, termed alpha, which represents the angular difference between the observed, or model, field and the internal model field. The study discusses why this parameter is chosen and demonstrates its usefulness by applying it to both observations and models. In certain areas alpha is more sensitive than delta-B (the difference between the magnitude of the observed magnetic field and that of the earth's internal field calculated from a spherical harmonic expansion) in expressing magnetospheric field distortions. It is recommended to use both alpha and delta-B in comparing models with observations.
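Both diagnostics can be computed directly from field vectors; a minimal sketch (the vector values are illustrative):

```python
import numpy as np

def alpha_deg(b_obs, b_int):
    """Angle alpha (degrees) between the observed (or model) field and the
    internal reference field, both 3-component vectors in nT."""
    cosine = np.dot(b_obs, b_int) / (np.linalg.norm(b_obs) * np.linalg.norm(b_int))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

def delta_b(b_obs, b_int):
    """delta-B: difference of field magnitudes (nT)."""
    return np.linalg.norm(b_obs) - np.linalg.norm(b_int)

b_observed = np.array([105.0, -20.0, 30.0])
b_internal = np.array([100.0, -15.0, 40.0])
print(alpha_deg(b_observed, b_internal), delta_b(b_observed, b_internal))
```

Note how a pure rotation of the field changes alpha while leaving delta-B untouched, which is why the two measures complement each other.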
Raman spectroscopy for the control of the atmospheric bioindicators
NASA Astrophysics Data System (ADS)
Timchenko, E. V.; Timchenko, P. E.; Shamina, L. A.; Zherdeva, L. A.
2015-09-01
Experimental studies of optical parameters of different atmospheric bioindicators (arboreous and terricolous types of plants) have been performed with Raman spectroscopy. The change in the optical parameters has been explored for the objects under direct light exposure, as well as for the objects placed in the shade. The age peculiarities of the bioindicators have also been taken into consideration. It was established that the statistical variability of optical parameters for arboreous bioindicators was from 9% to 15% and for plants from 4% to 8.7%. On the basis of these results dandelion (Taraxacum) was chosen as a bioindicator of atmospheric emissions.
The Cut-Score Operating Function: A New Tool to Aid in Standard Setting
ERIC Educational Resources Information Center
Grabovsky, Irina; Wainer, Howard
2017-01-01
In this essay, we describe the construction and use of the Cut-Score Operating Function in aiding standard setting decisions. The Cut-Score Operating Function shows the relation between the cut-score chosen and the consequent error rate. It allows error rates to be defined by multiple loss functions and will show the behavior of each loss…
Frequently Asked Questions about Bunion Surgery
... supports or orthotics in their shoe. 15. If screws or plates are implanted in my foot to correct my bunion, will they set off metal detectors? Not usually. It can depend on the device chosen for your ...
Grammar A and Grammar B: Rhetorical Life and Death.
ERIC Educational Resources Information Center
Guinn, Dorothy Margaret
In the past, writers have chosen stylistic devices within the parameters of the traditional grammar of style, "Grammar A," characterized by analyticity, coherence, and clarity. But many contemporary writers are creating a new grammar of style, "Grammar B," characterized by synchronicity, discontinuity, and ambiguity, which…
Code of Federal Regulations, 2010 CFR
2010-01-01
... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...
Code of Federal Regulations, 2012 CFR
2012-01-01
... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...
Code of Federal Regulations, 2014 CFR
2014-01-01
... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... radioactive waste within a designated boundary. Design bases means that information that identifies the... values or ranges of values chosen for controlling parameters as reference bounds for design. These values... events to be used for deriving design bases that will be based on consideration of historical data on the...
Minimal residual method provides optimal regularization parameter for diffuse optical tomography
NASA Astrophysics Data System (ADS)
Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.
2012-10-01
The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
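The MRM selection rule itself is not reproduced in the abstract, so the sketch below only illustrates the generic machinery it operates on: solve the Tikhonov-regularized problem over a sweep of regularization parameters and select the one minimizing a residual-based criterion (here a discrepancy-principle stand-in, not the paper's MRM; the matrix and noise level are synthetic):

```python
import numpy as np

def tikhonov_solve(J, y, lam):
    """Solve the regularized normal equations (J^T J + lam^2 I) x = J^T y."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam**2 * np.eye(n), J.T @ y)

def choose_lambda(J, y, lambdas, noise_norm):
    """Pick the lambda whose data residual best matches the expected noise
    norm (discrepancy principle; a stand-in for the MRM criterion)."""
    def criterion(lam):
        x = tikhonov_solve(J, y, lam)
        return abs(np.linalg.norm(J @ x - y) - noise_norm)
    return min(lambdas, key=criterion)

rng = np.random.default_rng(1)
J = rng.normal(size=(40, 20))   # stand-in sensitivity (Jacobian) matrix
y = J @ rng.normal(size=20) + 0.05 * rng.normal(size=40)
lam = choose_lambda(J, y, np.logspace(-4, 1, 30), noise_norm=0.05 * np.sqrt(40))
print(f"selected lambda = {lam:.3g}")
```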
NASA Technical Reports Server (NTRS)
Karmali, M. S.; Phatak, A. V.
1982-01-01
Results of a study to investigate, by means of a computer simulation, the performance sensitivity of helicopter IMC DSAL operations as a function of navigation system parameters are presented. A mathematical model representing generically a navigation system is formulated. The scenario simulated consists of a straight-in helicopter approach to landing along a 6 deg glideslope. The deceleration magnitude chosen is 0.3 g. The navigation model parameters are varied and the statistics of the total system errors (TSE) computed. These statistics are used to determine the critical navigation system parameters that affect the performance of the closed-loop navigation, guidance and control system of a UH-1H helicopter.
Lai, Zhi-Hui; Leng, Yong-Gang
2015-08-28
A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator for the necessity of parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model; and to analyze and summarize the parameter-adjusted rules under unmatched signal amplitude, frequency, and/or noise-intensity. Furthermore, we propose the weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering application.
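The judgmental function itself is not reproduced in the abstract, but the standard overdamped Kramers escape rate it builds on, for a symmetric bistable potential V(x) with minima at ±x_m, barrier top at x = 0, barrier height ΔV, and noise intensity D, is commonly written as:

```latex
r_K = \frac{\sqrt{\lvert V''(x_m)\, V''(0) \rvert}}{2\pi}\,
      \exp\!\left(-\frac{\Delta V}{D}\right)
```

In SR analyses a time-scale matching argument is then typical: resonance occurs when the Kramers time 1/r_K becomes comparable to half the period of the weak periodic forcing.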
Design of asymptotic estimators: an approach based on neural networks and nonlinear programming.
Alessandri, Angelo; Cervellera, Cristiano; Sanguineti, Marcello
2007-01-01
A methodology to design state estimators for a class of nonlinear continuous-time dynamic systems that is based on neural networks and nonlinear programming is proposed. The estimator has the structure of a Luenberger observer with a linear gain and a parameterized (in general, nonlinear) function, whose argument is an innovation term representing the difference between the current measurement and its prediction. The problem of the estimator design consists in finding the values of the gain and of the parameters that guarantee the asymptotic stability of the estimation error. Toward this end, if a neural network is used to take on this function, the parameters (i.e., the neural weights) are chosen, together with the gain, by constraining the derivative of a quadratic Lyapunov function for the estimation error to be negative definite on a given compact set. It is proved that it is sufficient to impose the negative definiteness of such a derivative only on a suitably dense grid of sampling points. The gain is determined by solving a Lyapunov equation. The neural weights are searched for via nonlinear programming by minimizing a cost penalizing grid-point constraints that are not satisfied. Techniques based on low-discrepancy sequences are applied to deal with a small number of sampling points, and, hence, to reduce the computational burden required to optimize the parameters. Numerical results are reported and comparisons with those obtained by the extended Kalman filter are made.
A maximally particle-hole asymmetric spectrum emanating from a semi-Dirac point
NASA Astrophysics Data System (ADS)
Quan, Yundi; Pickett, Warren E.
2018-02-01
Tight binding models have proven an effective means of revealing Dirac (massless) dispersion, flat bands (infinite mass), and intermediate cases such as the semi-Dirac (sD) dispersion. This approach is extended to a three band model that yields, with chosen parameters in a two-band limit, a closed line with maximally asymmetric particle-hole dispersion: infinite mass holes, zero mass particles. The model retains the sD points for a general set of parameters. Adjacent to this limiting case, hole Fermi surfaces are tiny and needle-like. A pair of large electron Fermi surfaces at low doping merge and collapse at half filling to a flat (zero energy) closed contour with infinite mass along the contour and enclosing no carriers on either side, while the hole Fermi surface has shrunk to a point at zero energy, also containing no carriers. The tight binding model is used to study several characteristics of the dispersion and density of states. The model inspired generalization of sD dispersion to a general $\pm\sqrt{k_x^{2n}+k_y^{2m}}$ form, for which analysis reveals that both n and m must be odd to provide a diabolical point with topological character. Evolution of the Hofstadter spectrum of this three band system with interband coupling strength is presented and discussed.
An adaptive semi-Lagrangian advection model for transport of volcanic emissions in the atmosphere
NASA Astrophysics Data System (ADS)
Gerwing, Elena; Hort, Matthias; Behrens, Jörn; Langmann, Bärbel
2018-06-01
The dispersion of volcanic emissions in the Earth's atmosphere is of interest for climate research, air traffic control and human wellbeing. Current volcanic emission dispersion models rely on fixed-grid structures that often are not able to resolve the finely filamented structure of volcanic emissions being transported in the atmosphere. Here we extend an existing adaptive semi-Lagrangian advection model for volcanic emissions to include the sedimentation of volcanic ash. The advection of volcanic emissions is driven by a precalculated wind field. For evaluation of the model, the explosive eruption of Mount Pinatubo in June 1991 was chosen, which was one of the largest eruptions of the 20th century. We compare our simulations of the climactic eruption on 15 June 1991 to satellite data of the Pinatubo ash cloud and evaluate different sets of input parameters. We could reproduce the general advection of the Pinatubo ash cloud and, owing to the adaptive mesh, simulations could be performed at high local resolution while minimizing computational cost. Differences from the observed ash cloud are attributed to uncertainties in the input parameters and to the course of Typhoon Yunya, which is probably not completely resolved in the wind data used to drive the model. The best results were achieved for simulations with multiple ash particle sizes.
The impact of climate change on river discharges in Eastern Romania
NASA Astrophysics Data System (ADS)
Croitoru, Adina-Eliza; Minea, Ionut
2014-05-01
Climate change implies many changes in different socioeconomic and environmental fields. Among the most important impacts are changes in water resources. Long- and mid-term river discharge analysis is essential for the effective management of water resources. In this work, the changes in two climatic parameters (temperature and precipitation) and in river discharges, as well as the connections between precipitation and river discharges, were investigated. Seasonal and annual climatic and hydrological data collected at six weather stations and 17 hydrological stations were employed. The data sets cover 57 years (1950-2006). The modified Mann-Kendall test was used to calculate trends, and the Bravais-Pearson correlation index was chosen to detect the connections between the precipitation and river discharge data series. The main findings are as follows: a general increase was identified in all three parameters. The air temperature data series showed the highest frequency of statistically significant slopes, mainly in the annual and spring series. All data series, except the series for winter, showed an increase in precipitation; in winter, a significant decrease in precipitation was observed at most of the stations. The increase in precipitation is reflected in the upward trends of the river discharges, as verified by the good Bravais-Pearson correlations, mainly for the annual, summer, and autumn series.
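For reference, the classical (unmodified) Mann-Kendall statistic underlying the test can be sketched as follows; the modified variant used in the paper additionally corrects the variance for serial correlation, and ties are ignored here (the synthetic series is illustrative):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Classical Mann-Kendall trend test (no tie or autocorrelation
    correction). Returns the S statistic and a two-sided p-value."""
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, 2 * (1 - norm.cdf(abs(z)))

rng = np.random.default_rng(3)
series = np.arange(57) * 0.02 + rng.normal(size=57)  # 57 years, weak upward trend
print(mann_kendall(series))
```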
Kischkel, Sabine; Miekisch, Wolfram; Sawacki, Annika; Straker, Eva M; Trefz, Phillip; Amann, Anton; Schubert, Jochen K
2010-11-11
Up to now, none of the breath biomarkers or marker sets proposed for cancer recognition has reached clinical relevance. Possible reasons are the lack of standardized methods of sampling, analysis and data processing and effects of environmental contaminants. Concentration profiles of endogenous and exogenous breath markers were determined in exhaled breath of 31 lung cancer patients, 31 smokers and 31 healthy controls by means of SPME-GC-MS. Different correcting and normalization algorithms and a principal component analysis were applied to the data. Differences of exhalation profiles in cancer and non-cancer patients did not persist if physiology and confounding variables were taken into account. Smoking history, inspired substance concentrations, age and gender were recognized as the most important confounding variables. Normalization onto PCO2 or BSA or correction for inspired concentrations only partially solved the problem. In contrast, previous smoking behaviour could be recognized unequivocally. Exhaled substance concentrations may depend on a variety of parameters other than the disease under investigation. Normalization and correcting parameters have to be chosen with care as compensating effects may be different from one substance to the other. Only well-founded biomarker identification, normalization and data processing will provide clinically relevant information from breath analysis. 2010 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.
2010-01-01
Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations and statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project.
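A minimal sketch of that Monte Carlo step (the limit-state and the parameter distributions below are invented for illustration, not taken from the Ares I models):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000  # Monte Carlo trials

# Hypothetical physics-based limit state: failure when demand exceeds capacity.
# Each driving parameter is a random variable with an assumed distribution.
capacity = rng.normal(loc=100.0, scale=8.0, size=N)   # e.g., structural margin
demand = rng.lognormal(mean=4.2, sigma=0.25, size=N)  # e.g., peak load

failures = demand > capacity
p_fail = failures.mean()
stderr = np.sqrt(p_fail * (1 - p_fail) / N)           # binomial standard error
print(f"P(failure) = {p_fail:.2e} +/- {stderr:.1e}")
```

Sensitivity follows the same pattern: perturb one input distribution at a time and observe the change in the estimated failure probability.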
Mixing rates and limit theorems for random intermittent maps
NASA Astrophysics Data System (ADS)
Bahsoun, Wael; Bose, Christopher
2016-04-01
We study random transformations built from intermittent maps on the unit interval that share a common neutral fixed point. We focus mainly on random selections of Pomeau-Manneville-type maps {T_α} using the full parameter range 0 < α < ∞, in general. We derive a number of results around a common theme that illustrates in detail how the constituent map that is fastest mixing (i.e. smallest α), combined with details of the randomizing process, determines the asymptotic properties of the random transformation. Our key result (theorem 1.1) establishes sharp estimates on the position of return time intervals for the quenched dynamics. The main applications of this estimate are to limit laws (in particular, CLT and stable laws, depending on the parameters chosen in the range 0 < α < 1) for the associated skew product; these are detailed in theorem 3.2. Since our estimates in theorem 1.1 also hold for 1 ≤ α < ∞, we study a second class of random transformations derived from piecewise affine Gaspard-Wang maps, prove existence of an infinite (σ-finite) invariant measure and study the corresponding correlation asymptotics. To the best of our knowledge, this latter kind of result is completely new in the setting of random transformations.
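A quick sketch of the quenched random dynamics described, using the standard Liverani-Saussol-Vaienti form of the Pomeau-Manneville map as an assumed concrete representative (the α values, mixing probabilities, and orbit length are illustrative):

```python
import numpy as np

def pm_map(x, alpha):
    """Pomeau-Manneville-type map on [0,1] with a neutral fixed point at 0
    (Liverani-Saussol-Vaienti form): T(x) = x(1 + (2x)**alpha) on [0, 1/2),
    T(x) = 2x - 1 on [1/2, 1]."""
    return x * (1 + (2 * x) ** alpha) if x < 0.5 else 2 * x - 1

def random_orbit(x0, alphas, probs, n, rng):
    """Quenched dynamics: at each step apply T_alpha with alpha drawn
    i.i.d. from `alphas` with probabilities `probs`."""
    x, orbit = x0, []
    for _ in range(n):
        x = pm_map(x, rng.choice(alphas, p=probs))
        orbit.append(x)
    return np.array(orbit)

rng = np.random.default_rng(7)
orbit = random_orbit(0.3, alphas=[0.2, 1.5], probs=[0.5, 0.5], n=10_000, rng=rng)
# Long laminar episodes near 0 alternate with chaotic bursts; the smaller
# (faster-mixing) alpha governs the asymptotics, per the paper's theme.
print((orbit < 0.05).mean())
```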
MX Survivability: Passive and Active Defense.
1982-03-01
… coefficient of determination (R²) and model parameters (i.e., b0, b1, b2, and b3) significantly different from zero … The following equation was chosen as the best fit for the data: MX Survivability = b0 + b1·X1, with R² = 0.881, where b0 = 0.2884 (F-ratio 49.26) and b1 = 0.02695 (F-ratio 112.46) … all of the model parameters estimated (i.e., b0 and b1) are significantly different from zero. Substituting 60% MX survivability into this equation …
Player Modeling for Intelligent Difficulty Adjustment
NASA Astrophysics Data System (ADS)
Missura, Olana; Gärtner, Thomas
In this paper we aim at automatically adjusting the difficulty of computer games by clustering players into different types and supervised prediction of the type from short traces of gameplay. An important ingredient of video games is to challenge players by providing them with tasks of appropriate and increasing difficulty. How this difficulty should be chosen, and how it should increase over time, strongly depends on the ability, experience, perception and learning curve of each individual player. It is a subjective parameter that is very difficult to set. Wrong choices can easily lead players to stop playing the game as they get bored (if underburdened) or frustrated (if overburdened). An ideal game should be able to adjust its difficulty dynamically, governed by the player's performance. Modern video games utilise a game-testing process to investigate, among other factors, the perceived difficulty for a multitude of players. In this paper, we investigate how machine learning techniques can be used for automatic difficulty adjustment. Our experiments confirm the potential of machine learning in this application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vins, M.
This contribution overviews a neutron spectrum measurement performed on the training reactor VR-1 Sparrow with a new nuclear fuel. The former nuclear fuel IRT-3M was exchanged for the current nuclear fuel IRT-4M with lower 235U enrichment (reduced from the former 36% to 20%) in terms of the Reduced Enrichment for Research and Test Reactors (RERTR) Program. The neutron spectrum measurement was obtained by irradiation of activation foils at the end of the pipe of the rabbit system and consecutive deconvolution of the obtained saturated activities. Deconvolution was performed with the iterative computer code SAND-II using a 620-group structure. All gamma measurements were performed on a Canberra HPGe detector. Activation foils were chosen according to physical and nuclear parameters from a set of certificated foils. The resulting differential flux at the end of the pipe of the rabbit system agreed well with the typical spectrum of a light water reactor. The measurement of the neutron spectrum has brought better knowledge about the new reactor core C1 and improved the methodology of activation measurement. (author)
Tumpa, Anja; Stajić, Ana; Jančić-Stojanović, Biljana; Medenica, Mirjana
2017-02-05
This paper deals with the development of a hydrophilic interaction liquid chromatography (HILIC) method with gradient elution in accordance with Analytical Quality by Design (AQbD) methodology, for the first time. The method is developed for olanzapine and its seven related substances. Following the AQbD methodology step by step, the temperature, the starting content of the aqueous phase and the duration of the linear gradient are first recognized as critical process parameters (CPPs), and the separation criteria S of the critical pairs of substances are investigated as critical quality attributes (CQAs). A Rechtschaffen design is used for the creation of models that describe the dependence between the CPPs and CQAs. The design space obtained at the end is used for choosing the optimal conditions (set point). Finally, the method is fully validated to verify the adequacy of the chosen optimal conditions and applied to real samples. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Campos, Carmina del Rio; Horche, Paloma R.; Martin-Minguez, Alfredo
2011-03-01
Because the metro network market is very cost sensitive, directly modulated schemes appear attractive. In this paper a CWDM (Coarse Wavelength Division Multiplexing) system is studied in detail by means of optical communication system design software; a detailed study of the modulation current shape (exponential, sine and Gaussian) for 2.5 Gb/s CWDM Metropolitan Area Networks is performed to evaluate its tolerance to linear impairments such as signal-to-noise-ratio degradation and dispersion. Point-to-point links are investigated and optimum design parameters are obtained. Through extensive sets of simulation results, it is shown that some of these pulse shapes are more tolerant to dispersion when compared with conventional Gaussian pulse shapes. In order to achieve a low Bit Error Rate (BER), different types of optical transmitters are considered, including strongly adiabatic and transient chirp dominated Directly Modulated Lasers (DMLs). We have used fibers with different dispersion characteristics, showing that the system performance depends strongly on the chosen DML-fiber couple.
Denaturation of proteins near polar surfaces
NASA Astrophysics Data System (ADS)
Starzyk, Anna; Cieplak, Marek
2011-12-01
All-atom molecular dynamics simulations for proteins placed near a model mica surface indicate the existence of two types of evolution. One type leads to surface-induced unfolding and the other just to a deformation. The two behaviors are characterized by distinct properties of the radius of gyration and of a novel distortion parameter that distinguishes between elongated, globular, and planar shapes. They also differ in the nature of their single-site diffusion and two-site distance fluctuations. The four proteins chosen for the studies, the tryptophan cage, protein G, hydrophobin and lysozyme, are small, allowing for a fair determination of the forces generated by the surface, as the effects of finite cutoffs in the Coulombic interactions are thus minimized. When the net charge on the surface is set to zero artificially, deformation is still inflicted but no unfolding takes place. Unfolding may also be prevented by a cluster of disulfide bonds, as we observe in simulations of hydrophobin.
Super-resolution for scanning light stimulation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bitzer, L. A.; Neumann, K.; Benson, N., E-mail: niels.benson@uni-due.de
Super-resolution (SR) is a technique used in digital image processing to overcome the resolution limitation of imaging systems. In this process, a single high resolution image is reconstructed from multiple low resolution images. SR is commonly used for CCD and CMOS (Complementary Metal-Oxide-Semiconductor) sensor images, as well as for medical applications, e.g., magnetic resonance imaging. Here, we demonstrate that super-resolution can be applied with scanning light stimulation (LS) systems, which are commonly used to obtain space-resolved electro-optical parameters of a sample. For our purposes, the Projection Onto Convex Sets (POCS) algorithm was chosen and modified to suit the needs of LS systems. To demonstrate the SR adaptation, an Optical Beam Induced Current (OBIC) LS system was used. The POCS algorithm was optimized by means of OBIC short circuit current measurements on a multicrystalline solar cell, resulting in a mean square error reduction of up to 61% and improved image quality.
NASA Technical Reports Server (NTRS)
Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg
2003-01-01
The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
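The GA loop described maps onto a few generic operations; a schematic sketch in which the fitness, crossover, and mutation callables stand in for NEVOT's subsystem-level versions (the toy usage at the end is purely illustrative):

```python
import random

def genetic_search(fitness, random_design, crossover, mutate,
                   pop_size=50, generations=100, p_mut=0.1):
    """Generic GA loop of the kind described: low-fitness designs are
    eliminated and replaced by combinations/mutations of fitter ones."""
    population = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: pop_size // 2]          # fitness-based survival
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = crossover(a, b)                  # combine subsystem designs
            if random.random() < p_mut:
                child = mutate(child)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Toy usage: maximize -(x - 3)^2 over a single 'design parameter'.
best = genetic_search(
    fitness=lambda d: -(d[0] - 3.0) ** 2,
    random_design=lambda: [random.uniform(-10, 10)],
    crossover=lambda a, b: [(a[0] + b[0]) / 2],
    mutate=lambda d: [d[0] + random.gauss(0, 1)],
)
print(best)
```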
Developing a Long-term Monitoring Program with Undergraduate Students in Marine Sciences
NASA Astrophysics Data System (ADS)
Anders, T. M.; Boryta, M. D.
2015-12-01
A goal of our growing marine geoscience program at Mt. San Antonio College is to involve our students in all stages of developing and running an undergraduate research project. During the initial planning phase, students develop and test their proposals. Instructor-set parameters were chosen carefully to help guide students toward manageable projects without limiting their creativity. Projects should focus on long-term monitoring of a coastal area in southern California. During the second phase, incoming students will critique the initial proposals, modify them as necessary and continue to develop the project. We intend for data collection opportunities to grow from geological and oceanographic bases to eventually include other STEM topics in biology, chemistry, math and GIS. Questions we will address include: What makes this a good research project for a community college? What are the costs and time commitments involved? How will the project benefit students and society? Additionally we will share our initial results, challenges, and unexpected pitfalls and benefits.
ACCELERATORS: Beam based alignment of the SSRF storage ring
NASA Astrophysics Data System (ADS)
Zhang, Man-Zhou; Li, Hao-Hu; Jiang, Bo-Cheng; Liu, Gui-Min; Li, De-Ming
2009-04-01
There are 140 beam position monitors (BPMs) in the Shanghai Synchrotron Radiation Facility (SSRF) storage ring used for measuring the closed orbit. As the BPM pickup electrodes are assembled directly on the vacuum chamber, it is important to calibrate the electrical center offset of each BPM to an adjacent quadrupole magnetic center. A beam based alignment (BBA) method, which varies individual quadrupole magnet strengths and observes the effects on the orbit, is used to measure the BPM offsets in both the horizontal and vertical planes. It is a completely automated technique with various data processing methods. Several parameters, such as the strength changes of the correctors and the quadrupoles, must be chosen carefully in a real measurement. After several rounds of BBA measurement and closed orbit correction, these offsets were determined to an accuracy better than 10 μm. In this paper we present the method of beam based calibration of BPMs, the experimental results for the SSRF storage ring, and the error analysis.
Making Ternary Quantum Dots From Single-Source Precursors
NASA Technical Reports Server (NTRS)
Bailey, Sheila; Banger, Kulbinder; Castro, Stephanie; Hepp, Aloysius
2007-01-01
A process has been devised for making ternary (specifically, CuInS2) nanocrystals for use as quantum dots (QDs) in a contemplated next generation of high-efficiency solar photovoltaic cells. The process parameters can be chosen to tailor the sizes (and, thus, the absorption and emission spectra) of the QDs.
Simulating and Testing a DC-DC Half-Bridge SLR Converter
2013-06-01
[OCR residue from the report's contents pages and a figure removed. Recoverable entries: F. Trial 4 (1. Parameters …); E. Closed Form Equations; F. Transformer Theory; a figure plotting a quantity against frequency from 30 to 1000 kHz, with a caption noting that once one is chosen, the calculation of primary and secondary … follows.]
Predicting subsurface uranium transport: Mechanistic modeling constrained by experimental data
NASA Astrophysics Data System (ADS)
Ottman, Michael; Schenkeveld, Walter D. C.; Kraemer, Stephan
2017-04-01
Depleted uranium (DU) munitions and their widespread use throughout conflict zones around the world pose a persistent health threat to the inhabitants of those areas long after the conclusion of active combat. However, little emphasis has been put on developing a comprehensive, quantitative tool for use in remediation and hazard avoidance planning in a wide range of environments. In this context, we report experimental data on U interaction with soils and sediments. Here, we strive to improve existing risk assessment modeling paradigms by incorporating a variety of experimental data into a mechanistic U transport model for subsurface environments. 20 different soils and sediments from a variety of environments were chosen to represent a range of geochemical parameters that are relevant to U transport. The parameters included pH, organic matter content, CaCO3 content, Fe content and speciation, and clay content. pH ranged from 3 to 10, organic matter content from 6 to 120 g kg-1, CaCO3 from 0 to 700 g kg-1, amorphous Fe content from 0.3 to 6 g kg-1 and clay content from 4 to 580 g kg-1. Sorption experiments were then performed, and linear isotherms were constructed. The sorption experiment results show that, among separate sets of sediments and soils, both soil pH and CaCO3 concentration are inversely correlated with U sorptive affinity. The geological materials with the highest and lowest sorptive affinities for U differed in CaCO3 and organic matter concentrations, as well as in clay content and pH. In a further step, we are testing whether transport behavior in saturated porous media can be predicted based on adsorption isotherms and generic geochemical parameters, and comparing these modeling predictions with the results from column experiments. The comparison of these two data sets will examine whether U transport can be effectively predicted from reactive transport modeling that incorporates the generic geochemical parameters. This work will serve to show whether a more mechanistic approach offers an improvement over statistical regression-based risk assessment models.
A robust set of black walnut microsatellites for parentage and clonal identification
Rodney L. Robichaud; Jeffrey C. Glaubitz; Olin E. Rhodes; Keith Woeste
2006-01-01
We describe the development of a robust and powerful suite of 12 microsatellite marker loci for use in genetic investigations of black walnut and related species. These 12 loci were chosen from a set of 17 candidate loci used to genotype 222 trees sampled from a 38-year-old black walnut progeny test. The 222 genotypes represent a sampling from the broad geographic...
Provably secure Rabin-p cryptosystem in hybrid setting
NASA Astrophysics Data System (ADS)
Asbullah, Muhammad Asyraf; Ariffin, Muhammad Rezal Kamel
2016-06-01
In this work, we design an efficient and provably secure hybrid cryptosystem given by a combination of the Rabin-p cryptosystem with an appropriate symmetric encryption scheme. We set up a hybrid structure which is proven secure in the sense of indistinguishability against chosen-ciphertext attack. We presume that the integer factorization problem is hard and that the hash function is modeled as a random function.
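The general shape of such a hybrid (KEM/DEM) construction can be sketched as follows; note that textbook Rabin squaring stands in for Rabin-p here and a hash-counter keystream stands in for the symmetric scheme, both illustrative assumptions rather than the paper's construction (decryption, which recovers x from c1 using the secret factorization, is omitted):

```python
import hashlib
import os

def hybrid_encrypt(n, plaintext):
    """Hybrid encryption sketch: Rabin-style encapsulation + symmetric part."""
    x = int.from_bytes(os.urandom(64), "big") % n  # random session value
    c1 = pow(x, 2, n)                              # asymmetric encapsulation
    key = hashlib.sha256(x.to_bytes(128, "big")).digest()  # hash as random oracle
    # Derive a keystream from the key with counter blocks, then XOR (DEM part).
    blocks = len(plaintext) // 32 + 1
    stream = b"".join(hashlib.sha256(key + i.to_bytes(4, "big")).digest()
                      for i in range(blocks))
    c2 = bytes(p ^ s for p, s in zip(plaintext, stream))
    return c1, c2

# Toy modulus for demonstration only; real moduli are large.
print(hybrid_encrypt(77, b"attack at dawn"))
```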
Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.
Jiménez, Fernando; Sánchez, Gracia; Juárez, José M
2014-03-01
This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient's data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA) and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient's data set from an intensive care burn unit and a standard machine learning data set from a standard machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, specificity of 0.9385, and sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Our proposal improves the accuracy and interpretability of the classifiers, compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is based on real-parameter rather than combinatorial optimization, the time cost is significantly reduced compared with other evolutionary approaches in the literature that are based on combinatorial optimization. Copyright © 2014 Elsevier B.V. All rights reserved.
Ion-irradiation-induced densification of zirconia sol-gel thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levine, T.E.; Giannelis, E.P.; Kodali, P.
1994-02-01
We have investigated the densification behavior of sol-gel zirconia films resulting from ion irradiation. Three sets of films were implanted with neon, krypton, or xenon. The ion energies were chosen to yield approximately constant energy loss through the film and the doses were chosen to yield similar nuclear energy deposition. Ion irradiation of the sol-gel films resulted in carbon and hydrogen loss as indicated by Rutherford backscattering spectrometry and forward recoil energy spectroscopy. Although the densification was hypothesized to result from target atom displacement, the observed densification exhibits a stronger dependence on electronic energy deposition.
Plans for Aeroelastic Prediction Workshop
NASA Technical Reports Server (NTRS)
Heeg, Jennifer; Ballmann, Josef; Bhatia, Kumar; Blades, Eric; Boucke, Alexander; Chwalowski, Pawel; Dietz, Guido; Dowell, Earl; Florance, Jennifer P.; Hansen, Thorsten;
2011-01-01
This paper summarizes the plans for the first Aeroelastic Prediction Workshop. The workshop is designed to assess the state of the art of computational methods for predicting unsteady flow fields and aeroelastic response. The goals are to provide an impartial forum to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify computational and experimental areas needing additional research and development. Three subject configurations have been chosen from existing wind tunnel data sets where pertinent experimental data are available for comparison. For each case chosen, the wind tunnel testing was conducted using forced oscillation of the model at specified frequencies.
A 3D Visualization and Analysis Model of the Earth Orbit, Milankovitch Cycles and Insolation.
NASA Astrophysics Data System (ADS)
Kostadinov, Tihomir; Gilb, Roy
2013-04-01
Milankovitch theory postulates that periodic variability of Earth's orbital elements is a major climate forcing mechanism. Although controversies remain, ample geologic evidence supports the major role of the Milankovitch cycles in climate, e.g. glacial-interglacial cycles. There are three Milankovitch orbital parameters: orbital eccentricity (main periodicities of ~100,000 and ~400,000 years), precession (quantified as the longitude of perihelion, main periodicities 19,000-24,000 years) and obliquity of the ecliptic (Earth's axial tilt, main periodicity 41,000 years). The combination of these parameters controls the spatio-temporal patterns of incoming solar radiation (insolation) and the timing of the seasons with respect to perihelion, as well as season duration. The complex interplay of the Milankovitch orbital parameters on various time scales makes assessment and visualization of Earth's orbit and insolation variability challenging. It is difficult to appreciate the pivotal importance of Kepler's laws of planetary motion in controlling the effects of Milankovitch cycles on insolation patterns. These factors also make Earth-Sun geometry and Milankovitch theory difficult to teach effectively. Here, an astronomically precise and accurate Earth orbit visualization model is presented. The model offers 3D visualizations of Earth's orbital geometry, Milankovitch parameters and the ensuing insolation forcings. Both research and educational uses are envisioned for the model, which is developed in Matlab® as a user-friendly graphical user interface (GUI). We present the user with a choice between the Berger et al. (1978) and Laskar et al. (2004) astronomical solutions for eccentricity, obliquity and precession. A "demo" mode is also available, which allows the three Milankovitch parameters to be varied independently of each other (and over much larger ranges than the naturally occurring ones), so the user can isolate the effects of each parameter on orbital geometry, the seasons, and insolation. Users select a calendar date and the Earth is placed in its orbit using Kepler's laws; the calendar can be started on either vernal equinox (March 20) or perihelion (Jan. 3). Global insolation is computed as a function of latitude and day of year, using the chosen Milankovitch parameters. 3D surface plots of insolation and insolation anomalies (with respect to J2000) are then produced. Insolation computations use the model's own orbital geometry with no additional a-priori input other than the Milankovitch parameter solutions. Insolation computations are successfully validated against Laskar et al. (2004) values. The model outputs other relevant parameters as well, e.g. Earth's radius-vector length, solar declination and day length for the chosen date and latitude. Time-series plots of the Milankovitch parameters and EPICA ice core CO2 and temperature data can be produced. Envisioned future developments include computational efficiency improvements, more options for insolation plots on user-chosen spatio-temporal scales, and overlaying additional paleoclimatological proxy data.
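The daily-mean insolation computation described above reduces to a closed-form expression once the orbital parameters fix the solar declination and the Earth-Sun distance. The following Python sketch implements the standard Milankovitch formulation (in the spirit of Berger 1978); it is an illustration consistent with the abstract, not the model's Matlab code, and the solar constant value is an assumption.

```python
import numpy as np

S0 = 1361.0  # W m-2, assumed solar constant

def daily_insolation(lat_deg, solar_lon_deg, ecc, obliq_deg, lon_perih_deg):
    """Daily-mean top-of-atmosphere insolation (W m-2) at a given latitude
    and true solar longitude (0 = vernal equinox). All three Milankovitch
    parameters are explicit inputs, so each can be varied independently,
    as in the model's 'demo' mode."""
    phi = np.radians(lat_deg)
    lam = np.radians(solar_lon_deg)
    eps = np.radians(obliq_deg)
    varpi = np.radians(lon_perih_deg)
    # solar declination from obliquity and solar longitude
    delta = np.arcsin(np.sin(eps) * np.sin(lam))
    # inverse-square distance factor from the Keplerian orbit geometry
    dist = ((1.0 + ecc * np.cos(lam - varpi)) / (1.0 - ecc**2)) ** 2
    # hour angle of sunrise/sunset, clipped for polar day and polar night
    h0 = np.arccos(np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0))
    return (S0 / np.pi) * dist * (h0 * np.sin(phi) * np.sin(delta)
                                  + np.cos(phi) * np.cos(delta) * np.sin(h0))

# 65N at the June solstice (solar longitude 90 deg), J2000-like parameters
print(daily_insolation(65.0, 90.0, 0.0167, 23.44, 282.95))  # ~ 480 W m-2
```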
NASA Astrophysics Data System (ADS)
Nishimura, Tomoaki
2016-03-01
A computer simulation program for ion scattering and its graphical user interface (MEISwin) has been developed. Using this program, researchers have analyzed medium-energy ion scattering and Rutherford backscattering spectrometry at Ritsumeikan University since 1998, and at Rutgers University since 2007. The main features of the program are as follows: (1) stopping power can be chosen from five datasets spanning several decades (from 1977 to 2011), (2) straggling can be chosen from two datasets, (3) spectral shape can be selected as Gaussian or exponentially modified Gaussian, (4) scattering cross sections can be selected as Coulomb or screened, (5) simulations adopt the resonant elastic scattering cross section of ¹⁶O(⁴He, ⁴He)¹⁶O, (6) pileup simulation for RBS spectra is supported, (7) natural and specific isotope abundances are supported, and (8) the charge fraction can be chosen from three patterns (fixed, energy-dependent, and ion fraction with charge-exchange parameters for medium-energy ion scattering). This study demonstrates and discusses the simulations and their results.
High Pressure Water Stripping Using Multi-Orifice Nozzles
NASA Technical Reports Server (NTRS)
Hoppe, David
1999-01-01
The use of multi-orifice rotary nozzles greatly increases the speed and stripping effectiveness of high pressure water blasting systems, but also greatly increases the complexity of selecting and optimizing the operating parameters. The rotational speed of the nozzle must be coupled with its transverse velocity as it passes across the surface of the substrate being stripped. The radial and angular positions of each orifice must be included in the analysis of the nozzle configuration. Orifices at the outer edge of the nozzle head move at a faster rate than the orifices located near the center. The energy transmitted to the surface from the impact force of the water stream from an outer orifice is therefore spread over a larger area than energy from an inner orifice. Utilizing a larger diameter orifice in the outer radial positions increases the total energy transmitted from the outer orifice to compensate for the wider distribution of energy. The total flow rate from the combination of all orifices must be monitored and should be kept below the pump capacity while choosing the orifice to insert in each position. The energy distribution from the orifice pattern is further complicated since the rotary paths of all the orifices in the nozzle head pass through the center section. All orifices contribute to the stripping in the center of the path, while only the outermost orifice contributes to the stripping at the edge of the nozzle. Additional orifices contribute to the stripping from the outer edge toward the center section. With all these parameters to configure and each parameter change affecting the others, a computer model was developed to track and coordinate these parameters. The computer simulation graphically indicates the cumulative effect of each parameter selected. The result of proper parameter choices is a well-designed, highly efficient stripping system. A poorly chosen set of parameters will cause the nozzle to strip aggressively in some areas while leaving the coating untouched in adjacent sections. The high pressure water stripping system can be set to extremely aggressive conditions allowing stripping of hard-to-remove adhesives, paint systems, and even cladding and chromate conversion coatings. The energy force can also be reduced to strip coatings from thin aluminum substrates without causing any damage or deterioration to the substrate's surface. High pressure water stripping of aerospace components has thus proven to be an efficient and cost effective method for cleaning and removing coatings.
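The pump-capacity check described above amounts to summing the standard orifice-equation flow over every orifice in the head. A minimal Python sketch, in which the discharge coefficient, orifice sizes, pressure and pump capacity are assumed, illustrative values:

```python
import math

RHO = 1000.0  # kg/m^3, water
CD = 0.7      # assumed discharge coefficient

def orifice_flow_lpm(diameter_mm, pressure_mpa):
    """Flow (L/min) through one orifice from Q = Cd * A * sqrt(2*dP/rho)."""
    area = math.pi * (diameter_mm / 1000.0) ** 2 / 4.0
    q = CD * area * math.sqrt(2.0 * pressure_mpa * 1e6 / RHO)  # m^3/s
    return q * 60000.0

def within_pump_capacity(orifice_diams_mm, pressure_mpa, pump_lpm):
    """Sum the flow over all orifices in the head and check the pump limit."""
    total = sum(orifice_flow_lpm(d, pressure_mpa) for d in orifice_diams_mm)
    return total, total <= pump_lpm

# hypothetical head with larger orifices at the outer radial positions
total, ok = within_pump_capacity([0.4, 0.4, 0.6, 0.8], 250.0, 40.0)
print(f"total flow {total:.1f} L/min, within pump capacity: {ok}")
```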
High Pressure Water Stripping Using Multi-Orifice Nozzles
NASA Technical Reports Server (NTRS)
Hoppe, David T.
1998-01-01
The use of multi-orifice rotary nozzles not only increases the speed and stripping effectiveness of high pressure water blasting systems, but also greatly increases the complexity of selecting and optimizing the operating parameters. The rotational speed of the nozzle must be coupled with the transverse velocity of the nozzle as it passes across the surface of the substrate being stripped. The radial and angular positions of each orifice must be included in the analysis of the nozzle configuration. Since orifices at the outer edge of the nozzle head move at a faster rate than the orifice located near the center, the energy impact force of the water stream from the outer orifice is spread over a larger area than the water streams from the inner orifice. Utilizing a larger diameter orifice in the outer radial positions increases the energy impact to compensate for its wider force distribution. The total flow rate from the combination of orifices must be monitored and kept below the pump capacity while choosing an orifice to insert in each position. The energy distribution from the orifice pattern is further complicated since the rotary paths of all orifices in the nozzle head pass through the center section, contributing to the stripping in this area, while only the outermost orifice contributes to the stripping in the shell area at the extreme outside edge of the nozzle. From the outermost shell to the center section, more orifices contribute to the stripping in each progressively reduced diameter shell. With all these parameters to configure and each parameter change affecting the others, a computer model was developed to track and coordinate these parameters. The computer simulation responds by graphically indicating the cumulative effect of each parameter selected. The result of proper parameter choices is a well-designed, highly efficient stripping system. A poorly chosen set of parameters will cause the nozzle to strip aggressively in some areas while leaving the coating untouched in adjacent sections. The high pressure water stripping system can be set to extremely aggressive conditions allowing stripping of hard-to-remove adhesives, paint systems, cladding and chromate conversion coatings. The energy force can be reduced to strip coatings from thin aluminum substrates without causing damage or deterioration to the substrate's surface. High pressure water stripping of aerospace components has thus proven to be an efficient and cost effective method for cleaning and removing coatings.
Model reductions using a projection formulation
NASA Technical Reports Server (NTRS)
De Villemagne, Christian; Skelton, Robert E.
1987-01-01
A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space and the orthogonal complement of whose null space are chosen among the ranges of generalized controllability and observability matrices. The reduced-order models match various combinations (chosen by the designer) of four types of parameters of the full-order system associated with (1) low frequency response, (2) high frequency response, (3) low frequency power spectral density, and (4) high frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, is flexible enough to embrace combinations of existing methods, and offers some new features.
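The mechanics of such a reduction can be sketched compactly: given bases V and W, the oblique projector yields the reduced matrices directly. The Python illustration below uses assumed Krylov-type bases, not the paper's specific parameter-matching choices:

```python
import numpy as np

def oblique_projection_reduction(A, B, C, V, W):
    """Reduce (A, B, C) with the oblique projector P = V (W^T V)^{-1} W^T:
    V spans the retained range space, W fixes the complement of the null
    space. In the paper, V and W come from generalized controllability and
    observability matrices chosen to match the designer's parameters."""
    M = np.linalg.inv(W.T @ V)
    return M @ W.T @ A @ V, M @ W.T @ B, C @ V

# toy example: reduce a stable 6-state SISO system to 2 states
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) - 3.0 * np.eye(6)
B = rng.standard_normal((6, 1))
C = rng.standard_normal((1, 6))
V = np.hstack([B, A @ B])          # controllability-type directions
W = np.hstack([C.T, A.T @ C.T])    # observability-type directions
Ar, Br, Cr = oblique_projection_reduction(A, B, C, V, W)
print(Ar.shape, Br.shape, Cr.shape)  # (2, 2) (2, 1) (1, 2)
```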
A prototype Infrastructure for Cloud-based distributed services in High Availability over WAN
NASA Astrophysics Data System (ADS)
Bulfon, C.; Carlino, G.; De Salvo, A.; Doria, A.; Graziosi, C.; Pardi, S.; Sanchez, A.; Carboni, M.; Bolletta, P.; Puccio, L.; Capone, V.; Merola, L.
2015-12-01
In this work we present the architectural and performance studies concerning a prototype of a distributed Tier2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by the new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, uses a set of distributed hypervisors installed at both sites. The main parameter to be taken into account when managing two remote sites with a single framework is the effect of the latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a Virtual Machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are effective enough to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes of the network topology, thus creating a national network of Cloud-based distributed services in HA over WAN.
Measuring Constraint-Set Utility for Partitional Clustering Algorithms
NASA Technical Reports Server (NTRS)
Davidson, Ian; Wagstaff, Kiri L.; Basu, Sugato
2006-01-01
Clustering with constraints is an active area of machine learning and data mining research. Previous empirical work has convincingly shown that adding constraints to clustering improves the performance of a variety of algorithms. However, in most of these experiments, results are averaged over different randomly chosen constraint sets from a given set of labels, thereby masking interesting properties of individual sets. We demonstrate that constraint sets vary significantly in how useful they are for constrained clustering; some constraint sets can actually decrease algorithm performance. We create two quantitative measures, informativeness and coherence, that can be used to identify useful constraint sets. We show that these measures can also help explain differences in performance for four particular constrained clustering algorithms.
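As an illustration, informativeness can be sketched as the fraction of constraints violated by the clustering the algorithm produces without any constraints; the Python below assumes that definition (coherence, which measures distance-based agreement among the constraints themselves, is omitted):

```python
def informativeness(labels_unconstrained, must_link, cannot_link):
    """Fraction of constraints violated by the unconstrained clustering:
    constraints the algorithm already satisfies on its own carry little
    information. labels_unconstrained maps point id -> cluster id."""
    violated = sum(labels_unconstrained[i] != labels_unconstrained[j]
                   for i, j in must_link)
    violated += sum(labels_unconstrained[i] == labels_unconstrained[j]
                    for i, j in cannot_link)
    total = len(must_link) + len(cannot_link)
    return violated / total if total else 0.0

# toy example: unconstrained k-means grouped points 0 and 1, separated 2
labels = {0: 0, 1: 0, 2: 1}
print(informativeness(labels, must_link=[(0, 2)], cannot_link=[(0, 1)]))  # 1.0
```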
Lasting monitoring of immune state in patients with coronary atherosclerosis
NASA Astrophysics Data System (ADS)
Malinova, Lidia I.; Denisova, Tatyana P.; Tuchin, Valery V.
2007-02-01
Immune state monitoring is an expensive, invasive and sometimes difficult necessity in patients with different disorders. The dynamics of immune reactions in patients with coronary atherosclerosis provide one of the leading components for complication development, clinical course prognosis, and treatment and rehabilitation tactics. We chose intravenous glucose injection as a metabolic irritant in the following four groups of patients: men with proven coronary atherosclerosis (CA), men with non-insulin-dependent diabetes mellitus (NIDDM), men with a hereditary burden of CA and NIDDM, and practically healthy persons with long-livers in their ancestry. Immune state parameters such as leukocyte and lymphocyte counts, circulating immune complex levels, serum immunoglobulin levels, and HLA antigen markers were studied at 0, 30 and 60 minutes during glucose loading. To obtain continuous time functions of the studied parameters, the recorded data were approximated by high-degree polynomials, followed by computation of the first derivatives. Analysis of these time functions elucidated principally different dynamics of the studied parameters in the chosen groups of patients, which could not be obtained from comparison of the discrete data. Leukocyte and lymphocyte dynamics correlated with HLA antigen markers in all studied groups. Analytical estimation of the immune state in patients with coronary atherosclerosis shows the functional "margin of safety" of the immune system under glucose disturbance. The proposed method of analytical estimation can also be used for immune system monitoring in other groups of patients.
Lytton, William W; Neymotin, Samuel A; Hines, Michael L
2008-06-30
In an effort to design a simulation environment that is more similar to that of neurophysiology, we introduce a virtual slice setup in the NEURON simulator. The virtual slice setup runs continuously and permits parameter changes, including changes to synaptic weights and time courses and to intrinsic cell properties. The virtual slice setup permits shocks to be applied at chosen locations and activity to be sampled intra- or extracellularly from chosen locations. By default, a summed population display is shown during a run to indicate the level of activity, and no states are saved. Simulations can run for hours of model time; it is therefore not practical to save all of the state variables. These, in any case, are primarily of interest at discrete times when experiments are being run: the simulation can be stopped momentarily at such times to save activity patterns. The virtual slice setup maintains an automated notebook showing shocks and parameter changes as well as user comments. We demonstrate how interaction with a continuously running simulation encourages experimental prototyping and can suggest additional dynamical features such as ligand wash-in and wash-out, as alternatives to typical instantaneous parameter changes. The virtual slice setup currently uses event-driven cells and runs at approximately 2 min/h on a laptop.
Measuring viscosity with a resonant magnetic perturbation in the MST RFP
NASA Astrophysics Data System (ADS)
Fridström, Richard; Munaretto, Stefano; Frassinetti, Lorenzo; Chapman, Brett; Brunsell, Per; Sarff, John; MST Team
2016-10-01
Application of an m = 1 resonant magnetic perturbation (RMP) causes braking and locking of naturally rotating m = 1 tearing modes (TMs) in the MST RFP. The experimental TM dynamics are replicated by a theoretical model including the interaction between the RMP and multiple TMs [Fridström PoP 23, 062504 (2016)]. The viscosity is the only free parameter in the model, and it is chosen such that the model TM velocity evolution matches that of the experiment. The model does not depend on the means by which the natural rotation is generated. The chosen value of the viscosity, about 40 m²/s, is consistent with separate measurements in MST using a biased probe to temporarily spin up the plasma. This viscosity is about 100 times larger than the classical prediction, likely due to magnetic stochasticity in the core of these plasmas. Viscosity is a key parameter in visco-resistive MHD codes like NIMROD. The validation of these codes requires measurement of the viscosity over a broad parameter range, which will now be possible with the RMP technique that, unlike the biased probe, is not limited to low-energy-density plasmas. Estimation with the RMP technique of the viscosity in several MST discharges suggests that the viscosity decreases as the electron beta increases. Work supported by USDOE.
Sensitivity Analysis of Delft3d Simulations at Duck, NC, USA
NASA Astrophysics Data System (ADS)
Penko, A.; Boggs, S.; Palmsten, M.
2017-12-01
Our objective is to set up and test Delft3D, a high-resolution coupled wave and circulation model, to provide real-time nowcasts of hydrodynamics at Duck, NC, USA. Here, we test the sensitivity of the model to various parameters and boundary conditions. In order to validate the model simulations, we compared the results to observational data. Duck, NC was chosen as our test site due to the extensive array of observational oceanographic, bathymetric, and meteorological data collected by the Army Corps of Engineers Field Research Facility (FRF). Observations were recorded with Acoustic Wave and Current meters (AWAC) at 6-m and 11-m depths as well as a 17-m depth Waverider buoy. The model is set up with an outer and inner nested domain. The outer grid extends 12 km in the along-shore and 3.5 km in the cross-shore direction with a 50-m resolution and a maximum depth of 17 m. Spectral wave measurements from the 17-m Waverider buoy drove Delft3D-WAVE in the outer grid. We compared the results of five outer grid simulations to wave and current observations collected at the FRF. The model simulations were then compared to the wave and current measurements collected at the 6-m and 11-m AWACs. To determine the best parameters and boundary conditions for the model set up at Duck, we calculated the root mean square error (RMSE) between the simulation results and the observations. Several conclusions were drawn: 1) the addition of astronomic tides has a significant effect on the circulation magnitude and direction, 2) incorporating an updated bathymetry in the bottom boundary condition has a small effect in shallower (<8-m) depths, 3) decreasing the wave bed friction by 50% did not affect the wave predictions, and 4) the accuracy of the simulated wave heights improved as wind and wave forcing at the lateral boundaries were included.
Kern, Madalyn D; Ortega Alcaide, Joan; Rentschler, Mark E
2014-11-01
The objective of this work is to validate an experimental method and nondimensional model for characterizing the normal adhesive response between a polyvinyl chloride based synthetic biological tissue substrate and a flat, cylindrical probe with a smooth polydimethylsiloxane (PDMS) surface. The adhesion response is a critical mobility design parameter of a Robotic Capsule Endoscope (RCE) using PDMS treads to provide mobility to travel through the gastrointestinal tract for diagnostic purposes. Three RCE design characteristics were chosen as input parameters for the normal adhesion testing: pre-load, dwell time and separation rate. These parameters relate to the RCE's cross sectional dimension, tread length, and tread speed, respectively. An inscribed central composite design (CCD) prescribed 34 different parameter configurations to be tested. The experimental adhesion response curves were nondimensionalized by the maximum stress and total displacement values for each test configuration and a mean nondimensional curve was defined with a maximum relative error of 5.6%. A mathematical model describing the adhesion behavior as a function of the maximum stress and total displacement was developed and verified. A nonlinear regression analysis was done on the maximum stress and total displacement parameters and equations were defined as a function of the RCE design parameters. The nondimensional adhesion model is able to predict the adhesion curve response of any test configuration with a mean R² value of 0.995. Eight additional CCD studies were performed to obtain a qualitative understanding of the impact of tread contact area and synthetic material substrate stiffness on the adhesion response. These results suggest that the nondimensionalization technique for analyzing the adhesion data is sufficient for all values of probe radius and substrate stiffness within the bounds tested. This method can now be used for RCE tread design optimization given a set of environmental conditions for device operation. Copyright © 2014 Elsevier Ltd. All rights reserved.
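A minimal sketch of the nondimensionalization step, assuming each curve is a pair of stress and monotonically increasing displacement arrays; the function names and sample curves are illustrative, not the paper's data:

```python
import numpy as np

def nondimensionalize(stress, displacement):
    """Scale one adhesion response curve by its peak stress and its total
    (final) displacement, so curves from different pre-load, dwell time
    and separation rate configurations can be overlaid."""
    return stress / stress.max(), displacement / displacement[-1]

def mean_nondimensional_curve(curves, n=100):
    """Interpolate each scaled curve onto a common displacement grid and
    average; the spread about this mean is the reported relative error."""
    grid = np.linspace(0.0, 1.0, n)
    scaled = [nondimensionalize(s, d) for s, d in curves]
    stacked = [np.interp(grid, d, s) for s, d in scaled]
    return grid, np.mean(np.vstack(stacked), axis=0)

# two hypothetical curves: (stress array, displacement array)
d = np.linspace(0.0, 1.0, 50)
curves = [(np.sin(np.pi * d) * 80.0, d * 0.8),
          (np.sin(np.pi * d) * 120.0, d * 1.1)]
grid, mean_curve = mean_nondimensional_curve(curves)
print(mean_curve.max())  # both collapse onto the same unit curve, max = 1.0
```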
Fron Chabouis, Hélène; Chabouis, Francis; Gillaizeau, Florence; Durieux, Pierre; Chatellier, Gilles; Ruse, N Dorin; Attal, Jean-Pierre
2014-01-01
Operative clinical trials are often small and open-label. Randomization is therefore very important. Stratification and minimization are two randomization options in such trials. The first aim of this study was to compare stratification and minimization in terms of predictability and balance in order to help investigators choose the most appropriate allocation method. Our second aim was to evaluate the influence of various parameters on the performance of these techniques. The created software generated patients according to chosen trial parameters (e.g., number of important prognostic factors, number of operators or centers, etc.) and computed predictability and balance indicators for several stratification and minimization methods over a given number of simulations. Block size and proportion of random allocations could be chosen. A reference trial was chosen (50 patients, 1 prognostic factor, and 2 operators) and eight other trials derived from this reference trial were modeled. Predictability and balance indicators were calculated from 10,000 simulations per trial. Minimization performed better with complex trials (e.g., smaller sample size, increasing number of prognostic factors, and operators); stratification imbalance increased when the number of strata increased. An inverse correlation between imbalance and predictability was observed. A compromise between predictability and imbalance still has to be found by the investigator but our software (HERMES) gives concrete reasons for choosing between stratification and minimization; it can be downloaded free of charge. This software will help investigators choose the appropriate randomization method in future two-arm trials.
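For concreteness, a two-arm Pocock-Simon-style minimization with a biased coin can be sketched as below. This is a generic illustration of the technique the study simulates, not the HERMES implementation; the probability p_best and the factor structure are assumed.

```python
import random
from collections import defaultdict

def minimization_assign(patient, arm_counts, factors, p_best=0.8):
    """Two-arm minimization: tentatively place the patient in each arm,
    sum the resulting per-factor imbalances, and pick the arm with the
    smaller total, but only with probability p_best (the biased coin
    keeps the allocation partly unpredictable)."""
    scores = []
    for arm in (0, 1):
        score = 0
        for f in factors:
            counts = [arm_counts[a][f][patient[f]] for a in (0, 1)]
            counts[arm] += 1              # tentative assignment
            score += abs(counts[0] - counts[1])
        scores.append(score)
    if scores[0] == scores[1]:
        arm = random.randint(0, 1)        # tie: pure randomization
    else:
        best = scores.index(min(scores))
        arm = best if random.random() < p_best else 1 - best
    for f in factors:                     # record the assignment
        arm_counts[arm][f][patient[f]] += 1
    return arm

# arm_counts[arm][factor][level] = number of patients already assigned
arm_counts = [defaultdict(lambda: defaultdict(int)) for _ in range(2)]
for patient in [{"operator": "A"}, {"operator": "A"}, {"operator": "B"}]:
    print(minimization_assign(patient, arm_counts, ["operator"]))
```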
Thermal deterioration of virgin olive oil monitored by ATR-FTIR analysis of trans content.
Tena, Noelia; Aparicio, Ramón; García-González, Diego L
2009-11-11
The monitoring of frying oils by an effective and rapid method is one of the demands of food companies and small food retailers. In this work, a method based on ATR-FTIR has been developed for monitoring oil degradation during frying. The IR bands that change during frying in sunflower, soybean, and virgin olive oils have been examined for their linear relationship with the content of total polar compounds, which is a preferred parameter for frying control. The bands assigned to conjugated and isolated trans double bonds that are commonly used for the determination of trans content provided the best relationships. The area covering 978-960 cm⁻¹ was then chosen to build a model for predicting polar material content for the particular case of virgin olive oil. A virgin olive oil was heated for up to 94 h, and samples collected every 2 h constituted the training set. These samples were analyzed to obtain their FTIR spectra and to determine the fatty acid composition and the content of total polar compounds. The excellent performance in predicting the polar material content (adjusted R² = 0.997) was successfully validated with an external set of samples. The analysis of the fatty acid composition confirmed the relationship between the trans content and the content of total polar compounds.
Investigation into the performance of different models for predicting stutter.
Bright, Jo-Anne; Curran, James M; Buckleton, John S
2013-07-01
In this paper we have examined five possible models for the behaviour of the stutter ratio, SR. These were two log-normal models, two gamma models, and a two-component normal mixture model. A two-component normal mixture model was chosen with different behaviours of the variance; at each locus SR was described with two distributions, both with the same mean. The distributions have different variances: one for the majority of the observations and a second for the less well-behaved ones. We apply each model to a set of known single source Identifiler™, NGM SElect™ and PowerPlex® 21 DNA profiles to show the applicability of our findings to different data sets. SR determined from the single source profiles were compared to the calculated SR after application of the models. The model performance was tested by calculating the log-likelihoods and comparing the difference in Akaike information criterion (AIC). The two-component normal mixture model systematically outperformed all others, despite the increase in the number of parameters. This model, as well as performing well statistically, has intuitive appeal for forensic biologists and could be implemented in an expert system with a continuous method for DNA interpretation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
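A maximum-likelihood fit of the winning model, a shared-mean normal mixture with a well-behaved and an inflated variance component, can be sketched as follows. The parameterization and synthetic data are illustrative; the published analysis is per locus and may differ in detail.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_two_component(sr):
    """Fit a two-component normal mixture with common mean mu, sd s1 for
    well-behaved observations, inflated sd s2 > s1 for the rest, and
    mixing weight w; returns (mu, s1, s2, AIC)."""
    def nll(theta):
        mu, log_s1, log_ds, logit_w = theta
        s1 = np.exp(log_s1)
        s2 = s1 + np.exp(log_ds)              # enforce s2 > s1
        w = 1.0 / (1.0 + np.exp(-logit_w))
        pdf = w * norm.pdf(sr, mu, s1) + (1 - w) * norm.pdf(sr, mu, s2)
        return -np.sum(np.log(pdf + 1e-300))  # guard against log(0)
    res = minimize(nll, x0=[np.mean(sr), np.log(np.std(sr)), -1.0, 1.0],
                   method="Nelder-Mead")
    mu, log_s1, log_ds, _ = res.x
    aic = 2 * 4 + 2 * res.fun                 # 4 free parameters
    return mu, np.exp(log_s1), np.exp(log_s1) + np.exp(log_ds), aic

# synthetic stutter ratios at one locus: 90% tight, 10% less well-behaved
rng = np.random.default_rng(1)
sr = np.where(rng.random(500) < 0.9,
              rng.normal(0.08, 0.01, 500), rng.normal(0.08, 0.03, 500))
print(fit_two_component(sr))
```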
Experiments on robot-assisted navigated drilling and milling of bones for pedicle screw placement.
Ortmaier, T; Weiss, H; Döbele, S; Schreiber, U
2006-12-01
This article presents experimental results for robot-assisted navigated drilling and milling for pedicle screw placement. The preliminary study was carried out in order to gain first insights into positioning accuracies and machining forces during hands-on robotic spine surgery. Additionally, the results formed the basis for the development of a new robot for surgery. A simplified anatomical model is used to derive the accuracy requirements. The experimental set-up consists of a navigation system and an impedance-controlled lightweight robot holding the surgical instrument. The navigation system is used to position the surgical instrument and to compensate for pose errors during machining. Holes are drilled in artificial bone and bovine spine. A quantitative comparison of the drill-hole diameters was achieved using a computer. The interaction forces and pose errors are discussed with respect to the chosen machining technology and control parameters. Within the technological boundaries of the experimental set-up, it is shown that the accuracy requirements can be met and that milling is superior to drilling. It is expected that robot-assisted navigated surgery helps to improve the reliability of surgical procedures. Further experiments are necessary to take the whole workflow into account. Copyright 2006 John Wiley & Sons, Ltd.
Impact parameter smearing effects on isospin sensitive observables in heavy ion collisions
NASA Astrophysics Data System (ADS)
Li, Li; Zhang, Yingxun; Li, Zhuxia; Wang, Nan; Cui, Ying; Winkelbauer, Jack
2018-04-01
The validity of impact parameter estimation from the multiplicity of charged particles at low-intermediate energies is checked within the framework of the improved quantum molecular dynamics model. The simulations show that the multiplicity of charged particles cannot estimate the impact parameter of heavy ion collisions very well, especially for central collisions at beam energies lower than ~70 MeV/u, due to the large fluctuations of the multiplicity of charged particles. The simulation results for the central collisions defined by the charged particle multiplicity are compared to those obtained using an impact parameter b = 2 fm, and they show that the charge distribution for ¹¹²Sn+¹¹²Sn at a beam energy of 50 MeV/u differs evidently between the two cases; the chosen isospin sensitive observable, the coalescence invariant single neutron to proton yield ratio, is reduced by less than 15% for the neutron-rich systems ¹²⁴,¹³²Sn+¹²⁴Sn at E_beam = 50 MeV/u, while the coalescence invariant double neutron to proton yield ratio does not show an obvious difference. The sensitivity of the chosen isospin sensitive observables to effective mass splitting is studied for central collisions defined by the multiplicity of charged particles. Our results show that the sensitivity is enhanced for ¹³²Sn+¹²⁴Sn relative to that for ¹²⁴Sn+¹²⁴Sn, and this reaction system should be measured in future experiments to study the effective mass splitting by heavy ion collisions.
Potential accuracy of methods of laser Doppler anemometry in the single-particle scattering mode
NASA Astrophysics Data System (ADS)
Sobolev, V. S.; Kashcheeva, G. A.
2017-05-01
The potential accuracy of methods of laser Doppler anemometry is determined for the single-particle scattering mode, where the only disturbing factor is shot noise generated by the optical signal itself. The problem is solved by means of computer simulations with the maximum likelihood method. The initial parameters of the simulations are chosen to be the number of real or virtual interference fringes in the measurement volume of the anemometer, the signal discretization frequency, and some typical values of the signal-to-shot-noise ratio. The parameters to be estimated are the Doppler frequency as the basic parameter carrying information about the process velocity, the signal amplitude containing information about the size and concentration of scattering particles, and the instant when the particles arrive at the center of the measurement volume of the anemometer, which is needed for reconstruction of the examined flow velocity as a function of time. The estimates obtained in this study show that shot noise produces a minor effect (0.004-0.04%) on the frequency determination accuracy in the entire range of chosen values of the initial parameters. For the signal amplitude and the instant when the particles arrive at the center of the measurement volume of the anemometer, the errors induced by shot noise are in the interval of 0.2-3.5%; if the number of interference fringes is sufficiently large (more than 20), the errors do not exceed 0.2% regardless of the shot noise level.
The Visi-Chroma VC-100: a new imaging colorimeter for dermatocosmetic research.
Barel, A O; Clarys, P; Alewaeters, K; Duez, C; Hubinon, J L; Mommaerts, M
2001-02-01
It was the aim of this study to carry out a comparative evaluation in vitro on standardized color charts and in vivo on healthy subjects using the Visi-Chroma VC-100, a new imaging tristimulus colorimeter, and the Minolta Chromameter CR-200 as a reference instrument. The Visi-Chroma combines tristimulus color analysis with full color visualization of the skin area measured. The technical performances of both instruments were compared with the purpose of validating the use of this new imaging colorimeter in dermatocosmetic research. In vitro, L*a*b* color parameters were taken with both instruments on standardized color charts (Macbeth and RAL charts) in order to evaluate accuracy, sensitivity range and repeatability. These measurements were completed by in vivo studies on different sites of human skin and studies of color changes induced by topical chemical agents on forearm skin. The accuracy, sensitivity range and repeatability of measurements of selected distances and surfaces in the measuring zone considered, and specific color determinations of specific skin zones, were also determined. The technical performance of this imaging colorimeter was rather good, with low coefficients of variation for the repeatability of in vitro and in vivo color measurements. High positive correlations were established in vitro and in vivo over a wide range of color measurements. The imaging colorimeter was able to measure the L*a*b* color parameters of specific chosen parts of the skin area considered and to measure accurately selected distances and surfaces in the same skin site considered. These comparative measurements show that both instruments have very similar technical performances and that high levels of correlation were obtained in vitro and in vivo using the L*a*b* color parameters. In addition, the Visi-Chroma presents the following improvements: 1) direct visualization and recording of the skin area considered with concomitant color measurements; 2) determination of the specific color parameters of skin areas chosen in the total measuring area; and 3) accurate determination of selected distances and surfaces in the same skin areas chosen.
Petersen, Nanna; Stocks, Stuart; Gernaey, Krist V
2008-05-01
The main purpose of this article is to demonstrate that principal component analysis (PCA) and partial least squares regression (PLSR) can be used to extract information from particle size distribution data and predict rheological properties. Samples from commercially relevant Aspergillus oryzae fermentations conducted in 550 L pilot scale tanks were characterized with respect to particle size distribution, biomass concentration, and rheological properties. The rheological properties were described using the Herschel-Bulkley model. Estimation of all three parameters in the Herschel-Bulkley model (yield stress τ_y, consistency index K, and flow behavior index n) resulted in a large standard deviation of the parameter estimates. The flow behavior index was not found to be correlated with any of the other measured variables, and previous studies have suggested a constant value of the flow behavior index in filamentous fermentations. It was therefore decided to fix this parameter to its average value, thereby decreasing the standard deviation of the estimates of the remaining rheological parameters significantly. Using a PLSR model, a reasonable prediction of apparent viscosity μ_app, yield stress τ_y, and consistency index K could be made from the size distributions, biomass concentration, and process information. This provides a method with high predictive power for the rheology of fermentation broth, with the advantage over previous models that τ_y and K can be predicted as well as μ_app. Validation on an independent test set yielded a root mean square error of 1.21 Pa for τ_y, 0.209 Pa sⁿ for K, and 0.0288 Pa s for μ_app, corresponding to R² = 0.95, R² = 0.94, and R² = 0.95, respectively. Copyright 2007 Wiley Periodicals, Inc.
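With the flow behavior index fixed to its average, the Herschel-Bulkley fit τ = τ_y + K·γ̇ⁿ reduces to two free parameters. A Python sketch on synthetic rheogram data (the fixed n and all numerical values below are illustrative, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

N_FIXED = 0.45  # assumed flow behavior index, held at its average value

def herschel_bulkley(shear_rate, tau_y, K):
    """Herschel-Bulkley stress with n fixed: tau = tau_y + K * gamma^n."""
    return tau_y + K * shear_rate ** N_FIXED

# synthetic rheogram for illustration (shear rate in 1/s, stress in Pa)
gamma_dot = np.linspace(1.0, 100.0, 25)
tau = (3.0 + 1.5 * gamma_dot ** N_FIXED
       + np.random.default_rng(0).normal(0.0, 0.1, 25))

(tau_y, K), _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[1.0, 1.0])
mu_app = (tau_y + K * 50.0 ** N_FIXED) / 50.0   # apparent viscosity at 50 1/s
print(f"tau_y={tau_y:.2f} Pa, K={K:.2f} Pa s^n, mu_app={mu_app:.4f} Pa s")
```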
NASA Astrophysics Data System (ADS)
Klees, R.; Slobbe, D. C.; Farahani, H. H.
2018-03-01
The posed question arises for instance in regional gravity field modelling using weighted least-squares techniques, if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formula for the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with a regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
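Two of these estimators can be sketched as follows. The regularised-covariance form follows the standard weighted least-squares formula; the saddle-point system is an assumed rendering of an inversion-free formulation and may differ from the paper's exact expression. All data below are synthetic.

```python
import numpy as np

def wls_regularised_cov(A, y, C, alpha):
    """Standard weighted least squares with a Tikhonov-regularised noise
    covariance: x = (A^T W A)^{-1} A^T W y, where W = (C + alpha*I)^{-1}."""
    W = np.linalg.inv(C + alpha * np.eye(C.shape[0]))
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

def wls_inversion_free(A, y, C):
    """Assumed inversion-free formulation: solve the augmented saddle-point
    system [[C, A], [A^T, 0]] [lam; x] = [y; 0], which never forms C^{-1}
    (lam = C^{-1}(y - Ax) is introduced as an auxiliary variable)."""
    n, m = A.shape
    K = np.block([[C, A], [A.T, np.zeros((m, m))]])
    return np.linalg.solve(K, np.concatenate([y, np.zeros(m)]))[n:]

# toy ill-conditioned case: noise dominated by two long-wavelength modes
rng = np.random.default_rng(2)
n, m = 50, 3
A = rng.standard_normal((n, m))
G = rng.standard_normal((n, 2))
C = G @ G.T + 1e-8 * np.eye(n)            # near-singular covariance
y = A @ np.array([1.0, -2.0, 0.5]) + G @ rng.standard_normal(2)
print(wls_regularised_cov(A, y, C, alpha=1e-3))
print(wls_inversion_free(A, y, C))
```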
Lai, Zhi-Hui; Leng, Yong-Gang
2015-01-01
A two-dimensional Duffing oscillator which can produce stochastic resonance (SR) is studied in this paper. We introduce its SR mechanism and present a generalized parameter-adjusted SR (GPASR) model of this oscillator to accommodate the necessary parameter adjustments. The Kramers rate is chosen as the theoretical basis to establish a judgmental function for judging the occurrence of SR in this model, and to analyze and summarize the parameter-adjustment rules under unmatched signal amplitude, frequency, and/or noise intensity. Furthermore, we propose a weak-signal detection approach based on this GPASR model. Finally, we employ two practical examples to demonstrate the feasibility of the proposed approach in practical engineering applications. PMID:26343671
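A minimal Euler-Maruyama simulation of the kind of bistable Duffing system in which such noise-assisted amplification occurs is sketched below; the coefficients are illustrative, and the GPASR parameter-adjustment rules themselves are not reproduced here.

```python
import numpy as np

def duffing_sr(signal, dt, gamma=0.5, a=1.0, b=1.0, D=0.3, seed=0):
    """Integrate x'' = -gamma*x' + a*x - b*x**3 + s(t) + sqrt(2D)*xi(t),
    a double-well Duffing oscillator driven by a weak signal s(t) plus
    white noise; with a suitable noise intensity D, inter-well hopping
    locks to the drive (stochastic resonance)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(signal))
    v = 0.0
    for k in range(1, len(signal)):
        acc = -gamma * v + a * x[k-1] - b * x[k-1] ** 3 + signal[k-1]
        v += acc * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        x[k] = x[k-1] + v * dt
    return x

t = np.arange(0.0, 200.0, 0.01)
weak = 0.3 * np.sin(2 * np.pi * 0.05 * t)  # sub-threshold periodic signal
x = duffing_sr(weak, dt=0.01)              # response to inspect for SR
```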
Effect of diffuser vane shape on the performance of a centrifugal compressor stage
NASA Astrophysics Data System (ADS)
Reddy, T. Ch Siva; Ramana Murty, G. V.; Prasad, M. V. S. S. S. M.
2014-04-01
The present paper reports the results of experimental investigations on the effect of diffuser vane shape on the performance of a centrifugal compressor stage. These studies were conducted on the chosen stage having a backward-curved impeller of 500 mm tip diameter and 24.5 mm width, with a design flow coefficient of ϕd = 0.0535. Three different low solidity diffuser (LSD) vane shapes, namely uncambered aerofoil, constant-thickness flat plate, and circular-arc cambered constant-thickness plate, were chosen as the variants for diffuser vane shape; all three shapes have the same thickness-to-chord ratio (t/c = 0.1). Flow coefficient, polytropic efficiency, total head coefficient, power coefficient and static pressure recovery coefficient were chosen as the parameters for evaluating the effect of diffuser vane shape on the stage performance. The results show that there is reasonable improvement in stage efficiency and total head coefficient with the use of the chosen diffuser vane shapes as compared to a conventional vaneless diffuser. It is also noticed that the aerofoil-shaped LSD shows better performance when compared to the flat plate and circular arc profiles. The aerofoil vane shape of the diffuser blade is seen to be tolerant over a considerable range of incidence.
Opalko, K; Dojs, A
2006-01-01
The aim of the work was to evaluate the usefulness of slow variable magnetic fields in aiding the treatment of teeth chosen for extraction. The marginal periodontium and periapical bone of these teeth were in a state of extensive destruction, and the teeth had been scheduled for extraction. Thirteen patients were selected: 10 with endo-perio lesions and 3 who had suffered complete tooth luxation and had the teeth replanted. These patients had been referred for an extraction procedure or declared impossible to treat in other dental offices. Patients underwent non-aggressive scaling and endodontic treatment and were exposed to slow variable magnetic fields generated by the Viofor JPS device, according to the methods and parameters suggested by the Department of Propaedeutics in Dentistry of the Pomeranian Medical University in Szczecin. The healing process was evaluated radiologically. Radiographs taken after 2 weeks and after 2 months were assessed with respect to bone regeneration and showed densification of the bone structure. Radiographic evaluation after half a year, two years and three years showed preservation of the densified bone structure. The use of slow variable magnetic fields contributed to bone structure regeneration and to the preservation of teeth with a recorded endo-perio syndrome. Endodontic treatment of replanted teeth, aided by magnetostimulation, stopped the osteolysis process.
ERIC Educational Resources Information Center
Angoff, William H.; Modu, Christopher C.
The purpose of this study was to establish score equivalencies between the College Board Scholastic Aptitude Test (SAT) and its Spanish language equivalent, the College Board Prueba de Aptitud Academica (PAA). For the first phase, two sets of items, one originally appearing in Spanish and the other in English, were chosen; and each set was…
Evaluating the Predictivity of Virtual Screening for Abl Kinase Inhibitors to Hinder Drug Resistance
Gani, Osman A B S M; Narayanan, Dilip; Engh, Richard A
2013-01-01
Virtual screening methods are now widely used in early stages of drug discovery, aiming to rank potential inhibitors. However, any practical ligand set (of active or inactive compounds) chosen for deriving new virtual screening approaches cannot fully represent all relevant chemical space for potential new compounds. In this study, we have taken a retrospective approach to evaluate virtual screening methods for the leukemia target kinase ABL1 and its drug-resistant mutant ABL1-T315I. ‘Dual active’ inhibitors against both targets were grouped together with inactive ligands chosen from different decoy sets and tested with virtual screening approaches with and without explicit use of target structures (docking). We show how various scoring functions and choice of inactive ligand sets influence overall and early enrichment of the libraries. Although ligand-based methods, for example principal component analyses of chemical properties, can distinguish some decoy sets from active compounds, the addition of target structural information via docking improves enrichment, and explicit consideration of multiple target conformations (i.e. types I and II) achieves best enrichment of active versus inactive ligands, even without assuming knowledge of the binding mode. We believe that this study can be extended to other therapeutically important kinases in prospective virtual screening studies. PMID:23746052
Bürger, R; Gimelfarb, A
1999-01-01
Stabilizing selection for an intermediate optimum is generally considered to deplete genetic variation in quantitative traits. However, conflicting results from various types of models have been obtained. While classical analyses assuming a large number of independent additive loci with individually small effects indicated that no genetic variation is preserved under stabilizing selection, several analyses of two-locus models showed the contrary. We perform a complete analysis of a generalization of Wright's two-locus quadratic-optimum model and investigate numerically the ability of quadratic stabilizing selection to maintain genetic variation in additive quantitative traits controlled by up to five loci. A statistical approach is employed by choosing randomly 4000 parameter sets (allelic effects, recombination rates, and strength of selection) for a given number of loci. For each parameter set we iterate the recursion equations that describe the dynamics of gamete frequencies starting from 20 randomly chosen initial conditions until an equilibrium is reached, record the quantities of interest, and calculate their corresponding mean values. As the number of loci increases from two to five, the fraction of the genome expected to be polymorphic declines surprisingly rapidly, and the loci that are polymorphic increasingly are those with small effects on the trait. As a result, the genetic variance expected to be maintained under stabilizing selection decreases very rapidly with increased number of loci. The equilibrium structure expected under stabilizing selection on an additive trait differs markedly from that expected under selection with no constraints on genotypic fitness values. The expected genetic variance, the expected polymorphic fraction of the genome, as well as other quantities of interest, are only weakly dependent on the selection intensity and the level of recombination. PMID:10353920
Object-oriented classification of drumlins from digital elevation models
NASA Astrophysics Data System (ADS)
Saha, Kakoli
Drumlins are common elements of glaciated landscapes which are easily identified by their distinct morphometric characteristics including shape, length/width ratio, elongation ratio, and uniform direction. To date, most researchers have mapped drumlins by tracing contours on maps, or through on-screen digitization directly on top of hillshaded digital elevation models (DEMs). This paper seeks to utilize the unique morphometric characteristics of drumlins and investigates automated extraction of the landforms as objects from DEMs by Definiens Developer software (V.7), using the 30 m United States Geological Survey National Elevation Dataset DEM as input. The Chautauqua drumlin field in Pennsylvania and upstate New York, USA was chosen as a study area. As the study area is large (covering approximately 2500 sq. km), small test areas were selected for initial testing of the method. Individual polygons representing the drumlins were extracted from the elevation data set by automated recognition, using Definiens' Multiresolution Segmentation tool, followed by rule-based classification. Subsequently, parameters such as length, width, length-width ratio, perimeter and area were measured automatically. To test the accuracy of the method, a second base map was produced by manual on-screen digitization of drumlins from topographic maps, and the same morphometric parameters were extracted from the mapped landforms using Definiens Developer. Statistical comparison showed a high agreement between the two methods, confirming that object-oriented classification can be used for mapping drumlins. The proposed method represents an attempt to solve the problem by providing a generalized rule-set for mass extraction of drumlins. To check its scalability, the automated extraction process was next applied to a larger area. Results showed that the proposed method is as successful for the bigger area as it was for the smaller test areas.
[Evaluation of the capacity of work using upper limbs after radical latero-cervical surgery].
Capodaglio, P; Strada, M R; Grilli, C; Lodola, E; Panigazzi, M; Bernardo, G; Bazzini, G
1998-01-01
Evaluation of arm work capacity after radical neck surgery. The aim of this paper is to describe an approach for the assessment of work capacity in patients who underwent radical neck surgery, including those treated with radiation therapy. Nine male patients, who had undergone radical neck surgery 2 months before being referred to our Unit, participated in the study. In addition to a manual muscle strength test, we performed the following functional evaluations: the 0-100 Constant scale for shoulder function; maximal shoulder strength in adduction/abduction and intrarotation/extrarotation; and instrumental testing, measuring maximal isokinetic strength (10 repetitions) with a computerized dynamometer (Lido WorkSET) set at 100 degrees/sec. During the rehabilitation phase, the patients' mechanical parameters, perception of effort, pain or discomfort, and range of movement were monitored while performing daily/occupational tasks individually chosen on the simulator (Lido WorkSET) under isotonic conditions. On this basis, patients were encouraged to return to levels of daily physical activity compatible with the individually tolerable work load. The second evaluation at 2 months confirmed that the integrated rehabilitation protocol successfully increased patients' capacities and "trust" in their physical capacity. According to the literature, the use of isokinetic and isotonic exercise programs appears to decrease shoulder rehabilitation time. In our experience an excellent compliance was noted. One of the advantages of the proposed method is that it provides quantitative reports of functional capacity and therefore facilitates the return to work of patients who underwent radical neck surgery.
Parameterization of DFTB3/3OB for Sulfur and Phosphorus for Chemical and Biological Applications
2015-01-01
We report the parametrization of the approximate density functional tight binding method, DFTB3, for sulfur and phosphorus. The parametrization is done in a framework consistent with our previous 3OB set established for O, N, C, and H, thus the resulting parameters can be used to describe a broad set of organic and biologically relevant molecules. The 3d orbitals are included in the parametrization, and the electronic parameters are chosen to minimize errors in the atomization energies. The parameters are tested using a fairly diverse set of molecules of biological relevance, focusing on the geometries, reaction energies, proton affinities, and hydrogen bonding interactions of these molecules; vibrational frequencies are also examined, although less systematically. The results of DFTB3/3OB are compared to those from DFT (B3LYP and PBE), ab initio (MP2, G3B3), and several popular semiempirical methods (PM6 and PDDG), as well as predictions of DFTB3 with the older parametrization (the MIO set). In general, DFTB3/3OB is a major improvement over the previous parametrization (DFTB3/MIO), and for the majority of cases tested here, it also outperforms PM6 and PDDG, especially for structural properties, vibrational frequencies, hydrogen bonding interactions, and proton affinities. For reaction energies, DFTB3/3OB exhibits major improvement over DFTB3/MIO, due mainly to a significant reduction of errors in atomization energies; compared to PM6 and PDDG, DFTB3/3OB also generally performs better, although the magnitude of improvement is more modest. Compared to high-level calculations, DFTB3/3OB is most successful at predicting geometries; larger errors are found in the energies, although the results can be greatly improved by computing single point energies at a high level with DFTB3 geometries. There are several remaining issues with the DFTB3/3OB approach, most notably its difficulty in describing phosphate hydrolysis reactions involving a change in the coordination number of the phosphorus, for which a specific parametrization (3OB/OPhyd) is developed as a temporary solution; this suggests that the current DFTB3 methodology has limited transferability for complex phosphorus chemistry at the level of accuracy required for detailed mechanistic investigations. Therefore, fundamental improvements in the DFTB3 methodology are needed for a reliable method that describes phosphorus chemistry without ad hoc parameters. Nevertheless, DFTB3/3OB is expected to be a competitive QM method in QM/MM calculations for studying phosphorus/sulfur chemistry in condensed phase systems, especially as a low-level method that drives the sampling in a dual-level QM/MM framework. PMID:24803865
A photoelectric skylight polarimeter.
Hariharan, T A; Sekera, Z
1966-09-01
A photoelectric skylight polarimeter to measure directly the Stokes parameters for plane polarized light is described. The basic principle of the instrument consists in the simultaneous measurement of the intensity of light (in the chosen spectral region) transmitted by polarizers oriented in four specific directions. The main features and performance characteristics of the instrument are briefly discussed.
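From four such intensities the linear Stokes parameters follow directly; a minimal sketch, assuming ideal polarizers at 0, 45, 90 and 135 degrees (the circular component V is not sensed by linear polarizers alone):

```python
def stokes_from_four(i0, i45, i90, i135):
    """Linear Stokes parameters from the intensities transmitted by
    polarizers at 0, 45, 90 and 135 degrees; i0 + i90 and i45 + i135
    should agree and both equal the total intensity I."""
    I = i0 + i90
    Q = i0 - i90
    U = i45 - i135
    dolp = (Q ** 2 + U ** 2) ** 0.5 / I   # degree of linear polarization
    return I, Q, U, dolp

print(stokes_from_four(1.0, 0.75, 0.5, 0.75))  # (1.5, 0.5, 0.0, 0.333...)
```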
Ranking metrics in gene set enrichment analysis: do they matter?
Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna
2017-05-12
There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established, i.e. the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler test and the Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA. Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using the Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than other tested metrics, which may induce new biological discoveries.
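As an example of one of the four recommended metrics, the absolute signal-to-noise ratio can be computed per gene as below (a generic sketch with simulated data, not the MrGSEA code):

```python
import numpy as np

def signal_to_noise(expr, groups):
    """Per-gene |(m1 - m2) / (s1 + s2)| for a two-class comparison;
    expr has genes in rows and samples in columns, and `groups` is a
    boolean mask selecting class-1 samples."""
    a, b = expr[:, groups], expr[:, ~groups]
    diff = a.mean(axis=1) - b.mean(axis=1)
    spread = a.std(axis=1, ddof=1) + b.std(axis=1, ddof=1)
    return np.abs(diff / spread)

rng = np.random.default_rng(3)
expr = rng.normal(size=(1000, 20))             # 1000 genes, 20 samples
expr[:50, :10] += 1.0                          # first 50 genes shifted in class 1
groups = np.array([True] * 10 + [False] * 10)
ranking = np.argsort(-signal_to_noise(expr, groups))  # gene ranking for GSEA
print(ranking[:10])                            # mostly genes 0..49
```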
An adaptive control scheme for a flexible manipulator
NASA Technical Reports Server (NTRS)
Yang, T. C.; Yang, J. C. S.; Kudva, P.
1987-01-01
The problem of controlling a single link flexible manipulator is considered. A self-tuning adaptive control scheme is proposed which consists of a least squares on-line parameter identification of an equivalent linear model followed by a tuning of the gains of a pole placement controller using the parameter estimates. Since the initial parameter values for this model are assumed unknown, the use of arbitrarily chosen initial parameter estimates in the adaptive controller would result in undesirable transient effects. Hence, the initial stage control is carried out with a PID controller. Once the identified parameters have converged, control is transferred to the adaptive controller. Naturally, the relevant issues in this scheme are tests for parameter convergence and minimization of overshoots during control switch-over. To demonstrate the effectiveness of the proposed scheme, simulation results are presented with an analytical nonlinear dynamic model of a single link flexible manipulator.
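The abstract does not give the authors' exact identification equations; as a generic sketch of the on-line least-squares step such a scheme relies on, a standard recursive least-squares update for a scalar measurement y ≈ phi·theta (Python, numpy assumed):

    import numpy as np

    def rls_update(theta, P, phi, y, lam=1.0):
        # One recursive least-squares step; lam < 1 adds forgetting.
        Pphi = P @ phi
        k = Pphi / (lam + phi @ Pphi)          # gain vector
        theta = theta + k * (y - phi @ theta)  # correct the estimate
        P = (P - np.outer(k, Pphi)) / lam      # update covariance
        return theta, P

A convergence test on theta (e.g. small updates over a window) is the kind of criterion used to decide when to switch from the PID controller to the adaptive one.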
Four-parameter model for polarization-resolved rough-surface BRDF.
Renhorn, Ingmar G E; Hallberg, Tomas; Bergström, David; Boreman, Glenn D
2011-01-17
A modeling procedure is demonstrated, which allows representation of polarization-resolved BRDF data using only four parameters: the real and imaginary parts of an effective refractive index with an added parameter taking grazing incidence absorption into account and an angular-scattering parameter determined from the BRDF measurement of a chosen angle of incidence, preferably close to normal incidence. These parameters allow accurate predictions of s- and p-polarized BRDF for a painted rough surface, over three decades of variation in BRDF magnitude. To characterize any particular surface of interest, the measurements required to determine these four parameters are the directional hemispherical reflectance (DHR) for s- and p-polarized input radiation and the BRDF at a selected angle of incidence. The DHR data describes the angular and polarization dependence, as well as providing the overall normalization constraint. The resulting model conserves energy and fulfills the reciprocity criteria.
Li, Baoyue; Lingsma, Hester F; Steyerberg, Ewout W; Lesaffre, Emmanuel
2011-05-23
Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. We used individual patient data from 8509 patients in 231 centers with moderate and severe Traumatic Brain Injury (TBI) enrolled in eight Randomized Controlled Trials (RCTs) and three observational studies. We fitted logistic random effects regression models with the 5-point Glasgow Outcome Scale (GOS) as outcome, both dichotomized as well as ordinal, with center and/or trial as random effects, and as covariates age, motor score, pupil reactivity or trial. We then compared the implementations of frequentist and Bayesian methods to estimate the fixed and random effects. Frequentist approaches included R (lme4), Stata (GLLAMM), SAS (GLIMMIX and NLMIXED), MLwiN ([R]IGLS) and MIXOR; Bayesian approaches included WinBUGS, MLwiN (MCMC), the R package MCMCglmm and the SAS experimental procedure MCMC. Three data sets (the full data set and two sub-datasets) were analysed using essentially two logistic random effects models with either one random effect for the center or two random effects for center and trial. For the ordinal outcome in the full data set, a proportional odds model with a random center effect was also fitted. The packages gave similar parameter estimates for both the fixed and random effects and for the binary (and ordinal) models for the main study, when based on a relatively large number of level-1 (patient-level) data units compared to the number of level-2 (hospital-level) data units. However, when based on a relatively sparse data set, i.e., when the numbers of level-1 and level-2 data units were about the same, the frequentist and Bayesian approaches showed somewhat different results. The software implementations differ considerably in flexibility, computation time, and usability. There are also differences in the availability of additional tools for model evaluation, such as diagnostic plots. The experimental SAS (version 9.2) procedure MCMC appeared to be inefficient. On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no explicit preference (of course if there is no preference from a philosophical point of view) for either a frequentist or Bayesian approach (if based on vague priors). The choice for a particular implementation may largely depend on the desired flexibility, and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches the MLE of this variance was often estimated as zero with a standard error that is either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior of the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
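To make the shared model family concrete, a sketch of the marginal log-likelihood of one center under a random-intercept logistic model, approximated by Gauss-Hermite quadrature; each package above maximizes (or samples) this kind of quantity with its own, more refined numerics:

    import numpy as np
    from numpy.polynomial.hermite import hermgauss

    def cluster_loglik(y, X, beta, sigma, nodes=30):
        # Marginal log-likelihood of one cluster: the random intercept
        # b ~ N(0, sigma^2) is integrated out by Gauss-Hermite quadrature.
        y = np.asarray(y)
        x, w = hermgauss(nodes)              # rule for weight exp(-x^2)
        b = np.sqrt(2.0) * sigma * x         # rescale nodes to N(0, sigma^2)
        logit = (X @ beta)[:, None] + b[None, :]
        p = 1.0 / (1.0 + np.exp(-logit))     # P(y=1 | b_k), shape (n, nodes)
        lik = np.prod(np.where(y[:, None] == 1, p, 1.0 - p), axis=0)
        # For large clusters this product underflows; a serious
        # implementation works in log space instead.
        return np.log(np.sum(w * lik) / np.sqrt(np.pi))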
The Role of Food Selection in Swedish Home Economics: The Educational Visions and Cultural Meaning.
Höijer, Karin; Hjälmeskog, Karin; Fjellström, Christina
2014-01-01
This article explores foods talked about and chosen in the education of Swedish Home Economics as a relationship between structural processes and agency. Three data sets from observations and focus group interviews with teachers and students were analyzed for food classifications. These were related to a culinary triangle of contradictions, showing factors of identity, convenience and responsibility. Results show that foods talked about and chosen by teachers and students were reflections of dominant cultural values. Results also indicate that teachers had more agency than students, but that the choices they made were framed by educational visions and cultural values.
Numerical Solutions for Supersonic Flow of an Ideal Gas Around Blunt Two-Dimensional Bodies
NASA Technical Reports Server (NTRS)
Fuller, Franklyn B.
1961-01-01
The method described is an inverse one; the shock shape is chosen and the solution proceeds downstream to a body. Bodies blunter than circular cylinders are readily accessible, and any adiabatic index can be chosen. The lower limit to the free-stream Mach number available in any case is determined by the extent of the subsonic field, which in turn depends upon the body shape. Some discussion of the stability of the numerical processes is given. A set of solutions for flows about circular cylinders at several Mach numbers and several values of the adiabatic index is included.
NASA Astrophysics Data System (ADS)
Maiti, Amitesh; McGrother, Simon
2004-01-01
Dissipative particle dynamics (DPD) is a mesoscale modeling method for simulating equilibrium and dynamical properties of polymers in solution. The basic idea has been around for several decades in the form of bead-spring models. A few years ago, Groot and Warren [J. Chem. Phys. 107, 4423 (1997)] established an important link between DPD and the Flory-Huggins χ-parameter theory for polymer solutions. We revisit the Groot-Warren theory and investigate the DPD interaction parameters as a function of bead size. In particular, we show a consistent scheme of computing the interfacial tension in a segregated binary mixture. Results for three systems chosen for illustration are in excellent agreement with experimental results. This opens the door for determining DPD interactions using interfacial tension as a fitting parameter.
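For orientation, the commonly quoted form of the Groot-Warren mapping at bead density rho = 3 is chi ≈ 0.286(a_ij - a_ii), with the like-like repulsion a_ii = 25 chosen to reproduce the compressibility of water; a minimal sketch follows (the constants shift with density and, as the paper shows, with bead size):

    def dpd_cross_repulsion(chi, a_like=25.0, coeff=0.286):
        # Invert the linear Groot-Warren relation chi = coeff * (a_ij - a_ii),
        # as commonly quoted for bead density rho = 3.
        return a_like + chi / coeff

    # Example: chi = 1.0 gives a_ij of roughly 28.5 at rho = 3.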
Using aerial images for establishing a workflow for the quantification of water management measures
NASA Astrophysics Data System (ADS)
Leuschner, Annette; Merz, Christoph; van Gasselt, Stephan; Steidl, Jörg
2017-04-01
Quantified landscape characteristics, such as morphology, land use or hydrological conditions, play an important role in hydrological investigations, as landscape parameters directly control the overall water balance. Assimilation and geospatial analysis of remote sensing datasets in combination with hydrological modeling makes it possible to quantify landscape parameters and water balances efficiently. This study focuses on the development of a workflow to extract hydrologically relevant data from aerial image datasets and derived products in order to allow an effective parametrization of a hydrological model. Consistent and self-contained data sources are indispensable for achieving reasonable modeling results. To minimize uncertainties and inconsistencies, input parameters for modeling should, where possible, be extracted mainly from a single remote-sensing dataset. Here, aerial images have been chosen because their high spatial and spectral resolution permits the extraction of various model-relevant parameters, such as morphology, land use or artificial drainage systems. The methodological repertoire for extracting environmental parameters ranges from analyses of digital terrain models, through multispectral classification and segmentation of land-use distribution maps, to mapping of artificial drainage systems based on spectral and visual inspection. The workflow has been tested for a mesoscale catchment area which forms a characteristic hydrological system of a young moraine landscape located in the state of Brandenburg, Germany. These datasets were used as input for multi-temporal hydrological modelling of water balances to detect and quantify anthropogenic and meteorological impacts. ArcSWAT, a GIS-implemented extension and graphical user input interface for the Soil Water Assessment Tool (SWAT), was chosen. The results of this modeling approach provide the basis for anticipating the future development of the hydrological system and for adapting water resource management decisions to system changes.
Musik, Irena; Kocot, Joanna; Kiełczykowska, Małgorzata
2015-06-01
Selenium is an essential element with antioxidant properties. Lithium is widely used in medicine, but its administration can cause numerous side effects, including oxidative stress. The present study aimed at evaluating whether sodium selenite could influence chosen anti- and pro-oxidant parameters in rats treated with lithium. The experiment was performed on four groups of Wistar rats: I (control) - treated with saline; II (Li) - treated with lithium (2.7 mgLi/kg b.w. as Li2CO3); III (Se) - treated with selenium (0.5 mgSe/kg b.w. as Na2SeO3); IV (Li+Se) - treated with Li2CO3 and Na2SeO3 together at the same doses as in groups II and III, respectively. All treatments were performed by stomach tube for three weeks in the form of aqueous solutions. The following anti- and pro-oxidant parameters were measured: total antioxidant status (TAS) value, catalase (CAT) activity, concentrations of ascorbic acid (AA) and malonyldialdehyde (MDA) in plasma, as well as whole blood superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities. Selenium given alone markedly enhanced whole blood GPx and diminished plasma CAT vs. control. Lithium significantly decreased plasma CAT and slightly increased AA vs. control. Selenium co-administration restored these parameters to the values observed in control animals. Furthermore, selenium co-administration significantly increased GPx in Li-treated rats. All other parameters (TAS, SOD and MDA) were not affected by lithium and/or selenium. Further research seems to be warranted to decide whether application of selenium as an adjuvant in lithium therapy is worth considering. Copyright © 2014 Institute of Pharmacology, Polish Academy of Sciences. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
NASA Astrophysics Data System (ADS)
Mann, Kulwinder Singh; Heer, Manmohan Singh; Rani, Asha
2016-07-01
The gamma-ray shielding behaviour of a material can be investigated by determining its various interaction and energy-absorption parameters (such as mass attenuation coefficients, mass energy absorption coefficients, and the corresponding effective atomic numbers and electron densities). A literature review indicates that the effective atomic number (Zeff) has been used as an extensive parameter for evaluating the effects and defects caused in chosen materials by ionising radiations (X-rays and gamma-rays). A computer program (Zeff-toolkit) has been designed for obtaining the mean value of the effective atomic number calculated by three different methods. Good agreement between the results obtained with Zeff-toolkit, the Auto_Zeff software and experimentally measured values of Zeff has been observed. The Zeff-toolkit is capable of computing effective atomic numbers for both photon interaction (Zeff,PI) and energy absorption (Zeff,En), using three methods for each; no similar computer program available in the literature computes these parameters simultaneously. The computed parameters have been compared and correlated over a wide energy range (0.001-20 MeV) for 10 commonly used building materials. Prominent variations in these parameters with gamma-ray photon energy have been observed, owing to the dominance of various absorption and scattering phenomena. The mean values of the two effective atomic numbers (Zeff,PI and Zeff,En) are equivalent at energies below 0.002 MeV and above 0.3 MeV, indicating the dominance of gamma-ray absorption (photoelectric and pair production) over scattering (Compton) at these energies. Conversely, in the energy range 0.002-0.3 MeV, Compton scattering of gamma-rays dominates over absorption. Of the 10 chosen building materials, the 2 soil samples showed better shielding behaviour than the other 8 materials.
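Of the standard approaches to Zeff, the so-called direct method is straightforward to sketch at a single photon energy; elemental mass attenuation coefficients are assumed to come from tabulations such as XCOM (this is an illustration, not the Zeff-toolkit code):

    import numpy as np

    def zeff_direct(frac, Z, A, mu_rho):
        # Direct-method effective atomic number at one photon energy.
        # frac: molar fractions; Z, A: atomic numbers and masses;
        # mu_rho: elemental mass attenuation coefficients (cm^2/g).
        frac, Z, A, mu_rho = map(np.asarray, (frac, Z, A, mu_rho))
        sigma_a = frac * A * mu_rho          # atomic cross sections
        sigma_e = frac * (A / Z) * mu_rho    # electronic cross sections
        return sigma_a.sum() / sigma_e.sum()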
Impact of Vial Capping on Residual Seal Force and Container Closure Integrity.
Mathaes, Roman; Mahler, Hanns-Christian; Roggo, Yves; Ovadia, Robert; Lam, Philippe; Stauch, Oliver; Vogt, Martin; Roehl, Holger; Huwyler, Joerg; Mohl, Silke; Streubel, Alexander
2016-01-01
The vial capping process is a critical unit operation during drug product manufacturing, as it could possibly generate cosmetic defects or even affect container closure integrity. Yet there is significant variability in capping equipment and processes, and their relation to potential defects or container closure integrity has not been thoroughly studied. In this study we applied several methods (a residual seal force tester, a self-developed piezo force sensor measurement system, and computed tomography) to characterize different container closure system combinations that had been sealed using different capping process parameter settings. Additionally, container closure integrity of these samples was measured using helium leakage (physical container closure integrity) and compared to characterization data. The different capping equipment settings led to residual seal force values from 7 to 115 N. High residual seal force values were achieved with high capping pre-compression force and a short distance between the capping plate and plunger. The choice of container closure system influenced the obtained residual seal force values. The residual seal force tester and piezoelectric measurements showed similar trends. All vials passed physical container closure integrity testing, and no stopper rupture was seen with any of the settings applied, suggesting that container closure integrity was warranted for the studied container closure system with the chosen capping setting ranges. The residual seal force tester can analyze a variety of different container closure systems independent of the capping equipment. An adequate and safe residual seal force range for each container closure system configuration can be established with the residual seal force tester and additional methods like computed tomography scans and leak testing. In the residual seal force range studied, the physical container closure integrity of the container closure system was warranted. © PDA, Inc. 2016.
Analysis of the influence of handset phone position on RF exposure of brain tissue.
Ghanmi, Amal; Varsier, Nadège; Hadjem, Abdelhamid; Conil, Emmanuelle; Picon, Odile; Wiart, Joe
2014-12-01
Exposure to mobile phone radio frequency (RF) electromagnetic fields depends on many different parameters. For epidemiological studies investigating the risk of brain cancer linked to RF exposure from mobile phones, it is of great interest to characterize brain tissue exposure and to know which parameters this exposure is sensitive to. One such parameter is the position of the phone during communication. In this article, we analyze the influence of the phone position on brain exposure by comparing the specific absorption rate (SAR) induced in the head by two different mobile phone models operating in Global System for Mobile Communications (GSM) frequency bands. To achieve this objective, 80 different phone positions were chosen using an experimental design based on Latin hypercube sampling (LHS) to select a representative set of positions. The SAR averaged over 10 g (SAR10g) in the head, the SAR averaged over 1 g (SAR1g) in the brain, and the averaged SAR in different anatomical brain structures were estimated at 900 and 1800 MHz for the 80 positions. The results illustrate that SAR distributions inside the brain area are sensitive to the position of the mobile phone relative to the head. The results also show that for 5-10% of the studied positions the SAR10g in the head and the SAR1g in the brain can be 20% higher than the SAR estimated for the standard cheek position, and that the Specific Anthropomorphic Mannequin (SAM) model is conservative for 95% of all the studied positions. © 2014 Wiley Periodicals, Inc.
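For reference, a minimal Latin hypercube sampler on the unit hypercube; the study's 80 positions correspond to n = 80 draws over the position parameters, rescaled to their physical ranges (the authors' exact design variables are not listed here):

    import numpy as np

    def latin_hypercube(n, d, seed=None):
        # n samples in d dimensions on [0, 1): exactly one sample falls
        # in each of the n equal strata of every dimension.
        rng = np.random.default_rng(seed)
        pts = (np.arange(n) + rng.random((d, n))) / n  # jitter within strata
        for row in pts:
            rng.shuffle(row)                           # decouple dimensions
        return pts.T                                   # shape (n, d)

    # e.g. positions = latin_hypercube(80, d)  # d = number of position parameters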
The use of nomograms in LDR-HDR prostate brachytherapy.
Pujades, Ma Carmen; Camacho, Cristina; Perez-Calatayud, Jose; Richart, José; Gimeno, Jose; Lliso, Françoise; Carmona, Vicente; Ballester, Facundo; Crispín, Vicente; Rodríguez, Silvia; Tormo, Alejandro
2011-09-01
The common use of nomograms in Low Dose Rate (LDR) permanent prostate brachytherapy (BT) makes it possible to estimate the number of seeds required for an implant. Independent dosimetry verification is recommended for each clinical dosimetry in BT. Nomograms can also be useful for dose calculation quality assurance, and they could be adapted to High Dose Rate (HDR). This work establishes nomograms for LDR and HDR prostate-BT implants, applied to three different institutions that use different implant techniques. Patients treated from 2010 through April 2011 were considered for this study. This period was chosen to be representative of the latest implant techniques and to ensure consistency in the planning. A sufficient number of cases for both BT modalities, prescription doses and the different working methodologies (depending on the institution) were taken into account. The specific nomograms were built using the correlation between the prostate volume and some characteristic parameters of each BT modality, such as the source Air Kerma Strength, the number of implanted seeds in LDR or the total radiation time in HDR. For each institution and BT modality, nomograms normalized to the prescribed dose were obtained and fitted to a linear function. The fit parameters show good agreement between the data and the fitted function. It should be noted that these linear function parameters differ between institutions, indicating that each centre should construct its own nomograms. Nomograms for LDR and HDR prostate brachytherapy are simple quality assurance tools, specific to each institution. Nevertheless, their use should be complementary to the necessary independent verification.
Reduction of eddy current losses in inductive transmission systems with ferrite sheets.
Maaß, Matthias; Griessner, Andreas; Steixner, Viktor; Zierhofer, Clemens
2017-01-05
Improvements in eddy current suppression are necessary to meet the demand for increasing miniaturization of inductively driven transmission systems in industrial and biomedical applications. The high magnetic permeability and the simultaneously low electrical conductivity of ferrite materials make them ideal candidates for shielding metallic surfaces. For systems like cochlear implants the transmission of data as well as energy over an inductive link is conducted within a well-defined parameter set. For these systems, the shielding can be of particular importance if the properties of the link can be preserved. In this work, we investigate the effect of single and double-layered substrates consisting of ferrite and/or copper on the inductance and coupling of planar spiral coils. The examined link systems represent realistic configurations for active implantable systems such as cochlear implants. Experimental measurements are complemented with analytical calculations and finite element simulations, which are in good agreement for all measured parameters. The results are then used to study the transfer efficiency of an inductive link in a series-parallel resonant topology as a function of substrate size, the number of coil turns and coil separation. We find that ferrite sheets can be used to shield the system from unwanted metallic surfaces and to retain the inductive link parameters of the unperturbed system, particularly its transfer efficiency. The required size of the ferrite plates is comparable to the size of the coils, which makes the setup suitable for practical implementations. Since the sizes and geometries chosen for the studied inductive links are comparable to those of cochlear implants, our conclusions apply in particular to these systems.
21SSD: a public data base of simulated 21-cm signals from the epoch of reionization
NASA Astrophysics Data System (ADS)
Semelin, B.; Eames, E.; Bolgar, F.; Caillat, M.
2017-12-01
The 21-cm signal from the epoch of reionization (EoR) is expected to be detected in the next few years, either with existing instruments or by the upcoming SKA and HERA projects. In this context, there is a pressing need for publicly available high-quality templates covering a wide range of possible signals. These are needed both for end-to-end simulations of the upcoming instruments and to develop signal analysis methods. We present such a set of templates, publicly available for download at 21ssd.obspm.fr. The data base contains 21-cm brightness temperature lightcones at high and low resolution, and several derived statistical quantities for 45 models spanning our choice of 3D parameter space. These data are the result of fully coupled radiative hydrodynamic high-resolution (1024³) simulations performed with the LICORICE code. Both X-ray and Lyman line transfer are performed to account for heating and Wouthuysen-Field coupling fluctuations. We also present a first exploitation of the data using the power spectrum and the pixel distribution function (PDF) computed from lightcone data. We analyse how these two quantities behave when varying the model parameters while taking into account the thermal noise expected of a typical SKA survey. Finally, we show that the noiseless power spectrum and PDF have different - and somewhat complementary - abilities to distinguish between different models. This preliminary result will have to be expanded to the case including thermal noise. This type of result opens the door to formulating an optimal sampling of the parameter space, dependent on the chosen diagnostics.
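As an illustration of the first of these statistics, a sketch of a spherically averaged power spectrum from a (cubic) brightness-temperature box; normalization conventions vary between codes, and lightcone effects are ignored here:

    import numpy as np

    def power_spectrum(cube, box_len, nbins=20):
        # Spherically averaged P(k) of a cubic box of side box_len.
        n = cube.shape[0]
        dk = np.fft.fftn(cube) * (box_len / n) ** 3    # FT with volume norm
        pk3d = np.abs(dk) ** 2 / box_len ** 3          # P(k) = |d(k)|^2 / V
        k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_len / n)
        kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
        kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
        mask = kmag > 0                                # drop the k = 0 mode
        edges = np.linspace(kmag[mask].min(), kmag.max(), nbins + 1)
        idx = np.minimum(np.digitize(kmag[mask], edges), nbins)
        vals = pk3d.ravel()[mask]
        pk = np.array([vals[idx == i].mean() for i in range(1, nbins + 1)])
        return 0.5 * (edges[1:] + edges[:-1]), pk      # bin centres, P(k)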
On parametric Gevrey asymptotics for some nonlinear initial value Cauchy problems
NASA Astrophysics Data System (ADS)
Lastra, A.; Malek, S.
2015-11-01
We study a nonlinear initial value Cauchy problem depending upon a complex perturbation parameter ɛ with vanishing initial data at complex time t = 0 and whose coefficients depend analytically on (ɛ, t) near the origin in C2 and are bounded holomorphic on some horizontal strip in C w.r.t. the space variable. This problem is assumed to be non-Kowalevskian in time t, therefore analytic solutions at t = 0 cannot be expected in general. Nevertheless, we are able to construct a family of actual holomorphic solutions defined on a common bounded open sector with vertex at 0 in time and on the given strip above in space, when the complex parameter ɛ belongs to a suitably chosen set of open bounded sectors whose union form a covering of some neighborhood Ω of 0 in C*. These solutions are achieved by means of Laplace and Fourier inverse transforms of some common ɛ-depending function on C × R, analytic near the origin and with exponential growth on some unbounded sectors with appropriate bisecting directions in the first variable and exponential decay in the second, when the perturbation parameter belongs to Ω. Moreover, these solutions satisfy the remarkable property that the difference between any two of them is exponentially flat for some integer order w.r.t. ɛ. With the help of the classical Ramis-Sibuya theorem, we obtain the existence of a formal series (generally divergent) in ɛ which is the common Gevrey asymptotic expansion of the built up actual solutions considered above.
Priority setting and economic appraisal: whose priorities--the community or the economist?
Green, A; Barker, C
1988-01-01
Scarce resources for health require a process for setting priorities. The exact mechanism chosen has important implications for the type of priorities and plans set, and in particular their relationship to the principles of primary health care. One technique increasingly advocated as an aid to priority setting is economic appraisal. It is argued however that economic appraisal is likely to reinforce a selective primary health care approach through its espousal of a technocratic medical model and through its hidden but implicit value judgements. It is suggested that urgent attention is needed to develop approaches to priority setting that incorporate the strengths of economic appraisal, but that are consistent with comprehensive primary health care.
Metallodielectrics as Metamaterials
2010-01-01
found in nature. Associated effects include negative refraction, negative phase accumulation along a path, and superresolution. Superresolution, first... mid-wave IR regime and beyond with carefully chosen design parameters. The suggestion that metal films alone could demonstrate superresolution in the... our interest to achieve superresolution using MDs that would overcome the drawbacks of pure metal films, with opacity chief among them. Our calculations
ERIC Educational Resources Information Center
Zacharos, Konstantinos; Koustourakis, Gerassimos
2011-01-01
The reference contexts that accompany the "realistic" problems chosen for teaching mathematical concepts in the first school grades play a major educational role. However, choosing "realistic" problems in teaching is a complex process that must take into account various pedagogical, sociological and psychological parameters.…
A Vernacular for Linear Latent Growth Models
ERIC Educational Resources Information Center
Hancock, Gregory R.; Choi, Jaehwa
2006-01-01
In its most basic form, latent growth modeling (latent curve analysis) allows an assessment of individuals' change in a measured variable X over time. For simple linear models, as with other growth models, parameter estimates associated with the a construct (amount of X at a chosen temporal reference point) and b construct (growth in X per unit…
Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas
2014-01-01
Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
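A sketch of the basic calculation behind such a choice, assuming a planning value s for the standard deviation: the smallest n whose two-sided t-based interval has expected half-width at most w. The article's "power of the confidence interval" goes further, treating the realized width as random, which this sketch does not model:

    import numpy as np
    from scipy import stats

    def n_for_halfwidth(s, w, conf=0.95):
        # Smallest n with t-quantile * s / sqrt(n) <= w.
        alpha = 1.0 - conf
        n = 2
        while stats.t.ppf(1 - alpha / 2, df=n - 1) * s / np.sqrt(n) > w:
            n += 1
        return n

    # Example: n_for_halfwidth(s=10.0, w=2.0) for a 95% interval.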
Pierce, M L; Ruffner, D E
1998-01-01
Antisense-mediated gene inhibition uses short complementary DNA or RNA oligonucleotides to block expression of any mRNA of interest. A key parameter in the success or failure of an antisense therapy is the identification of a suitable target site on the chosen mRNA. Ultimately, the accessibility of the target to the antisense agent determines target suitability. Since accessibility is a function of many complex factors, it is currently beyond our ability to predict. Consequently, identification of the most effective target(s) requires examination of every site. Towards this goal, we describe a method to construct directed ribozyme libraries against any chosen mRNA. The library contains nearly equal amounts of ribozymes targeting every site on the chosen transcript and the library only contains ribozymes capable of binding to that transcript. Expression of the ribozyme library in cultured cells should allow identification of optimal target sites under natural conditions, subject to the complexities of a fully functional cell. Optimal target sites identified in this manner should be the most effective sites for therapeutic intervention. PMID:9801305
Electromagnetic frozen waves with radial, azimuthal, linear, circular, and elliptical polarizations
NASA Astrophysics Data System (ADS)
Corato-Zanarella, Mateus; Zamboni-Rached, Michel
2016-11-01
Frozen waves (FWs) are a class of diffraction- and attenuation-resistant beams whose intensity pattern along the direction of propagation can be chosen arbitrarily, thus making them relevant for engineering the spatial configuration of optical fields. To date, analyses of such beams have been done essentially for the scalar case, with the vectorial nature of the electromagnetic fields often neglected. Although it is expected that the field components keep the fundamental properties of the scalar FWs, a deeper understanding of their electromagnetic counterparts is mandatory in order to exploit their different possible polarization states. The purpose of this paper is to study the properties of electromagnetic FWs with radial, azimuthal, linear, circular, and elliptical polarizations under paraxial and nonparaxial regimes in nonabsorbing media. An intensity pattern is chosen for a scalar FW, and the vectorial solutions are built from it via Maxwell's equations. The results show that the field components and the longitudinal component of the time-averaged Poynting vector closely follow the chosen pattern even under highly nonparaxial conditions, showing the robustness of the FW structure to parameter variations.
Assessment of environments for Mars Science Laboratory entry, descent, and surface operations
Vasavada, Ashwin R.; Chen, Allen; Barnes, Jeffrey R.; Burkhart, P. Daniel; Cantor, Bruce A.; Dwyer-Cianciolo, Alicia M.; Fergason, Robini L.; Hinson, David P.; Justh, Hilary L.; Kass, David M.; Lewis, Stephen R.; Mischna, Michael A.; Murphy, James R.; Rafkin, Scot C.R.; Tyler, Daniel; Withers, Paul G.
2012-01-01
The Mars Science Laboratory mission aims to land a car-sized rover on Mars' surface and operate it for at least one Mars year in order to assess whether its field area was ever capable of supporting microbial life. Here we describe the approach used to identify, characterize, and assess environmental risks to the landing and rover surface operations. Novel entry, descent, and landing approaches will be used to accurately deliver the 900-kg rover, including the ability to sense and "fly out" deviations from a best-estimate atmospheric state. A joint engineering and science team developed methods to estimate the range of potential atmospheric states at the time of arrival and to quantitatively assess the spacecraft's performance and risk given its particular sensitivities to atmospheric conditions. Numerical models are used to calculate the atmospheric parameters, with observations used to define model cases, tune model parameters, and validate results. This joint program has resulted in a spacecraft capable of accessing, with minimal risk, the four finalist sites chosen for their scientific merit. The capability to operate the landed rover over the latitude range of candidate landing sites, and for all seasons, was verified against an analysis of surface environmental conditions described here. These results, from orbital and model data sets, also drive engineering simulations of the rover's thermal state that are used to plan surface operations.
NASA Astrophysics Data System (ADS)
Schmid, T.; López-Martínez, J.; Guillaso, S.; Serrano, E.; D'Hondt, O.; Koch, M.; Nieto, A.; O'Neill, T.; Mink, S.; Durán, J. J.; Maestro, A.
2017-09-01
Satellite-borne Synthetic Aperture Radar (SAR) has been used for characterizing and mapping two relevant ice-free areas in the South Shetland Islands. The objective has been to identify and characterize land surface covers, mainly periglacial and glacial landforms, using fully polarimetric C-band RADARSAT-2 SAR data, on Fildes Peninsula, which forms part of King George Island, and on Ardley Island. Polarimetric parameters obtained from the SAR data, a selection of field-based training and validation sites, and a supervised classification approach using the support vector machine were chosen to determine the spatial distribution of the different landforms. Eight periglacial and glacial landforms were characterized according to their scattering mechanisms using a set of 48 polarimetric parameters. The mapping of the most representative surface covers included colluvial deposits, stone fields and pavements, patterned ground, glacial till, rock outcrops, lakes and glacier ice. The overall accuracy of the results was estimated at 81%, a significant value when mapping areas within isolated regions where access is limited. Periglacial surface covers such as stone fields and pavements occupy 25%, and patterned ground over 20%, of the ice-free areas. These results form the basis for extensive monitoring of the ice-free areas throughout the northern Antarctic Peninsula region.
NASA Astrophysics Data System (ADS)
Petrov, Dimitar; Cockmartin, Lesley; Marshall, Nicholas; Vancoillie, Liesbeth; Young, Kenneth; Bosmans, Hilde
2017-03-01
Digital breast tomosynthesis (DBT) is a relatively new 3D mammography technique that promises better detection of low-contrast masses than conventional 2D mammography. The parameter space for DBT is large, however, and finding an optimal balance between dose and image quality remains challenging. Given the large number of conditions and images required in optimization studies, the use of human observers (HO) is time consuming and certainly not feasible for the tuning of all degrees of freedom. Our goal was to develop a model observer (MO) that could predict human detectability for clinically relevant details embedded within a newly developed structured phantom for DBT applications. DBT series were acquired on GE SenoClaire 3D, Giotto Class, Fujifilm AMULET Innovality and Philips MicroDose systems at different dose levels; Siemens Inspiration DBT acquisitions were reconstructed with different algorithms, while a larger set of DBT series was acquired on a Hologic Dimensions system for initial reproducibility testing. A channelized Hotelling observer (CHO) with Gabor channels was developed. The parameters of the Gabor channels were tuned on all systems at standard scanning conditions, and the candidate that produced the best fit for all systems was chosen. After tuning, the MO was applied to all systems and conditions. Linear regression lines between MO and HO scores were calculated, giving correlation coefficients between 0.87 and 0.99 for all tested conditions.
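For concreteness, the detectability index of a channelized Hotelling observer reduces to a Hotelling computation in channel space; a sketch assuming flattened image patches and a given bank of (e.g. Gabor) channel templates, which are the tuned quantity in the study:

    import numpy as np

    def cho_dprime(signal_imgs, absent_imgs, channels):
        # signal_imgs, absent_imgs: (n_images, n_pixels) flattened ROIs;
        # channels: (n_pixels, n_channels) templates, e.g. a Gabor bank.
        vs = signal_imgs @ channels            # channel outputs, class 1
        vn = absent_imgs @ channels            # channel outputs, class 2
        dmu = vs.mean(axis=0) - vn.mean(axis=0)
        S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
        w = np.linalg.solve(S, dmu)            # Hotelling template
        return np.sqrt(dmu @ w)                # detectability index d'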
Earlier parental set bedtimes as a protective factor against depression and suicidal ideation.
Gangwisch, James E; Babiss, Lindsay A; Malaspina, Dolores; Turner, J Blake; Zammit, Gary K; Posner, Kelly
2010-01-01
To examine the relationships between parental set bedtimes, sleep duration, and depression as a quasi-experiment to explore the potentially bidirectional relationship between short sleep duration and depression. Short sleep duration has been shown to precede depression, but this could be explained as a prodromal symptom of depression. Depression in an adolescent can affect his/her chosen bedtime, but it is less likely to affect a parent's chosen set bedtime which can establish a relatively stable upper limit that can directly affect sleep duration. Multivariate cross-sectional analyses of the ADD Health using logistic regression. United States nationally representative, school-based, probability-based sample in 1994-96. Adolescents (n = 15,659) in grades 7 to 12. Adolescents with parental set bedtimes of midnight or later were 24% more likely to suffer from depression (OR = 1.24, 95% CI 1.04-1.49) and 20% more likely to have suicidal ideation (1.20, 1.01-1.41) than adolescents with parental set bedtimes of 10:00 PM or earlier, after controlling for covariates. Consistent with sleep duration and perception of getting enough sleep acting as mediators, the inclusion of these variables in the multivariate models appreciably attenuated the associations for depression (1.07, 0.88-1.30) and suicidal ideation (1.09, 0.92-1.29). The results from this study provide new evidence to strengthen the argument that short sleep duration could play a role in the etiology of depression. Earlier parental set bedtimes could therefore be protective against adolescent depression and suicidal ideation by lengthening sleep duration.
Development of acute tolerance to the EEG effect of propofol in rats.
Ihmsen, H; Schywalsky, M; Tzabazis, A; Schwilden, H
2005-09-01
A previous study in rats with propofol suggested the development of acute tolerance to the EEG effect. The aim of this study was to evaluate acute tolerance by means of EEG-controlled closed-loop anaesthesia as this approach allows precise determination of drug requirement to maintain a defined drug effect. Ten male Sprague-Dawley rats [weight 402 (40) g, mean (SD)] were included in the study. The EEG was recorded with occipito-occipital needle electrodes and a modified median frequency (mMEF) of the EEG power spectrum was used as a pharmacodynamic control parameter. The propofol infusion rate was controlled by a model-based adaptive algorithm to maintain a set point of mMEF=3 (0.5) Hz for 90 min. The performance of the closed-loop system was characterized by the prediction error PE=(mMEF-set point)/set point. Plasma propofol concentrations were determined from arterial samples by HPLC. The chosen set point was successfully maintained in all rats. The median (SE) and absolute median values of PE were -5.0 (0.3) and 11.3 (0.2)% respectively. Propofol concentration increased significantly from 2.9 (2.2) microg ml(-1) at the beginning to 5.8 (3.8) microg ml(-1) at 90 min [mean (SD), P<0.05]. The cumulative dose increased linearly, with a mean infusion rate of 0.60 (0.16) mg kg(-1) min(-1). The minimum value of the mean arterial pressure during closed-loop administration of propofol was 130 (24) mm Hg, compared with a baseline value of 141 (12) mm Hg. The increase in propofol concentration at constant EEG effect indicates development of acute tolerance to the hypnotic effect of propofol.
Reference-free ground truth metric for metal artifact evaluation in CT images.
Kratz, Bärbel; Ens, Svitlana; Müller, Jan; Buzug, Thorsten M
2011-07-01
In computed tomography (CT), metal objects in the region of interest introduce data inconsistencies during acquisition. Reconstructing these data results in an image with star-shaped artifacts induced by the metal inconsistencies. To enhance image quality, the influence of the metal objects can be reduced by different metal artifact reduction (MAR) strategies. For an adequate evaluation of new MAR approaches a ground truth reference data set is needed. In technical evaluations, where phantoms can be measured with and without metal inserts, ground truth data can easily be obtained by a second reference acquisition. Obviously, this is not possible for clinical data. Here, an alternative evaluation method is presented without the need for an additionally acquired reference data set. The proposed metric is based on an inherent ground truth for metal artifacts as well as MAR methods comparison, where no reference information in terms of a second acquisition is needed. The method is based on the forward projection of a reconstructed image, which is compared to the actually measured projection data. The new evaluation technique is performed on phantom and on clinical CT data with and without MAR. The metric results are then compared with methods using a reference data set as well as with an expert-based classification. It is shown that the new approach is an adequate quantification technique for artifact strength in reconstructed metal or MAR CT images. The presented method works solely on the original projection data itself, which yields some advantages compared to distance measures in the image domain using two data sets. Besides this, no parameters have to be chosen manually. The new metric is a useful evaluation alternative when no reference data are available.
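The idea is simple to sketch in a 2-D parallel-beam setting, where skimage's radon transform can stand in for the forward projector (the array shapes must match the measured sinogram, and the optional metal-trace mask is a hypothetical refinement, not part of the paper's metric):

    import numpy as np
    from skimage.transform import radon

    def projection_rmse(recon, sinogram, theta, mask=None):
        # Forward-project the reconstruction and compare with the
        # measured sinogram; theta holds projection angles in degrees.
        fp = radon(recon, theta=theta)
        diff = fp - sinogram
        if mask is not None:                 # optionally skip metal trace
            diff = diff[~mask]
        return np.sqrt(np.mean(diff ** 2))   # RMSE in projection domain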
Groundwater Remediation using Bayesian Information-Gap Decision Theory
NASA Astrophysics Data System (ADS)
O'Malley, D.; Vesselinov, V. V.
2016-12-01
Probabilistic analyses of groundwater remediation scenarios frequently fail because the probability of an adverse, unanticipated event occurring is often high. In general, models of flow and transport in contaminated aquifers are simpler than reality. Further, when a probabilistic analysis is performed, probability distributions are usually chosen more for convenience than correctness. The Bayesian Information-Gap Decision Theory (BIGDT) was designed to mitigate the shortcomings of the models and probabilistic decision analyses by leveraging a non-probabilistic decision theory - information-gap decision theory. BIGDT considers possible models that have not been explicitly enumerated and does not require us to commit to a particular probability distribution for model and remediation-design parameters. Both the set of possible models and the set of possible probability distributions grow as the degree of uncertainty increases. The fundamental question that BIGDT asks is "How large can these sets be before a particular decision results in an undesirable outcome?". The decision that allows these sets to be the largest is considered to be the best option. In this way, BIGDT enables robust decision support for groundwater remediation problems. Here we apply BIGDT to a representative groundwater remediation scenario where different options for hydraulic containment and pump & treat are being considered. BIGDT requires many model runs, and for complex models high-performance computing resources are needed. These analyses are carried out on synthetic problems but are applicable to real-world problems such as LANL site contaminations. BIGDT is implemented in Julia (a high-level, high-performance dynamic programming language for technical computing) and is part of the MADS framework (http://mads.lanl.gov/ and https://github.com/madsjulia/Mads.jl).
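Schematically (this is not the MADS implementation, and the worst-case evaluator is left abstract), the robustness of a remediation decision can be computed as the largest uncertainty horizon that keeps the worst-case outcome acceptable:

    def robustness(cost, q_nominal, horizons, worst_case, c_max):
        # cost: model-based cost of the decision for parameters q.
        # worst_case(cost, q_nominal, h): max of cost over the size-h
        # uncertainty set around q_nominal (problem-specific).
        # Returns the largest tested h whose worst case stays <= c_max.
        h_hat = 0.0
        for h in horizons:                   # horizons sorted increasingly
            if worst_case(cost, q_nominal, h) > c_max:
                break
            h_hat = h
        return h_hat

Comparing h_hat across candidate designs implements the rule that the decision tolerating the most uncertainty is preferred.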
Park, Hyunjin; Yang, Jin-ju; Seo, Jongbum; Choi, Yu-yong; Lee, Kun-ho; Lee, Jong-min
2014-04-01
Cortical features derived from magnetic resonance imaging (MRI) provide important information to account for human intelligence. Cortical thickness, surface area, sulcal depth, and mean curvature were considered to explain human intelligence. One region of interest (ROI) of a cortical structure consists of thousands of vertices and thus contains thousands of measurements, yet typically one mean value (first-order moment) was used to represent a chosen ROI, which led to a potentially significant loss of information. We proposed a technological improvement in which a second moment (variance) in addition to the mean value was adopted to represent a chosen ROI, so that the loss of information would be less severe. The two computed moments for the chosen ROIs were analyzed with partial least squares regression (PLSR). Cortical features for 78 adults were measured and analyzed in conjunction with the full-scale intelligence quotient (FSIQ). Our results showed that 45% of the variance of the FSIQ could be explained using the combination of four cortical features with two moments per chosen ROI. This is an improvement over using one mean value per ROI, which explained 37% of the variance of FSIQ using the same set of cortical measurements. Our results suggest that using additional second-order moments is potentially better than using only mean values of chosen ROIs in regression analyses that account for human intelligence. Copyright © 2014 Elsevier Ltd. All rights reserved.
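A sketch of the two-moment feature construction followed by PLSR with scikit-learn; the ROI partition, the number of PLS components and the validation protocol are assumptions of the illustration, not taken from the paper:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def two_moment_features(roi_vertex_values):
        # roi_vertex_values: list of (n_subjects, n_vertices) arrays,
        # one per ROI; each ROI contributes its mean and variance.
        cols = [np.column_stack([v.mean(axis=1), v.var(axis=1, ddof=1)])
                for v in roi_vertex_values]
        return np.hstack(cols)               # (n_subjects, 2 * n_rois)

    # X = two_moment_features(roi_vertex_values)
    # pls = PLSRegression(n_components=5).fit(X, fsiq)
    # r2 = pls.score(X, fsiq)   # share of FSIQ variance explained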
Speaker verification system using acoustic data and non-acoustic data
Gable, Todd J [Walnut Creek, CA; Ng, Lawrence C [Danville, CA; Holzrichter, John F [Berkeley, CA; Burnett, Greg C [Livermore, CA
2006-03-21
A method and system for speech characterization. One embodiment includes a method for speaker verification which includes collecting data from a speaker, wherein the data comprises acoustic data and non-acoustic data. The data is used to generate a template that includes a first set of "template" parameters. The method further includes receiving a real-time identity claim from a claimant, and using acoustic data and non-acoustic data from the identity claim to generate a second set of parameters. The method further includes comparing the first set of parameters to the second set of parameters to determine whether the claimant is the speaker. The first set of parameters and the second set of parameters include at least one purely non-acoustic parameter, including a non-acoustic glottal shape parameter derived from averaging multiple glottal cycle waveforms.
User-Driven Quality Certification of Workplace Software, the UsersAward Experience
2004-06-01
the set of criteria and the chosen level of approval was sufficiently balanced. Furthermore, the fact that both software providers experienced...
2007-11-01
Engineering Research Laboratory is currently developing a set of facility ‘architectural’ programming tools, called Facility Composer™ (FC). FC... requirements in the early phases of project development. As the facility program, criteria, and requirements are chosen, these tools populate the IFC... developing a set of facility “architectural” programming tools, called Facility Composer (FC), to support the capture and tracking of facility criteria
Data Mining of Extremely Large Ad-Hoc Data Sets to Produce Reverse Web-Link Graphs
2017-03-01
in most of the MR cases. From these studies, we also learned that computing-optimized instances should be chosen for serialized/compressed input data... Data mining can be a valuable tool, particularly in the acquisition of military intelligence. As the second study within a larger Naval... open web crawler data set Common Crawl. Similar to previous studies, this research employs MapReduce (MR) for sorting and categorizing output value
Halaas, Gwen Wagstrom; Zink, Therese; Finstad, Deborah; Bolin, Keli; Center, Bruce
2008-01-01
Founded in 1971 with state funding to increase the number of primary care physicians in rural Minnesota, the Rural Physician Associate Program (RPAP) has graduated 1,175 students. Third-year medical students are assigned to primary care physicians in rural communities for 9 months where they experience the realities of rural practice with hands-on participation, mentoring, and one-to-one teaching. Students complete an online curriculum, participate in online discussion with fellow students, and meet face-to-face with RPAP faculty 6 times during the 9-month rotation. Projects designed to bring value to the community, including an evidence-based practice and community health assessment, are completed. To examine RPAP outcomes in recruiting and retaining rural primary care physicians. The RPAP database, including moves and current practice settings, was examined using descriptive statistics. On average, 82% of RPAP graduates have chosen primary care, and 68% family medicine. Of those currently in practice, 44% have practiced in a rural setting all of the time, 42% in a metropolitan setting and 14% have chosen both, with more than 50% of their time in rural practice. Rural origin has only a small association with choosing rural practice. RPAP data suggest that the 9-month longitudinal experience in a rural community increases the number of students choosing primary care practice, especially family medicine, in a rural setting.
Relativistic elliptic matrix tops and finite Fourier transformations
NASA Astrophysics Data System (ADS)
Zotov, A.
2017-10-01
We consider a family of classical elliptic integrable systems including (relativistic) tops and their matrix extensions of different types. These models can be obtained from the “off-shell” Lax pairs, which do not satisfy the Lax equations in the general case but become true Lax pairs under various conditions (reductions). At the level of the off-shell Lax matrix, there is a natural symmetry between the spectral parameter z and the relativistic parameter η. It is generated by the finite Fourier transformation, which we describe in detail. The symmetry allows one to consider z and η on an equal footing. Depending on the type of integrable reduction, either of the parameters can be chosen to be the spectral one; the other is then the relativistic deformation parameter. As a by-product, we describe a model of N² interacting GL(M) matrix tops and/or M² interacting GL(N) matrix tops, depending on the choice of the spectral parameter.
NASA Astrophysics Data System (ADS)
Shamarokov, A. S.; Zorin, V. M.; Dai, Fam Kuang
2016-03-01
At the current stage of development of nuclear power engineering, high demands are made on nuclear power plants (NPP), including on their economy. Under these conditions, improving the quality of an NPP means, in particular, reasonably choosing the values of the numerous controlled parameters of its technological (heat) scheme. Furthermore, the chosen values should correspond to the economic conditions of NPP operation, which usually lie a considerable time interval after the moment when the parameters are chosen. The article presents a technique for optimizing the controlled parameters of the heat circuit of a steam turbine plant for the future. Its particularity is that the results are obtained as a function of a complex parameter combining the external economic and operating parameters, which remains relatively stable under a changing economic environment. The article presents the results of optimization, according to this technique, of the minimum temperature driving forces in the surface heaters of the heat regeneration system of the steam turbine plant of the K-1200-6.8/50 type. For optimization, the collector-screen heaters of high and low pressure developed at the OAO All-Russia Research and Design Institute of Nuclear Power Machine Building, which, in the authors' opinion, have certain advantages over other types of heaters, were chosen. The optimality criterion in the task was the change in annual reduced costs for the NPP compared with the version accepted as the baseline. The influence on the solution of independent variables not included in the complex parameter was analyzed. The optimization problem was solved using the alternating-variable descent method. The obtained values of the minimum temperature driving forces can guide the design of new nuclear plants with a heat circuit similar to that considered in the task.
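The alternating-variable descent mentioned above is, in essence, a coordinate search; a generic sketch follows (the article's objective, the change in annual reduced costs, is not reproduced here):

    import numpy as np

    def coordinate_descent(f, x0, step=0.1, tol=1e-6, max_sweeps=200):
        # Minimise f by probing one variable at a time with steps +/- d,
        # halving the step when no single-variable move improves f.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_sweeps):
            improved = False
            for i in range(x.size):
                for d in (step, -step):
                    trial = x.copy()
                    trial[i] += d
                    if f(trial) < f(x):
                        x, improved = trial, True
            if not improved:
                step *= 0.5
                if step < tol:
                    break
        return x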
Choosing Sensor Configuration for a Flexible Structure Using Full Control Synthesis
NASA Technical Reports Server (NTRS)
Lind, Rick; Nalbantoglu, Volkan; Balas, Gary
1997-01-01
Optimal locations and types for feedback sensors which meet design constraints and control requirements are difficult to determine. This paper introduces an approach to choosing a sensor configuration based on Full Control synthesis. A globally optimal Full Control compensator is computed for each member of a set of sensor configurations which are feasible for the plant. The sensor configuration associated with the Full Control system achieving the best closed-loop performance is chosen for feedback measurements to an output feedback controller. A flexible structure is used as an example to demonstrate this procedure. Experimental results show sensor configurations chosen to optimize the Full Control performance are effective for output feedback controllers.
A model for helicopter guidance on spiral trajectories
NASA Technical Reports Server (NTRS)
Mendenhall, S.; Slater, G. L.
1980-01-01
A point mass model is developed for helicopter guidance on spiral trajectories. A fully coupled set of state equations is developed, and perturbation equations suitable for 3-D and 4-D guidance are derived and shown to be amenable to conventional state variable feedback methods. The control variables are chosen to be the magnitude and orientation of the net rotor thrust. Using these variables, reference controls for nonlevel accelerating trajectories are easily determined. The effects of constant wind are shown to require significant feedforward correction to some of the reference controls and to the time. Although not easily measured themselves, the control variables chosen are shown to be easily related to the physical variables available in the cockpit.
A review of reaction rates in high temperature air
NASA Technical Reports Server (NTRS)
Park, Chul
1989-01-01
The existing experimental data on the rate coefficients for the chemical reactions in nonequilibrium high temperature air are reviewed and collated, and a selected set of such values is recommended for use in hypersonic flow calculations. For the reactions of neutral species, the recommended values are chosen from the experimental data that existed mostly prior to 1970, and are slightly different from those used previously. For the reactions involving ions, the recommended rate coefficients are newly chosen from the experimental data obtained more recently. The reacting environment is assumed to lack thermal equilibrium, and the rate coefficients are expressed as a function of the controlling temperature, incorporating the recent multitemperature reaction concept.
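For orientation, nonequilibrium rate coefficients of this kind are typically evaluated in Arrhenius form at a controlling temperature; in Park's two-temperature model for dissociation this is a geometric average of the translational and vibrational temperatures. A sketch, with the constants A, n and theta_d to be taken from the recommended tables:

    import numpy as np

    def rate_coefficient(T, Tv, A, n, theta_d, q=0.5):
        # Arrhenius form k = A * Tc^n * exp(-theta_d / Tc) evaluated at
        # the controlling temperature Tc = T^q * Tv^(1-q); q = 0.5 gives
        # the geometric mean sqrt(T * Tv) of Park's model.
        Tc = T ** q * Tv ** (1.0 - q)
        return A * Tc ** n * np.exp(-theta_d / Tc)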
Optimal correction and design parameter search by modern methods of rigorous global optimization
NASA Astrophysics Data System (ADS)
Makino, K.; Berz, M.
2011-07-01
Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibits multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e., the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been a common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle optics for the computation of aberrations allow the determination of particularly sharp underestimators for large regions. As a consequence, the subsequent progressive pruning of the allowed search space as the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of the optimization. The methods are particularly powerful if executed in interplay with genetic optimizers generating their new populations within the currently active unpruned space. Their current best guess provides rigorous upper bounds of the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes or chromaticities, etc., in storage ring lattices.
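The pruning step at the heart of such a branch-and-bound scheme is easy to sketch. The toy Python example below minimizes a fixed one-dimensional polynomial using naive interval arithmetic; it is only an illustration of the pruning logic under invented settings, not the sharp Differential Algebraic underestimators described above.

```python
# Minimal 1-D interval branch-and-bound sketch of the pruning idea described
# above (illustrative only): subboxes whose rigorous lower bound exceeds the
# best known upper bound on the minimum are discarded.
import heapq

def f(x):
    return x**4 - 4*x**2 + x

def poly_interval(lo, hi):
    """Crude interval enclosure of f(x) = x**4 - 4*x**2 + x on [lo, hi]."""
    def pow_iv(a, b, n):
        cands = [a**n, b**n]
        if n % 2 == 0 and a <= 0.0 <= b:
            cands.append(0.0)       # even powers attain 0 inside the interval
        return min(cands), max(cands)
    x4lo, x4hi = pow_iv(lo, hi, 4)
    x2lo, x2hi = pow_iv(lo, hi, 2)
    return x4lo - 4*x2hi + lo, x4hi - 4*x2lo + hi

def branch_and_bound(lo, hi, tol=1e-5):
    best_ub = min(f(lo), f(hi), f(0.5*(lo + hi)))   # rigorous upper bound
    heap = [(poly_interval(lo, hi)[0], lo, hi)]
    surviving = []
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_ub:            # prune: box cannot contain the global minimum
            continue
        m = 0.5*(a + b)
        best_ub = min(best_ub, f(m))
        if b - a < tol:
            surviving.append((a, b))
            continue
        for c, d in ((a, m), (m, b)):
            sub_lb = poly_interval(c, d)[0]
            if sub_lb <= best_ub:
                heapq.heappush(heap, (sub_lb, c, d))
    return best_ub, surviving

ub, boxes = branch_and_bound(-3.0, 3.0)
print("upper bound on global minimum:", ub)
print("first surviving boxes:", boxes[:4])
```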
Emergent Universe with Particle Production
NASA Astrophysics Data System (ADS)
Gangopadhyay, Sunandan; Saha, Anirban; Mukherjee, S.
2016-10-01
The possibility of an emergent universe solution to Einstein's field equations allowing for an irreversible creation of matter at the expense of the gravitational field is shown. With the universe being chosen as spatially flat FRW spacetime together with equation of state proposed in Mukherjee et al. (Class. Quant. Grav. 23, 6927, 2006), the solution exists when the ratio of the phenomenological matter creation rate to the number density times the Hubble parameter is a number β of the order of unity and independent of time. The thermodynamic behaviour is also determined for this solution. Interestingly, we also find that an emergent universe scenario is present with usual equation of state in cosmology when the matter creation rate is chosen to be a constant. More general class of emergent universe solutions are also discussed.
L-hop percolation on networks with arbitrary degree distributions and its applications
NASA Astrophysics Data System (ADS)
Shang, Yilun; Luo, Weiliang; Xu, Shouhuai
2011-09-01
Site percolation has been used to help understand analytically the robustness of complex networks in the presence of random node deletion (or failure). In this paper we move a further step beyond random node deletion by considering that a node can be deleted because it is chosen or because it is within some L-hop distance of a chosen node. Using the generating functions approach, we present analytic results on the percolation threshold as well as the mean size, and size distribution, of nongiant components of complex networks under such operations. The introduction of parameter L is both conceptually interesting because it accommodates a sort of nonindependent node deletion, which is often difficult to tackle analytically, and practically interesting because it offers useful insights for cybersecurity (such as botnet defense).
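A brute-force simulation of the L-hop deletion operation (an illustration under assumed network and parameter choices, not the authors' generating-function analysis) can make the effect concrete:

```python
# Illustrative simulation of L-hop node deletion: each targeted node is
# removed together with everything within L hops of it, and we track the
# size of the largest surviving component.
import random
import networkx as nx

def l_hop_deletion(G, n_targets, L, seed=0):
    rng = random.Random(seed)
    H = G.copy()
    targets = rng.sample(list(G.nodes()), n_targets)
    for t in targets:
        if t not in H:              # target already swallowed by an earlier ball
            continue
        ball = nx.single_source_shortest_path_length(H, t, cutoff=L)
        H.remove_nodes_from(ball)   # delete the chosen node and its L-hop ball
    return H

G = nx.barabasi_albert_graph(5000, 3, seed=1)
for L in (0, 1, 2):
    H = l_hop_deletion(G, n_targets=50, L=L)
    giant = max((len(c) for c in nx.connected_components(H)), default=0)
    print(f"L={L}: giant component fraction {giant / G.number_of_nodes():.3f}")
```

For L = 0 this reduces to ordinary targeted node deletion; increasing L removes whole neighborhoods and erodes the giant component much faster.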
Neutron counter based on beryllium activation
NASA Astrophysics Data System (ADS)
Bienkowska, B.; Prokopowicz, R.; Scholz, M.; Kaczmarczyk, J.; Igielski, A.; Karpinski, L.; Paducha, M.; Pytel, K.
2014-08-01
The fusion reaction occurring in DD plasma is followed by emission of 2.45 MeV neutrons, which carry information about the fusion reaction rate as well as plasma parameters and properties. Neutron activation of beryllium has been chosen for detection of DD fusion neutrons. The cross-section for the reaction 9Be(n, α)6He has a useful threshold near 1 MeV, which means that undesirable multiply-scattered neutrons do not undergo that reaction and therefore are not recorded. The product of the reaction, 6He, decays with half-life T1/2 = 0.807 s, emitting β- particles which are easy to detect. A large-area gas-sealed proportional detector has been chosen as the counter of β-particles leaving the activated beryllium plate. The plate, with optimized dimensions, adjoins the proportional counter entrance window. The set-up is also equipped with appropriate electronic components and forms the beryllium neutron activation counter. The neutron flux density on the beryllium plate can be determined from the number of counts. A proper calibration procedure therefore needs to be performed to establish this relation. Measurements with a known β-source have been made. In order to determine the detector response function, these experiments have been modeled by means of MCNP5, the Monte Carlo transport code. This allowed proper application of the results of transport calculations of β- particles emitted from radioactive 6He and reaching the proportional detector active volume. In order to test the counter system and measuring procedure, a number of experiments have been performed on PF devices. The experimental conditions have been simulated by means of MCNP5. The correctness of the simulation outcome has been verified by measurements with a known radioactive neutron source. The results of the DD fusion neutron measurements have been compared with other neutron diagnostics.
NASA Technical Reports Server (NTRS)
Niederhaus, Charles E.; Miller, Fletcher J.
2008-01-01
The missions envisioned under the Vision for Space Exploration will require development of new methods to handle crew medical care. Medications and intravenous (IV) fluids have been identified as one area needing development. Storing certain medications and solutions as powders or concentrates can both increase the shelf life and reduce the overall mass and volume of medical supplies. The powders or concentrates would then be mixed in an IV bag with Sterile Water for Injection produced in situ from the potable water supply. Fluid handling in microgravity differs from that in terrestrial settings, and requires special consideration in the design of equipment. This document describes the analyses and down-select activities used to identify the IV mixing method to be developed that is suitable for ISS and exploration missions. The chosen method is compatible with both normal gravity and microgravity, maintains sterility of the solution, and has low mass and power requirements. The method will undergo further development, including reduced gravity aircraft experiments and computations, in order to fully develop the mixing method and associated operational parameters.
Epidemic spreading through direct and indirect interactions.
Ganguly, Niloy; Krueger, Tyll; Mukherjee, Animesh; Saha, Sudipta
2014-09-01
In this paper we study the susceptible-infected-susceptible epidemic dynamics, considering a specialized setting where popular places (termed passive entities) are visited by agents (termed active entities). We consider two types of spreading dynamics: direct spreading, where the active entities infect each other while visiting the passive entities, and indirect spreading, where the passive entities act as carriers and the infection is spread via them. We investigate in particular the effect of selection strategy, i.e., the way passive entities are chosen, in the spread of epidemics. We introduce a mathematical framework to study the effect of an arbitrary selection strategy and derive formulas for prevalence, extinction probabilities, and epidemic thresholds for both indirect and direct spreading. We also obtain a very simple relationship between the extinction probability and the prevalence. We pay special attention to preferential selection and derive exact formulas. The analysis reveals that an increase in the diversity in the selection process lowers the epidemic thresholds. Comparing the direct and indirect spreading, we identify regions in the parameter space where the prevalence of the indirect spreading is higher than the direct one.
Virtual hybrid test control of sinuous crack
NASA Astrophysics Data System (ADS)
Jailin, Clément; Carpiuc, Andreea; Kazymyrenko, Kyrylo; Poncelet, Martin; Leclerc, Hugo; Hild, François; Roux, Stéphane
2017-05-01
The present study aims at proposing a new generation of experimental protocols for analysing crack propagation in quasi-brittle materials. The boundary conditions are controlled in real time to conform to a predefined crack path. Servo-control is achieved through a full-field measurement technique to determine the pre-set fracture path and a simple predictor model based on linear elastic fracture mechanics to prescribe the boundary conditions on the fly, so that the actual crack path follows the predefined trajectory as closely as possible. The final goal is to identify, for instance, non-local damage models involving internal lengths. The validation of this novel procedure is performed via a virtual test case based on an enriched damage model with an internal length scale, an a priori chosen sinusoidal crack path, and a concrete sample. Notwithstanding the fact that the predictor model selected for monitoring the test is a highly simplified picture of the targeted constitutive law, the proposed protocol exhibits a much improved sensitivity to the sought parameters, such as internal lengths, as assessed from the comparison with other available experimental tests.
"First-principles" kinetic Monte Carlo simulations revisited: CO oxidation over RuO2 (110).
Hess, Franziska; Farkas, Attila; Seitsonen, Ari P; Over, Herbert
2012-03-15
First-principles-based kinetic Monte Carlo (kMC) simulations are performed for the CO oxidation on RuO2 (110) under steady-state reaction conditions. The simulations include a set of elementary reaction steps with activation energies taken from three different ab initio density functional theory studies. Critical comparison of the simulation results reveals that already small variations in the activation energies lead to distinctly different reaction scenarios on the surface, even to the point where the dominating elementary reaction step is substituted by another one. For a critical assessment of the chosen energy parameters, it is not sufficient to compare kMC simulations only to the experimental turnover frequency (TOF) as a function of the reactant feed ratio. More appropriate benchmarks for kMC simulations are the actual distribution of reactants on the catalyst's surface during steady-state reaction, as determined by in situ infrared spectroscopy and in situ scanning tunneling microscopy, and the temperature dependence of the TOF in the form of Arrhenius plots. Copyright © 2012 Wiley Periodicals, Inc.
Errors in reporting on dissolution research: methodological and statistical implications.
Jasińska-Stroschein, Magdalena; Kurczewska, Urszula; Orszulak-Michalak, Daria
2017-02-01
In vitro dissolution testing provides useful information at the clinical and preclinical stages of the drug development process. The study includes pharmaceutical papers on dissolution research published in Polish journals between 2010 and 2015. They were analyzed with regard to the information provided by authors about the chosen methods, the validation performed, statistical reporting, and the assumptions used to properly compare release profiles, in light of the current guideline documents on dissolution methodology and its validation. Of all the papers included in the study, 23.86% presented at least one set of validation parameters, 63.64% reported the results of the weight uniformity test, 55.68% reported content determination, 97.73% reported dissolution testing conditions, and 50% discussed a comparison of release profiles. The assumptions for methods used to compare dissolution profiles were discussed in 6.82% of papers. By means of example analyses, we demonstrate that the outcome can be influenced by the violation of several assumptions or the selection of an improper method to compare dissolution profiles. A clearer description of the procedures would undoubtedly increase the quality of papers in this area.
NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.
2009-02-28
The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher resolution, largely deterministic, analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code. A few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes the development of the PORFLOW models supporting the SDF PA, and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.
Intra-jet shocks in two counter-streaming, weakly collisional plasma jets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryutov, D. D.; Kugland, N. L.; Park, H.-S.
2012-07-15
Counterstreaming laser-generated plasma jets can serve as a test-bed for the studies of a variety of astrophysical phenomena, including collisionless shock waves. In the latter problem, the jet's parameters have to be chosen in such a way as to make the collisions between the particles of one jet with the particles of the other jet very rare. This can be achieved by making the jet velocities high and the Coulomb cross-sections correspondingly low. On the other hand, the intra-jet collisions for high-Mach-number jets can still be very frequent, as they are determined by the much lower thermal velocities of the particles of each jet. This paper describes some peculiar properties of intra-jet hydrodynamics in such a setting: the steepening of smooth perturbations and shock formation affected by the presence of the 'stiff' opposite flow; the role of a rapid electron heating in shock formation; ion heating by the intra-jet shock. The latter effect can cause rapid ion heating which is consistent with recent counterstreaming jet experiments by Ross et al. [Phys. Plasmas 19, 056501 (2012)].
Slow and fast solar wind - data selection and statistical analysis
NASA Astrophysics Data System (ADS)
Wawrzaszek, Anna; Macek, Wiesław M.; Bruno, Roberto; Echim, Marius
2014-05-01
In this work we consider the important problem of selecting slow and fast solar wind data measured in situ by the Ulysses spacecraft during two solar minima (1995-1997, 2007-2008) and a solar maximum (1999-2001). To recognise different types of solar wind we use the following set of parameters: radial velocity, proton density, proton temperature, the distribution of charge states of oxygen ions, and compressibility of the magnetic field. We show how this data selection scheme works on Ulysses data. In the next step we consider the chosen intervals of fast and slow solar wind and perform statistical analysis of the fluctuating magnetic field components. In particular, we check the possibility of identifying the inertial range by considering the scale dependence of the third- and fourth-order scaling exponents of the structure functions. We examine how the size of the inertial range depends on heliographic latitude, heliocentric distance, and the phase of the solar cycle. Research supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 313038/STORM.
NASA Astrophysics Data System (ADS)
Zhang, Xiayang; Zhu, Ming; Zhao, Meijuan; Wu, Zhe
2018-05-01
Based on a typical wing-rotor thrust model on the airship, the dynamic influence of the gyroscopic effects from the tip rotor acting on the overall coupled system has been analyzed. Meanwhile, the flexibility at the capsule boundary has been studied as well. Hamilton's principle is employed to derive the general governing equations, and the numerical Rayleigh-Ritz method is finally chosen for the actual frequency computations. A new set of shape functions is put forward and verified, which takes most of the couplings among dimensions into account. Parameter studies are also conducted to provide deeper insight. The results demonstrate that the inherent frequencies are significantly affected by the rotor speed and the flexible capsule condition. When the rotor revolves, the modal shapes become complex and the components of each mode change with increasing rotor speed. The flexibility also greatly reduces the overall frequencies compared with the rigid case. It is also demonstrated that the inherent properties are significantly affected by the mounting geometry, rotor inertia, structural stiffness, and rotor speed.
Monaci, Linda; Brohée, Marcel; Tregoat, Virginie; van Hengel, Arjon
2011-07-15
Milk allergens are common allergens occurring in foods, therefore raising concern in allergic consumers. Enzyme-linked immunosorbent assay (ELISA) is, to date, the method of choice for the detection of food allergens by the food industry although, the performance of ELISA might be compromised when severe food processing techniques are applied to allergen-containing foods. In this paper we investigated the influence of baking time on the detection of milk allergens by using commercial ELISA kits. Baked cookies were chosen as a model food system and experiments were set up to study the impact of spiking a matrix food either before, or after the baking process. Results revealed clear analytical differences between both spiking methods, which stress the importance of choosing appropriate spiking methodologies for method validation purposes. Finally, since the narrow dynamic range of quantification of ELISA implies that dilution of samples is required, the impact of sample dilution on the quantitative results was investigated. All parameters investigated were shown to impact milk allergen detection by means of ELISA. Copyright © 2011 Elsevier Ltd. All rights reserved.
Active Learning for Directed Exploration of Complex Systems
NASA Technical Reports Server (NTRS)
Burl, Michael C.; Wang, Esther
2009-01-01
Physics-based simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. Such codes provide the highest-fidelity representation of system behavior, but are often so slow to run that insight into the system is limited. For example, conducting an exhaustive sweep over a d-dimensional input parameter space with k steps along each dimension requires k^d simulation trials (translating into k^d CPU-days for one of our current simulations). An alternative is directed exploration in which the next simulation trials are cleverly chosen at each step. Given the results of previous trials, supervised learning techniques (SVM, KDE, GP) are applied to build up simplified predictive models of system behavior. These models are then used within an active learning framework to identify the most valuable trials to run next. Several active learning strategies are examined including a recently-proposed information-theoretic approach. Performance is evaluated on a set of thirteen synthetic oracles, which serve as surrogates for the more expensive simulations and enable the experiments to be replicated by other researchers.
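A minimal directed-exploration loop in this spirit can be sketched with an off-the-shelf Gaussian-process surrogate and uncertainty sampling; the oracle below is a hypothetical stand-in for an expensive simulation, and the selection rule is one simple choice rather than the information-theoretic strategy examined in the paper.

```python
# Sketch of directed exploration: fit a surrogate to the trials run so far
# and pick the next trial where the surrogate is least certain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def oracle(x):
    """Cheap stand-in for an expensive physics simulation."""
    return np.sin(3.0 * x) + 0.5 * np.sign(x)

pool = np.linspace(-2, 2, 401).reshape(-1, 1)            # candidate input settings
X = pool[rng.choice(len(pool), size=3, replace=False)]   # initial trials
y = oracle(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
for step in range(20):
    gp.fit(X, y)
    mean, std = gp.predict(pool, return_std=True)
    nxt = pool[np.argmax(std)]       # "most valuable" trial: highest predictive variance
    X = np.vstack([X, nxt])
    y = np.append(y, oracle(nxt).item())

gp.fit(X, y)
print("trials run:", len(X), " max residual on pool:",
      np.abs(gp.predict(pool) - oracle(pool).ravel()).max())
```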
Isola, A A; Schmitt, H; van Stevendaal, U; Begemann, P G; Coulon, P; Boussel, L; Grass, M
2011-09-21
Large area detector computed tomography systems with fast rotating gantries enable volumetric dynamic cardiac perfusion studies. Prospectively ECG-triggered acquisitions limit the data acquisition to a predefined cardiac phase and thereby reduce x-ray dose and limit motion artefacts. Even in the case of highly accurate prospective triggering and a stable heart rate, spatial misalignment of the cardiac volumes acquired and reconstructed per cardiac cycle may occur due to small motion pattern variations from cycle to cycle. These misalignments reduce the accuracy of the quantitative analysis of myocardial perfusion parameters on a per-voxel basis. An image-based solution to this problem is elastic 3D image registration of dynamic volume sequences with variable contrast, as introduced in this contribution. After circular cone-beam CT reconstruction of cardiac volumes covering large areas of the myocardial tissue, the complete series is aligned with respect to a chosen reference volume. The results of the registration process and the perfusion analysis with and without registration are evaluated quantitatively in this paper. The spatial alignment leads to improved quantification of myocardial perfusion for three different pig data sets.
Kasesaz, Y; Khalafi, H; Rahmani, F
2013-12-01
Optimization of the Beam Shaping Assembly (BSA) has been performed using the MCNP4C Monte Carlo code to shape the 2.45 MeV neutrons that are produced in the D-D neutron generator. The optimal design of the BSA has been chosen by considering in-air figures of merit (FOM); it consists of 70 cm Fluental as a moderator, 30 cm Pb as a reflector, 2 mm 6Li as a thermal neutron filter and 2 mm Pb as a gamma filter. The neutron beam can be evaluated by in-phantom parameters, from which the therapeutic gain can be derived. Direct evaluation of both sets of FOMs (in-air and in-phantom) is very time consuming. In this paper a Response Matrix (RM) method is suggested to reduce the computing time. This method is based on considering the neutron spectrum at the beam exit and calculating the contribution of the various dose components in the phantom to compute the Response Matrix. Results show good agreement between direct calculation and the RM method. Copyright © 2013 Elsevier Ltd. All rights reserved.
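In its simplest reading, the RM idea reduces dose evaluation to a matrix-vector product: once the contribution of each exit-spectrum energy bin to each in-phantom dose component is tabulated, new spectra can be evaluated without rerunning the transport code. A schematic numpy illustration with invented numbers:

```python
# Schematic response-matrix evaluation (all numbers invented): rows are
# in-phantom dose components, columns are energy bins of the neutron spectrum
# at the beam exit. Dose components follow from a matrix-vector product
# instead of a new Monte Carlo transport run.
import numpy as np

R = np.array([[0.2, 0.5, 1.0, 0.6, 0.1],    # thermal-neutron dose per unit fluence
              [0.0, 0.1, 0.4, 0.9, 1.3],    # fast-neutron dose per unit fluence
              [0.3, 0.3, 0.2, 0.2, 0.2]])   # gamma dose per unit fluence
spectrum = np.array([0.05, 0.20, 0.40, 0.25, 0.10])   # exit spectrum (normalized)

doses = R @ spectrum
for name, d in zip(("thermal", "fast", "gamma"), doses):
    print(f"{name:7s} dose component: {d:.3f} (arbitrary units)")
```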
NASA Astrophysics Data System (ADS)
Lanz, Thierry; Hubeny, Ivan
2003-07-01
We have constructed a comprehensive grid of 680 metal line-blanketed, non-LTE, plane-parallel, hydrostatic model atmospheres for the basic parameters appropriate to O-type stars. The OSTAR2002 grid considers 12 values of effective temperature, 27,500 K ≤ Teff ≤ 55,000 K with 2500 K steps, eight surface gravities, 3.0 ≤ log g ≤ 4.75 with 0.25 dex steps, and 10 chemical compositions, from metal-rich relative to the Sun to metal-free. The lower limit of log g for a given effective temperature is set by an approximate location of the Eddington limit. The selected chemical compositions have been chosen to cover a number of typical environments of massive stars: the Galactic center, the Magellanic Clouds, blue compact dwarf galaxies like I Zw 18, and galaxies at high redshifts. The paper contains a description of the OSTAR2002 grid and some illustrative examples and comparisons. The complete OSTAR2002 grid is available at our Web site (ApJS, 146, 417 [2003]). Laboratory for Astronomy and Solar Physics, NASA Goddard Space Flight Center, Code 681, Greenbelt, MD 20771.
Sun, Jiangkun; Wu, Yulie; Xi, Xiang; Zhang, Yongmeng; Wu, Xuezhong
2017-01-01
The cylindrical resonator gyroscope (CRG) is a typical Coriolis vibratory gyroscope whose performance is mostly influenced by the damping characteristic of the cylindrical resonator. However, the considerable damping influence caused by pasting piezoelectric electrodes on the gyroscope, which degrades the performance to a large extent, has rarely been studied. In this paper, a dynamical model is established to analyze the various forms of energy consumption. In addition, an FE COMSOL model is also created to discuss the damping influences of several significant parameters of the adhesive layer and piezoelectric electrodes, respectively, and explicit influence laws are obtained. Simulation results demonstrate that the adhesive layer has some impact on the damping characteristic, but it is not significant. The Q factor decreases by about 30.31% in total as a result of pasting piezoelectric electrodes. Furthermore, it is found that piezoelectric electrodes with short length, locations away from the outside edges, proper width, and well-chosen thickness are able to reduce the damping influences to a large extent. Afterwards, experiments testing the Q factor were set up to validate the simulation values.
Frame error rate for single-hop and dual-hop transmissions in 802.15.4 LoWPANs
NASA Astrophysics Data System (ADS)
Biswas, Sankalita; Ghosh, Biswajit; Chandra, Aniruddha; Dhar Roy, Sanjay
2017-08-01
IEEE 802.15.4 is a popular standard for personal area networks used in different low-rate short-range applications. This paper examines the error rate performance of 802.15.4 in a fading wireless channel. An analytical model is formulated for evaluating the frame error rate (FER); first, for direct single-hop transmission between two sensor nodes, and second, for dual-hop (DH) transmission using an in-between relay node. During modeling, the transceiver design parameters are chosen according to the specifications set for both the 2.45 GHz and 868/915 MHz bands. We have also developed a simulation test bed for evaluating FER. Some results showed expected trends, such as FER being higher for larger payloads. Other observations are not that intuitive. It is interesting to note that the error rates are significantly higher in the DH case, demanding a signal-to-noise ratio (SNR) penalty of about 7 dB. Also, the FER shoots from zero to one within a very small range of SNR.
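The qualitative trends reported here (FER rising with payload size and falling with SNR) can be reproduced with a generic Monte Carlo sketch. The model below assumes BPSK over flat Rayleigh fading with one gain per frame, which is a simplification and not the 802.15.4 O-QPSK/DSSS transceiver chain analyzed in the paper.

```python
# Generic frame-error-rate sketch: BPSK over flat Rayleigh fading with one
# fading gain per frame; a frame fails if any of its bits is in error.
import numpy as np
from scipy.special import erfc

def fer(snr_db, bits_per_frame, n_frames=200_000, seed=0):
    """Average frame error rate over random Rayleigh fading gains."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    h2 = rng.exponential(1.0, size=n_frames)      # |h|^2 power gains, one per frame
    p_bit = 0.5 * erfc(np.sqrt(h2 * snr))         # conditional BPSK bit error prob.
    return np.mean(1.0 - (1.0 - p_bit) ** bits_per_frame)

for snr_db in (5, 10, 15, 20):
    print(f"SNR {snr_db:2d} dB: "
          f"FER(20 B payload) = {fer(snr_db, 20 * 8):.3f}, "
          f"FER(100 B payload) = {fer(snr_db, 100 * 8):.3f}")
```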
Design and performance of the KSC Biomass Production Chamber
NASA Technical Reports Server (NTRS)
Prince, Ralph P.; Knott, William M.; Sager, John C.; Hilding, Suzanne E.
1987-01-01
NASA's Controlled Ecological Life Support System program has instituted the Kennedy Space Center 'breadboard' project of which the Biomass Production Chamber (BPC) presently discussed is a part. The BPC is based on a modified hypobaric test vessel; its design parameters and operational parameters have been chosen in order to meet a wide range of plant-growing objectives aboard future spacecraft on long-duration missions. A control and data acquisition subsystem is used to maintain a common link between the heating, ventilation, and air conditioning system, the illumination system, the gas-circulation system, and the nutrient delivery and monitoring subsystems.
An acoustic sensitivity study of general aviation propellers
NASA Technical Reports Server (NTRS)
Korkan, K. D.; Gregorek, G. M.; Keiter, I.
1980-01-01
This paper describes the results of a study in which a systematic approach has been taken in studying the effect of selected propeller parameters on the character and magnitude of propeller noise. Four general aviation aircraft were chosen, i.e., a Cessna 172, Cessna 210, Cessna 441, and a 19 passenger commuter concept, to provide a range in flight velocity, engine horsepower, and gross weight. The propeller parameters selected for examination consisted of number of blades, rpm reduction, thickness/chord reduction, activity factor reduction, proplets, airfoil improvement, sweep, position of maximum blade loading, and diameter reduction.
TEM study of the FSW nugget in AA2195-T81
NASA Technical Reports Server (NTRS)
Schneider, J. A.; Nunes, A. C., Jr.; Chen, P. S.; Steele, G.
2004-01-01
During friction stir welding (FSW) the material being joined is subjected to a thermal-mechanical process in which the temperature, strain, and strain rates are not completely understood. To produce a defect-free weld, process parameters for the weld and tool pin design must be chosen carefully. The ability to select the weld parameters based on the thermal processing requirements of the material would allow optimization of mechanical properties in the weld region. In this study, an attempt is made to correlate the microstructure with the variation in thermal history the material experiences during the FSW process.
Abd-Elrasheed, Eman; Nageeb El-Helaly, Sara; El-Ashmoony, Manal M; Salah, Salwa
2018-05-01
Intranasal zaleplon solid dispersion was formulated to enhance the solubility and bioavailability and deliver an effective therapy. Zaleplon belongs to Class II drugs, and undergoes extensive first-pass metabolism after oral absorption, exhibiting 30% bioavailability. A 2^3 full-factorial design was chosen for the investigation of solid dispersion formulations. The effects of different variables, including drug-to-carrier ratio (1:1 and 1:2), carrier type (polyethylene glycol 4000 and poloxamer 407), and preparation method (solvent evaporation and freeze drying), on different dissolution parameters were studied. The dependent variables determined from the in vitro characterization and their constraints were set as follows: minimum mean dissolution time, maximum dissolution efficiency, and maximum percentage release. Numerical optimization was performed according to the constraints set, based on the utilization of desirability functions. Differential scanning calorimetry, infrared spectroscopy, X-ray diffraction, and scanning electron microscopy were performed. Ex vivo estimation of nasal cytotoxicity and assessment of the γ-aminobutyric acid (GABA) level in plasma and brain 1 h after nasal SD administration in rabbits, compared with the oral market product, were conducted. The selected ZP-SD, with a desirability of 0.9, composed of poloxamer 407 at a drug-to-carrier ratio of 1:2, successfully enhanced the bioavailability, showing a 44% increase in GABA concentration over the marketed tablets.
Accurate abundance determinations in S stars
NASA Astrophysics Data System (ADS)
Neyskens, P.; Van Eck, S.; Plez, B.; Goriely, S.; Siess, L.; Jorissen, A.
2011-12-01
S-type stars are thought to be the first objects, during their evolution on the asymptotic giant branch (AGB), to experience s-process nucleosynthesis and third dredge-ups, and therefore to exhibit s-process signatures in their atmospheres. To date, the modeling of these processes is subject to large uncertainties. Precise abundance determinations in S stars are of extreme importance for constraining, e.g., the depth and the formation of the 13C pocket. In this paper a large grid of MARCS model atmospheres for S stars is used to derive precise abundances of key s-process elements and iron. A first estimate of the atmospheric parameters is obtained using a set of well-chosen photometric and spectroscopic indices to select the best model atmosphere for each S star. Abundances are derived from spectral line synthesis, using the selected model atmosphere. Special interest is paid to technetium, an element without stable isotopes. Its detection in stars is considered the best possible signature that the star effectively populates the thermally-pulsing AGB (TP-AGB) phase of evolution. The derived Tc/Zr abundances are compared, as a function of the derived [Zr/Fe] overabundances, with AGB stellar model predictions. The computed [Zr/Fe] overabundances are in good agreement with the AGB stellar evolution model predictions, while the Tc/Zr abundances are slightly over-predicted. This discrepancy can help to set stronger constraints on nucleosynthesis and mixing mechanisms in AGB stars.
50 CFR 216.155 - Requirements for monitoring and reporting.
Code of Federal Regulations, 2010 CFR
2010-10-01
... place 3 autonomous digital video cameras overlooking chosen haul-out sites located varying distances from the missile launch site. Each video camera will be set to record a focal subgroup within the... presence and activity will be conducted and recorded in a field logbook or recorded on digital video for...
ERIC Educational Resources Information Center
Jones, Trevelyn; Toth, Luann; Charnizon, Marlene; Grabarek, Daryl; Fleishhacker, Joy
2010-01-01
This article presents the 62 books chosen by "School Library Journal's" editors as the best of the year. While novels include some historical settings and contemporary concerns, it is fantasy that continues to reign supreme. More original, and more creative than ever, it includes selections that are frightening, edgy, wildly funny, electrifying,…
Finite Topological Spaces as a Pedagogical Tool
ERIC Educational Resources Information Center
Helmstutler, Randall D.; Higginbottom, Ryan S.
2012-01-01
We propose the use of finite topological spaces as examples in a point-set topology class especially suited to help students transition into abstract mathematics. We describe how carefully chosen examples involving finite spaces may be used to reinforce concepts, highlight pathologies, and develop students' non-Euclidean intuition. We end with a…
Testing Different Model Building Procedures Using Multiple Regression.
ERIC Educational Resources Information Center
Thayer, Jerome D.
The stepwise regression method of selecting predictors for computer assisted multiple regression analysis was compared with forward, backward, and best subsets regression, using 16 data sets. The results indicated the stepwise method was preferred because of its practical nature, when the models chosen by different selection methods were similar…
An Expert Vision System for Autonomous Land Vehicle Road Following.
1988-01-01
TR-138, Center for Automation Research, University of Maryland, July 1985. [Minsky] Minsky, Marvin, "A Framework for Representing Knowledge", in...relationships, frames have been chosen to model objects [Minsky]. A frame is a data structure containing a set of slots (or attributes) which encapsulate
Indicators are commonly used for evaluating relative sustainability for competing products and processes. When a set of indicators is chosen for a particular system of study, it is important to ensure that they are variable independently of each other. Often the number of indicat...
Improving English Speaking Fluency: The Role of Six Factors
ERIC Educational Resources Information Center
Shahini, Gholamhossein; Shahamirian, Fatemeh
2017-01-01
This qualitative study, using an open interview, set out to investigate the roles six factors, including age, university education, teachers of English Language institutes, teaching English, dictionary, and note-taking, played in improving English speaking fluency of seventeen fluent Iranian EFL speakers. The participants were chosen purposefully…
Endophenotypes for Intelligence in Children and Adolescents
ERIC Educational Resources Information Center
van Leeuwen, Marieke; van den Berg, Stephanie M.; Hoekstra, Rosa A.; Boomsma, Dorret I.
2007-01-01
The aim of this study was to identify promising endophenotypes for intelligence in children and adolescents for future genetic studies in cognitive development. Based on the available set of endophenotypes for intelligence in adults, cognitive tasks were chosen covering the domains of working memory, processing speed, and selective attention. This…
Fort, J C
1988-01-01
We present an application of the Kohonen algorithm to the traveling salesman problem: using only this algorithm, without an energy function or any parameter chosen "ad hoc", we found good suboptimal tours. We give a neural model version of this algorithm, closer to classical neural networks. This is illustrated with various numerical examples.
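A minimal elastic-ring variant of the Kohonen approach to the TSP might look as follows. All parameter values here are illustrative assumptions, not taken from Fort's paper (whose point is precisely to avoid ad hoc parameters).

```python
# Elastic-ring Kohonen sketch for the TSP: ring neurons are dragged toward
# randomly presented cities while a shrinking neighborhood keeps the ring
# smooth; the visiting order is read off the converged ring at the end.
import numpy as np

rng = np.random.default_rng(0)
cities = rng.random((30, 2))
n_units = 8 * len(cities)
ring = np.tile(cities.mean(axis=0), (n_units, 1)) \
       + 0.1 * rng.standard_normal((n_units, 2))

sigma, lr = n_units / 8.0, 0.8
for it in range(20000):
    c = cities[rng.integers(len(cities))]
    winner = np.argmin(((ring - c) ** 2).sum(axis=1))
    d = np.abs(np.arange(n_units) - winner)
    d = np.minimum(d, n_units - d)                 # circular distance on the ring
    g = np.exp(-(d ** 2) / (2.0 * sigma ** 2))     # neighborhood function
    ring += lr * g[:, None] * (c - ring)
    sigma *= 0.9997                                # slowly shrink the neighborhood
    lr *= 0.9999                                   # slowly decay the learning rate

order = np.argsort([np.argmin(((ring - c) ** 2).sum(axis=1)) for c in cities])
tour = cities[order]
length = np.sum(np.linalg.norm(np.roll(tour, -1, axis=0) - tour, axis=1))
print("tour length:", round(length, 3))
```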
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Ganguly, R.
With the current technological growth in the field of device fabrication, white power-LEDs are available for solid state lighting applications. This is a paradigm shift from electrical lighting to electronic lighting. The implemented systems are showing some promise by saving a considerable amount of energy as well as providing a good and acceptable illumination level. However, the `useful life' of such devices is an important parameter. If the proper device is not chosen, the desired reliability and performance will not be obtained. In the present work, different parameters associated with the reliability of such LEDs are studied. Four different varieties of LEDs are tested for `useful life' per the IESNA LM 79 standard. From the results obtained, the proper LED is chosen for further application. Subsequently, lighting design is done for a hospital waiting room (indoor application) with 24 × 7 lighting requirements, as a replacement for the existing CFLs there. The calculations show that although the initial cost is higher for LED-based lighting, the savings on energy and lamp replacement result in a payback time of less than a year.
NASA Astrophysics Data System (ADS)
Jones, A. G.; Afonso, J. C.
2015-12-01
The Earth comprises a single physio-chemical system that we interrogate from its surface and/or from space making observations related to various physical and chemical parameters. A change in one of those parameters affects many of the others; for example a change in velocity is almost always indicative of a concomitant change in density, which results in changes to elevation, gravity and geoid observations. Similarly, a change in oxide chemistry affects almost all physical parameters to a greater or lesser extent. We have now developed sophisticated tools to model/invert data in our individual disciplines to such an extent that we are obtaining high resolution, robust models from our datasets. However, in the vast majority of cases the different datasets are modelled/inverted independently of each other, and often even without considering other data in a qualitative sense. The LitMod framework of Afonso and colleagues presents integrated inversion of geoscientific data to yield thermo-chemical models that are petrologically consistent and constrained. Input data can comprise any combination of elevation, geoid, surface heat flow, seismic surface wave (Rayleigh and Love) data and receiver function data, and MT data. The basis of LitMod is characterization of the upper mantle in terms of five oxides in the CFMAS system and a thermal structure that is conductive to the LAB and convective along the adiabat below the LAB to the 410 km discontinuity. Candidate solutions are chosen from prior distributions of the oxides. For the crust, candidate solutions are chosen from distributions of crustal layering, velocity and density parameters. Those candidate solutions that fit the data within prescribed error limits are kept, and are used to establish broad posterior distributions from which new candidate solutions are chosen. Examples will be shown of application of this approach fitting data from the Kaapvaal Craton in South Africa and the Rae Craton in northern Canada. I will show that the MT data are the most discriminatory, requiring many millions of candidate solutions to be tested in order to sufficiently establish posterior distributions. In particular, the MT data require layered lithosphere, whereas the other data can be fit with a single lithosphere, and the MT data are particularly sensitive to the depth to the LAB.
Avazmohammadi, Reza; Li, David S; Leahy, Thomas; Shih, Elizabeth; Soares, João S; Gorman, Joseph H; Gorman, Robert C; Sacks, Michael S
2018-02-01
Knowledge of the complete three-dimensional (3D) mechanical behavior of soft tissues is essential in understanding their pathophysiology and in developing novel therapies. Despite significant progress made in experimentation and modeling, a complete approach for the full characterization of soft tissue 3D behavior remains elusive. A major challenge is the complex architecture of soft tissues, such as myocardium, which endows them with strongly anisotropic and heterogeneous mechanical properties. Available experimental approaches for quantifying the 3D mechanical behavior of myocardium are limited to preselected planar biaxial and 3D cuboidal shear tests. These approaches fall short in pursuing a model-driven approach that operates over the full kinematic space. To address these limitations, we took the following approach. First, based on a kinematical analysis and using a given strain energy density function (SEDF), we obtained an optimal set of displacement paths based on the full 3D deformation gradient tensor. We then applied this optimal set to obtain novel experimental data from a 1-cm cube of post-infarcted left ventricular myocardium. Next, we developed an inverse finite element (FE) simulation of the experimental configuration embedded in a parameter optimization scheme for estimation of the SEDF parameters. Notable features of this approach include: (i) enhanced determinability and predictive capability of the estimated parameters following an optimal design of experiments, (ii) accurate simulation of the experimental setup and transmural variation of local fiber directions in the FE environment, and (iii) application of all displacement paths to a single specimen to minimize testing time so that tissue viability could be maintained. Our results indicated that, in contrast to the common approach of conducting preselected tests and choosing an SEDF a posteriori, the optimal design of experiments, integrated with a chosen SEDF and full 3D kinematics, leads to a more robust characterization of the mechanical behavior of myocardium and higher predictive capabilities of the SEDF. The methodology proposed and demonstrated herein will ultimately provide a means to reliably predict tissue-level behaviors, thus facilitating organ-level simulations for efficient diagnosis and evaluation of potential treatments. While applied to myocardium, such developments are also applicable to characterization of other types of soft tissues.
NASA Astrophysics Data System (ADS)
Eskandari, M. A.; Mazraeshahi, H. K.; Ramesh, D.; Montazer, E.; Salami, E.; Romli, F. I.
2017-12-01
In this paper, a new method for the determination of optimum parameters of the open-cycle liquid-propellant engine of launch vehicles is introduced. The parameters affecting the objective function, which is the ratio of specific impulse to gross mass of the launch vehicle, are chosen to achieve maximum specific impulse as well as minimum mass for the structure of the engine, tanks, etc. The proposed algorithm uses constant integration of thrust with respect to time for a launch vehicle with specified diameter and length to calculate the optimum working condition. The results of this novel algorithm are compared to those obtained using the Genetic Algorithm method, and they are also validated against the results of an existing launch vehicle.
Front and pulse solutions for the complex Ginzburg-Landau equation with higher-order terms.
Tian, Huiping; Li, Zhonghao; Tian, Jinping; Zhou, Guosheng
2002-12-01
We investigate the one-dimensional complex Ginzburg-Landau equation with higher-order terms and discuss their influence on the multiplicity of solutions. An exact analytic front solution is presented. By a stability analysis of the original partial differential equation, we derive its necessary stability condition for amplitude perturbations. This condition, together with the exact front solution, determines the region of parameter space where the uniformly translating front solution can exist. In addition, stable pulses, chaotic pulses, and attenuating pulses generally appear if the parameters are out of this range. Finally, applying these analyses numerically to the optical transmission system, we find that stable transmission of optical pulses can be achieved if the parameters are appropriately chosen.
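The abstract does not list which higher-order terms are retained, so for orientation only, a commonly studied cubic-quintic complex Ginzburg-Landau equation with higher-order corrections reads:

```latex
% Illustrative higher-order complex Ginzburg-Landau equation; the paper's
% exact higher-order terms are not specified in the abstract.
\[
\frac{\partial A}{\partial z}
 = \delta A
 + (\beta + i\alpha)\,\frac{\partial^2 A}{\partial t^2}
 + (\epsilon + i)\,|A|^2 A
 + (\mu + i\nu)\,|A|^4 A
 + i\gamma_3\,\frac{\partial^3 A}{\partial t^3}
 + i s\,\frac{\partial\!\left(|A|^2 A\right)}{\partial t}
 + \tau_R\, A\,\frac{\partial |A|^2}{\partial t},
\]
where $\delta$, $\beta$, $\alpha$, $\epsilon$, $\mu$, $\nu$ set the gain,
spectral filtering, dispersion and cubic-quintic nonlinearity, and
$\gamma_3$, $s$, $\tau_R$ are the higher-order (third-order dispersion,
self-steepening, Raman-like) coefficients.
```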
Exploring the joint measurability using an information-theoretic approach
NASA Astrophysics Data System (ADS)
Hsu, Li-Yi
2016-12-01
We explore the legal purity parameters for the joint measurements. Instead of directly unsharpening the measurements, we perform quantum cloning before the sharp measurements. The necessary fuzziness in the unsharp measurements is equivalently introduced in the imperfect cloning process. Based on information causality and the consequent noisy nonlocal computation, one can derive information-theoretic quadratic inequalities that must be satisfied by any physical theory. On the other hand, to guarantee classicality, the linear Bell-type inequalities deduced from these quadratic ones must be obeyed. As for joint measurability, the purity parameters must be chosen to obey both types of inequalities. Finally, the quadratic inequalities for purity parameters in the joint measurability region are derived.
Ritchie, Andrew M; Lo, Nathan; Ho, Simon Y W
2017-05-01
In Bayesian phylogenetic analyses of genetic data, prior probability distributions need to be specified for the model parameters, including the tree. When Bayesian methods are used for molecular dating, available tree priors include those designed for species-level data, such as the pure-birth and birth-death priors, and coalescent-based priors designed for population-level data. However, molecular dating methods are frequently applied to data sets that include multiple individuals across multiple species. Such data sets violate the assumptions of both the speciation and coalescent-based tree priors, making it unclear which should be chosen and whether this choice can affect the estimation of node times. To investigate this problem, we used a simulation approach to produce data sets with different proportions of within- and between-species sampling under the multispecies coalescent model. These data sets were then analyzed under pure-birth, birth-death, constant-size coalescent, and skyline coalescent tree priors. We also explored the ability of Bayesian model testing to select the best-performing priors. We confirmed the applicability of our results to empirical data sets from cetaceans, phocids, and coregonid whitefish. Estimates of node times were generally robust to the choice of tree prior, but some combinations of tree priors and sampling schemes led to large differences in the age estimates. In particular, the pure-birth tree prior frequently led to inaccurate estimates for data sets containing a mixture of inter- and intraspecific sampling, whereas the birth-death and skyline coalescent priors produced stable results across all scenarios. Model testing provided an adequate means of rejecting inappropriate tree priors. Our results suggest that tree priors do not strongly affect Bayesian molecular dating results in most cases, even when severely misspecified. However, the choice of tree prior can be significant for the accuracy of dating results in the case of data sets with mixed inter- and intraspecies sampling. [Bayesian phylogenetic methods; model testing; molecular dating; node time; tree prior.]. © The authors 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For permissions, please e-mail: journals.permission@oup.com.
Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus
2010-04-15
With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.
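The dual trick described above is straightforward to sketch: with linear kernels, the regularized CCA eigenproblem is posed on n × n Gram matrices, so its size is governed by the number of samples rather than the number of variables. Below is a minimal numpy/scipy sketch; the regularization values are placeholders, whereas the paper chooses them by cross-validation.

```python
# Minimal sketch of regularized CCA in its dual (linear-kernel) form: the
# generalized eigenproblem is built from n x n Gram matrices, so the cost
# scales with the number of samples, not the number of variables.
import numpy as np
from scipy.linalg import eigh

def regularized_dual_cca(X, Y, rx=0.1, ry=0.1, k=2):
    n = X.shape[0]
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Kx, Ky = X @ X.T, Y @ Y.T                      # linear Gram matrices (n x n)
    I = np.eye(n)
    A = np.zeros((2 * n, 2 * n))
    B = np.zeros((2 * n, 2 * n))
    A[:n, n:] = Kx @ Ky                            # off-diagonal coupling blocks
    A[n:, :n] = Ky @ Kx
    B[:n, :n] = (Kx + rx * I) @ (Kx + rx * I)      # regularized normalization
    B[n:, n:] = (Ky + ry * I) @ (Ky + ry * I)
    vals, vecs = eigh(A, B)                        # generalized symmetric eigenproblem
    idx = np.argsort(vals)[::-1][:k]
    alpha, beta = vecs[:n, idx], vecs[n:, idx]     # dual coefficients
    return vals[idx], X.T @ alpha, Y.T @ beta      # correlations, primal weights

rng = np.random.default_rng(0)
Z = rng.standard_normal((50, 2))                   # shared latent signal, few samples
X = Z @ rng.standard_normal((2, 2000)) + 0.5 * rng.standard_normal((50, 2000))
Y = Z @ rng.standard_normal((2, 3000)) + 0.5 * rng.standard_normal((50, 3000))
corrs, Wx, Wy = regularized_dual_cca(X, Y)
print("leading canonical correlations:", np.round(corrs, 3))
```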
MONOMIALS AND BASIN CYLINDERS FOR NETWORK DYNAMICS.
Austin, Daniel; Dinwoodie, Ian H
We describe methods to identify cylinder sets inside a basin of attraction for Boolean dynamics of biological networks. Such sets are used for designing regulatory interventions that make the system evolve towards a chosen attractor, for example initiating apoptosis in a cancer cell. We describe two algebraic methods for identifying cylinders inside a basin of attraction, one based on the Groebner fan that finds monomials that define cylinders and the other on primary decomposition. Both methods are applied to current examples of gene networks.
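The cylinder-in-basin notion can be illustrated by brute force on a toy network; the update rules below are hypothetical, and exhaustive enumeration stands in for the Groebner-fan and primary-decomposition machinery of the paper.

```python
# Brute-force check that fixing a subset of coordinates (a cylinder set)
# keeps every completion inside the basin of a chosen attractor.
from itertools import product

def step(s):
    x, y, z = s
    return (y, x, x & y)              # hypothetical Boolean update rules

def attractor_of(s):
    seen = []
    while s not in seen:
        seen.append(s)
        s = step(s)
    cyc = seen[seen.index(s):]        # the cycle this trajectory falls into
    return min(cyc)                   # canonical representative of the attractor

basins = {s: attractor_of(s) for s in product((0, 1), repeat=3)}
target = attractor_of((1, 1, 1))      # the attractor we want to steer into

def is_cylinder_in_basin(fixed):      # fixed: dict coordinate -> pinned value
    free = [i for i in range(3) if i not in fixed]
    for bits in product((0, 1), repeat=len(free)):
        s = [0, 0, 0]
        for i, v in fixed.items():
            s[i] = v
        for i, v in zip(free, bits):
            s[i] = v
        if basins[tuple(s)] != target:
            return False
    return True

print("{x=1, y=1} cylinder in basin:", is_cylinder_in_basin({0: 1, 1: 1}))
```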
O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John Bo
2008-10-29
We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024-1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581-590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6 degrees C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, epsilon of 0.21) and an RMSE of 45.1 degrees C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3 degrees C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5 degrees C, R2 of 0.55). However it is much less prone to bias at the extremes of the range of melting points as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors.
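The flavor of such an ant-colony search can be conveyed with a much-simplified sketch: per-descriptor pheromones bias which features each ant includes, and pheromones are reinforced toward the best cross-validated subset. The version below uses ridge regression on synthetic data and omits WAAC's winnowing procedure and simultaneous hyperparameter tuning, so it is an illustration rather than a reimplementation.

```python
# Simplified ant-colony feature selection: pheromones are per-descriptor
# inclusion probabilities, evaporated each generation and reinforced toward
# the best cross-validated subset found so far.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p, informative = 200, 40, 5
X = rng.standard_normal((n, p))
y = X[:, :informative] @ rng.standard_normal(informative) \
    + 0.3 * rng.standard_normal(n)

tau = np.full(p, 0.5)                       # per-descriptor inclusion pheromone
best_mask, best_score = np.ones(p, dtype=bool), -np.inf
for generation in range(30):
    for ant in range(20):
        mask = rng.random(p) < tau          # each ant samples a feature subset
        if not mask.any():
            continue
        score = cross_val_score(Ridge(alpha=1.0), X[:, mask], y, cv=5).mean()
        if score > best_score:
            best_mask, best_score = mask, score
    tau = 0.9 * tau + 0.1 * best_mask       # evaporate, reinforce the best subset

print("selected descriptors:", np.flatnonzero(best_mask),
      " CV R^2:", round(best_score, 3))
```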
Effective theories of universal theories
Wells, James D.; Zhang, Zhengkang
2016-01-20
It is well-known but sometimes overlooked that constraints on the oblique parameters (most notably the S and T parameters) are generally speaking only applicable to a special class of new physics scenarios known as universal theories. The oblique parameters should not be associated with Wilson coefficients in a particular operator basis in the effective field theory (EFT) framework, unless restrictions have been imposed on the EFT so that it describes universal theories. Here, we work out these restrictions, and present a detailed EFT analysis of universal theories. We find that at the dimension-6 level, universal theories are completely characterized by 16 parameters. They are conveniently chosen to be: 5 oblique parameters that agree with the commonly-adopted ones, 4 anomalous triple-gauge couplings, 3 rescaling factors for the h^3, hff, hVV vertices, 3 parameters for hVV vertices absent in the Standard Model, and 1 four-fermion coupling of order y_f^2. Furthermore, all these parameters are defined in an unambiguous and basis-independent way, allowing for consistent constraints on the universal theories parameter space from precision electroweak and Higgs data.
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
NASA Astrophysics Data System (ADS)
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships encoded as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure for the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
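The abstract does not spell out the exact form of the closeness measure, but a plausible reading is a weighted squared distance between a candidate Pareto parameter set and the parameter sets already chosen for neighboring basins, with the a-priori-based similarity weights described above. A minimal Python sketch under that assumption (all names and numbers are illustrative):

    import numpy as np

    def closeness(candidate, neighbor_sets, weights):
        # Weighted squared distance between a candidate Pareto parameter set
        # and the chosen sets of neighboring basins. Parameters are assumed
        # normalised (e.g., as multipliers of their a priori values) so they
        # are comparable across basins; larger weights mark parameters
        # expected to be similar between basins.
        candidate = np.asarray(candidate)
        return sum(np.sum(weights * (candidate - np.asarray(nb)) ** 2)
                   for nb in neighbor_sets)

    def select_parameter_set(pareto_sets, neighbor_sets, weights):
        # Pick the Pareto solution closest (in the weighted sense) to the
        # neighbors' parameter sets.
        scores = [closeness(p, neighbor_sets, weights) for p in pareto_sets]
        return pareto_sets[int(np.argmin(scores))]

    # Toy usage: 3 Pareto solutions with 4 parameters, two neighboring basins.
    pareto = [[1.1, 0.9, 1.3, 0.7], [1.0, 1.0, 0.8, 1.2], [0.95, 1.05, 1.0, 1.0]]
    neighbors = [[1.0, 1.0, 1.0, 1.1], [0.9, 1.1, 1.0, 0.9]]
    w = np.array([1.0, 1.0, 0.5, 0.2])  # higher weight = expected more similar
    print(select_parameter_set(pareto, neighbors, w))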
NASA Technical Reports Server (NTRS)
Chuang, Shun-Lien
1987-01-01
Two sets of coupled-mode equations for multiwaveguide systems are derived using a generalized reciprocity relation; one set for a lossless system, and the other for a general lossy or lossless system. The second set of equations also reduces to those of the first set in the lossless case under the condition that the transverse field components are chosen to be real. Analytical relations between the coupling coefficients are shown and applied to the coupled-mode equations. It is shown analytically that these results satisfy exactly both the reciprocity theorem and power conservation. New orthogonality relations between the supermodes are derived in matrix form, with the overlap integrals taken into account.
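For orientation, the standard two-waveguide form of such equations is sketched below in textbook notation; this is the generic structure, not the paper's exact derivation or its overlap-integral corrections.

    % Generic two-waveguide coupled-mode equations (a_m: modal amplitudes,
    % beta_m: propagation constants, kappa_mn: coupling coefficients):
    \begin{align*}
      \frac{da_1}{dz} &= i\beta_1 a_1 + i\kappa_{12}\, a_2,\\
      \frac{da_2}{dz} &= i\beta_2 a_2 + i\kappa_{21}\, a_1.
    \end{align*}
    % In the lossless, power-orthogonal case, power conservation
    % d(|a_1|^2 + |a_2|^2)/dz = 0 forces kappa_21 = kappa_12^*;
    % nonzero overlap integrals modify this relation, which is the regime
    % the paper's generalized relations address.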
NASA Astrophysics Data System (ADS)
Suwardi; Setiawan, J.; Susilo, J.
2017-01-01
The first short fuel pin containing a natural UO2 pellet in Zry-4 cladding has been prepared and is planned to be tested under power-ramp irradiation. The irradiation test should be designed so that the experiment can be performed safely while yielding maximum information on the performance aspects of the design and manufacturing. Performance analysis of the fuel specimen showed that it is not suitable for power-ramp testing as designed, and an enlargement of the pellet diameter by 0.20 mm has therefore been proposed. The present work evaluates the modified design with respect to an important aspect, the isotopic Pu distribution during the irradiation test, because the Pu isotopes generated in natural UO2 fuel contribute relatively more power than they do in enriched UO2 fuel. The axial neutron flux profile has been chosen from both experimental measurements and model calculations, and the power-ramp parameters have been obtained from statistical experimental data. A simplified, typical base-load commercial PHWR linear heat rate (LHR) history has been chosen to determine the minimum irradiation time before the ramp test can be performed. The design data and the MATPRO-XI material property models have been chosen. The axial neutron flux profile has been represented by discretizing the pin into 5 axial slices, and the Pu distribution of slice 4, the slice with the highest power rating, has been chosen for evaluation. The radial discretization of the pellet and cladding and the numerical parameters follow the default best practice of TU. The results show that 239Pu builds up rapidly: in slice 4, just above the median slice and with the maximum burnup, it reaches nearly 90% of its maximum value at about 6000 h, with a peak of 0.8 at.% Pu/HM at 22000 h, which is higher than the initial 235U content. 240Pu, 241Pu, and 242Pu each grow more slowly, ending at 0.4%, 0.2%, and 0.18%, respectively. These results can be used to verify other aspects of fuel behavior in the modeling results, and as a guide and comparison for future post-irradiation examination of the Pu isotope distribution.
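The saturation behavior described for 239Pu follows from the one-group depletion balance in which capture on 238U feeds 239Pu while absorption removes it. The Python sketch below integrates that balance with assumed, purely illustrative one-group cross-sections and flux; it is not the TU calculation reported above, and it treats the intermediate 239U/239Np decays as instantaneous.

    # Minimal sketch of 239Pu build-up toward saturation under constant flux:
    #   dN239/dt = sigma_c238 * phi * N238 - sigma_a239 * phi * N239
    phi = 3e13             # neutron flux, n/cm^2/s (assumed)
    sigma_c238 = 2.7e-24   # 238U capture cross-section, cm^2 (assumed)
    sigma_a239 = 1.0e-21   # 239Pu absorption cross-section, cm^2 (assumed)

    N238 = 1.0             # normalised initial 238U concentration
    N239 = 0.0
    dt = 3600.0            # 1 h time step
    hours = 25000
    for _ in range(hours):
        dN239 = (sigma_c238 * phi * N238 - sigma_a239 * phi * N239) * dt
        N238 -= sigma_c238 * phi * N238 * dt
        N239 += dN239

    # 239Pu approaches the quasi-equilibrium level (sigma_c238/sigma_a239)*N238,
    # i.e. most of the rise happens early, consistent with the saturation
    # behavior described in the abstract.
    print(f"N239/N238(0) after {hours} h: {N239:.4f}")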
An English language interface for constrained domains
NASA Technical Reports Server (NTRS)
Page, Brenda J.
1989-01-01
The Multi-Satellite Operations Control Center (MSOCC) Jargon Interpreter (MJI) demonstrates an English language interface for a constrained domain. A constrained domain is defined as one with a small and well delineated set of actions and objects. The set of actions chosen for the MJI is from the domain of MSOCC Applications Executive (MAE) Systems Test and Operations Language (STOL) directives and contains directives for signing a cathode ray tube (CRT) on or off, calling up or clearing a display page, starting or stopping a procedure, and controlling history recording. The set of objects chosen consists of CRTs, display pages, STOL procedures, and history files. Translation from English sentences to STOL directives is done in two phases. In the first phase, an augmented transition net (ATN) parser and dictionary are used for determining grammatically correct parsings of input sentences. In the second phase, grammatically typed sentences are submitted to a forward-chaining rule-based system for interpretation and translation into equivalent MAE STOL directives. Tests of the MJI show that it is able to translate individual, clearly stated sentences into the subset of directives selected for the prototype. This approach to an English language interface may be used for similarly constrained situations by modifying the MJI's dictionary and rules to reflect the change of domain.
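The two-phase structure can be illustrated with a deliberately toy Python sketch: a pattern match stands in for the ATN parse, and a rule table maps the parse to a directive. The directive strings below are hypothetical placeholders, not actual MAE STOL syntax.

    import re

    # Rule table: (parse pattern, rule firing on a successful match).
    RULES = [
        (r"sign (?:the )?crt (\d+) on", lambda m: f"CRT {m.group(1)} ON"),
        (r"sign (?:the )?crt (\d+) off", lambda m: f"CRT {m.group(1)} OFF"),
        (r"call up (?:the )?(\w+) page", lambda m: f"PAGE {m.group(1).upper()}"),
        (r"start (?:the )?procedure (\w+)", lambda m: f"START {m.group(1).upper()}"),
    ]

    def translate(sentence):
        # Phase 1 stand-in: normalise and pattern-match the sentence.
        # Phase 2 stand-in: fire the first rule whose pattern matches.
        s = sentence.lower().strip(". ")
        for pattern, action in RULES:
            m = re.search(pattern, s)
            if m:
                return action(m)
        return None  # sentence falls outside the constrained domain

    print(translate("Please sign CRT 3 on."))        # -> CRT 3 ON
    print(translate("Call up the telemetry page."))  # -> PAGE TELEMETRY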