NASA Technical Reports Server (NTRS)
Watson, Andrew I.; Holle, Ronald L.; Lopez, Raul E.; Nicholson, James R.
1991-01-01
Since 1986, USAF forecasters at NASA-Kennedy have had available a surface wind convergence technique for use during periods of convective development. In Florida during the summer, most of the thunderstorm development is forced by boundary layer processes. The basic premise is that the life cycle of convection is reflected in the surface wind field beneath these storms. Therefore the monitoring of the local surface divergence and/or convergence fields can be used to determine timing, location, longevity, and the lightning hazards which accompany these thunderstorms. This study evaluates four years of monitoring thunderstorm development using surface wind convergence, particularly the average over the area. Cloud-to-ground (CG) lightning is related in time and space with surface convergence for 346 days during the summers of 1987 through 1990 over the expanded wind network at KSC. The relationships are subdivided according to low level wind flow and midlevel moisture patterns. Results show a one in three chance of CG lightning when a convergence event is identified. However, when there is no convergence, the chance of CG lightning is negligible.
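A sketch of the quantity being monitored above: the horizontal divergence of the surface wind field, averaged over the network area. The grid, spacing, and analytic test flow below are illustrative assumptions, not the actual KSC wind network configuration:

```python
import numpy as np

# Illustrative 2-D wind field on a uniform grid (dx = dy = 1 km).
# u = -aX, v = -aY is a purely convergent flow with constant divergence -2a.
dx = dy = 1000.0  # grid spacing in meters
x = np.arange(0.0, 20000.0, dx)
y = np.arange(0.0, 20000.0, dy)
X, Y = np.meshgrid(x, y, indexing="xy")
u = -1e-4 * X  # eastward wind component (m/s)
v = -1e-4 * Y  # northward wind component (m/s)

# Horizontal divergence: du/dx + dv/dy (negative values = convergence).
dudx = np.gradient(u, dx, axis=1)
dvdy = np.gradient(v, dy, axis=0)
divergence = dudx + dvdy

# Area-averaged divergence, the quantity monitored in the study.
print(np.mean(divergence))  # ≈ -2e-4 s^-1 for this test flow
```

With real tower data, u and v would come from interpolated anemometer observations; a sufficiently negative area-mean value would flag a convergence event.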
João, Thaís Moreira São; Rodrigues, Roberta Cunha Matheus; Gallani, Maria Cecília Bueno Jayme; Miura, Cinthya Tamie Passos; Domingues, Gabriela de Barros Leite; Amireault, Steve; Godin, Gaston
2015-09-01
This study provides evidence of construct validity for the Brazilian version of the Godin-Shephard Leisure-Time Physical Activity Questionnaire (GSLTPAQ), a 1-item instrument, among 236 participants referred for cardiopulmonary exercise testing. The Baecke Habitual Physical Activity Questionnaire (Baecke-HPA) was used to evaluate convergent and divergent validity, and the self-reported measure of walking (QCAF) was used to evaluate convergent validity. Cardiorespiratory fitness, assessed by the Veterans Specific Activity Questionnaire (VSAQ) and by peak measured (VO2peak) and maximum predicted (VO2pred) oxygen uptake, was also used to assess convergent validity. Partial adjusted correlation coefficients between the GSLTPAQ and the Baecke-HPA, QCAF, VO2pred, and VSAQ provided evidence for convergent validity, while divergent validity was supported by the absence of correlation between the GSLTPAQ and the Occupational Physical Activity domain of the Baecke-HPA. The GSLTPAQ presents level 3 evidence of construct validity and may be useful for assessing leisure-time physical activity among patients with cardiovascular disease and healthy individuals.
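The convergent-validity evidence above rests on partial correlation coefficients. A minimal sketch of the standard first-order partial correlation formula; the r values are invented illustrations, not the study's data:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical zero-order correlations between two activity scores (x, y)
# and a shared confounder z (e.g., age).
r = partial_corr(r_xy=0.6, r_xz=0.5, r_yz=0.5)
print(round(r, 4))  # → 0.4667
```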
Short-term Time Step Convergence in a Climate Model
Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...
2015-02-11
A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4, in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
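The convergence order reported above is obtained by fitting solution differences against time step size on a log-log plot. A minimal sketch of that fit; the synthetic power-law errors stand in for actual model output:

```python
import numpy as np

# Synthetic RMS "errors" following e(dt) = C * dt**p, mimicking a
# self-convergence test against a small-dt reference run.
dts = np.array([1800.0, 900.0, 450.0, 225.0, 112.5])
p_true = 0.4          # the slow rate reported for the full model
errors = 0.01 * dts**p_true

# The observed convergence order is the slope in log-log space.
p_est, _ = np.polyfit(np.log(dts), np.log(errors), 1)
print(round(p_est, 3))  # → 0.4
```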
Objective Assessment of Vergence after Treatment of Concussion-Related CI: A Pilot Study
Scheiman, Mitchell; Talasan, Henry; Mitchell, Gladys L; Alvarez, Tara L.
2016-01-01
Purpose To evaluate changes in objective measures of disparity vergence after office-based vision therapy (OBVT) for concussion-related convergence insufficiency (CI), and determine the feasibility of using this objective assessment as an outcome measure in a clinical trial. Methods This was a prospective, observational trial. All participants were treated with weekly OBVT with home reinforcement. Participants included two adolescents and three young adults with concussion-related, symptomatic CI. The primary outcome measure was average peak velocity for 4-degree symmetrical convergence steps. Other objective outcome measures of disparity vergence included time to peak velocity, latency, accuracy, settling time, and main sequence. We also evaluated saccadic eye movements using the same outcome measures. Changes in clinical measures (near point of convergence, positive fusional vergence at near, Convergence Insufficiency Symptom Survey (CISS) score) were evaluated. Results There were statistically significant and clinically meaningful changes in all clinical measures for convergence. Four of the five subjects met clinical success criteria. For the objective measures, we found a statistically significant increase in peak velocity, response accuracy to 4° symmetrical convergence and divergence step stimuli and the main sequence ratio for convergence step stimuli. Objective saccadic eye movements (5° and 10°) appeared normal pre-OBVT, and did not show any significant change after treatment. Conclusions This is the first report of the use of objective measures of disparity vergence as outcome measures for concussion-related convergence insufficiency. These measures provide additional information that is not accessible with clinical tests about underlying physiological mechanisms leading to changes in clinical findings and symptoms. 
The study results also demonstrate that patients with concussion can tolerate the visual demands (over 200 vergence and versional eye movements) during the 25-minute testing time and suggest that these measures could be used in a large-scale randomized clinical trial of concussion-related CI as outcome measures.
Cohen, Yuval; Segal, Ori; Barkana, Yaniv; Lederman, Robert; Zadok, David; Pras, Eran; Morad, Yair
2010-01-01
The aim of this study was to evaluate the relationship between asthenopic symptoms, convergence amplitude, reading comprehension, and saccadic eye movements in children 8 to 10 years of age. Sixty-six children aged 8 to 10 years were examined. Convergence was evaluated using (1) a nonaccommodative target at near and distance, (2) a near computerized stereogram, and (3) measurement of the near point of convergence (NPC). Reading ability was examined by (1) a reading comprehension test in which children had to answer questions regarding a paragraph they read and (2) the Developmental Eye Movement Test (DEM), which evaluates saccadic speed and accuracy. Asthenopic symptoms were scored by an Asthenopic Symptoms Questionnaire. The asthenopic symptom score was correlated with the near point of convergence (r = -0.4; P = 0.003), convergence on a near stereogram (r = 0.38; P = 0.01), and convergence on a distant light (r = 0.27; P = 0.04), but not with convergence on a near nonaccommodative target (r = 0.07; P = 0.6). The DEM ratio score was correlated with the asthenopic symptom score (r = -0.32; P = 0.01), but the reading comprehension test score was not (r = 0.12; P = 0.4). There was, however, a correlation between the time to complete the reading comprehension test and the asthenopic symptom score (r = 0.39; P = 0.006). In summary, the asthenopic symptom score was correlated with convergence amplitude measured while accommodation was controlled, and with the ratio score calculated from the DEM results. Further study is needed to evaluate the usefulness of integrating a symptom survey with objective reading examinations as a screening tool for the diagnosis of convergence insufficiency.
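All of the r values above are Pearson product-moment correlations. A minimal sketch of that computation; the paired scores are invented illustrations whose negative r merely mirrors the sign of the NPC result, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired scores: symptom questionnaire vs. near point of
# convergence (cm); higher symptom scores pair with closer NPC values.
symptoms = [12, 18, 9, 22, 15, 7]
npc      = [10,  7, 12,  5,  8, 13]
r_val = pearson_r(symptoms, npc)
print(round(r_val, 2))
```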
Zhang, Hongping; Gao, Zhouzheng; Ge, Maorong; Niu, Xiaoji; Huang, Ling; Tu, Rui; Li, Xingxing
2013-11-18
Precise Point Positioning (PPP) has become a very active topic in GNSS research and applications. However, it usually takes several tens of minutes to obtain positions with better than 10 cm accuracy. This prevents PPP from being widely used in real-time kinematic positioning services; therefore, a large effort has been made to tackle the convergence problem. One of the recent approaches is ionospheric-delay-constrained precise point positioning (IC-PPP), which uses the spatial and temporal characteristics of ionospheric delays as well as delays from an a priori model. In this paper, the impact of the quality of ionospheric models on the convergence of IC-PPP is evaluated using the IGS global ionospheric map (GIM), updated every two hours, and a regional satellite-specific correction model. Furthermore, the effect of the receiver differential code bias (DCB) is investigated by comparing the convergence time for IC-PPP with and without estimation of the DCB parameter. The results of processing a large amount of data show, on the one hand, that the quality of the a priori ionospheric delays plays a very important role in IC-PPP convergence. Generally, regional dense GNSS networks can provide more precise ionospheric delays than GIM and can consequently reduce the convergence time. On the other hand, ignoring the receiver DCB may considerably extend the convergence, and the larger the DCB, the longer the convergence time. Estimating receiver DCB in IC-PPP is a proper way to overcome this problem. Therefore, current IC-PPP should be enhanced by estimating receiver DCB and employing regional satellite-specific ionospheric correction models in order to speed up its convergence for more practical applications.
NASA Technical Reports Server (NTRS)
Molusis, J. A.; Mookerjee, P.; Bar-Shalom, Y.
1983-01-01
The effect of nonlinearity on convergence of the local linear and global linear adaptive controllers is evaluated. A nonlinear helicopter vibration model is selected for the evaluation which has sufficient nonlinearity, including multiple minima, to assess the vibration reduction capability of the adaptive controllers. The adaptive control algorithms are based upon a linear transfer matrix assumption, and the presence of nonlinearity has a significant effect on algorithm behavior. Simulation results are presented which demonstrate the importance of the caution property in the global linear controller. Caution is represented by a time-varying rate weighting term in the local linear controller, and this improves the algorithm convergence. Nonlinearity in some cases causes Kalman filter divergence. Two forms of the Kalman filter covariance equation are investigated.
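The abstract does not say which two covariance forms were compared; the canonical pair is the standard update and the Joseph-stabilized update, sketched below as a generic illustration (the 2x2 numbers are arbitrary):

```python
import numpy as np

# Prior covariance, measurement matrix, and measurement noise (illustrative).
P = np.array([[2.0, 0.3], [0.3, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])

# Kalman gain.
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)

I = np.eye(2)
# Standard form: cheap, but can lose symmetry/positive-definiteness
# under round-off or with a suboptimal gain.
P_std = (I - K @ H) @ P
# Joseph form: symmetric positive semi-definite for ANY gain.
P_jos = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T

# With the optimal gain the two forms agree analytically.
print(np.allclose(P_std, P_jos))  # → True
```

The Joseph form's robustness to gain perturbations is one reason it is often preferred when divergence is a concern, as in the nonlinear cases above.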
Marginal Fit of Metal-Ceramic Copings: Effect of Luting Cements and Tooth Preparation Design.
de Almeida, Juliana Gomes Dos Santos Paes; Guedes, Carlos Gramani; Abi-Rached, Filipe de Oliveira; Trindade, Flávia Zardo; Fonseca, Renata Garcia
2017-12-22
To evaluate the effect of the triad of finish line design, axial wall convergence angle, and luting cement on the marginal fit of metal copings used in metal-ceramic crowns. Schematic dies and their respective copings were cast in NiCr alloy. The dies exhibited the following finish line/convergence angle combinations: sloping shoulder/6°, sloping shoulder/20°, shoulder/6°, shoulder/20°. Marginal fit was evaluated under a stereomicroscope, before and after cementation. Copings were air-abraded with 50 μm Al2O3 particles and cemented with Cimento de Zinco, RelyX U100, or Panavia F cements (n = 10/group). Data were square-root transformed and analyzed by a 3-way factorial random-effects model and Tukey's post hoc test (α = 0.05). Statistical analysis showed significance for the interactions of finish line and convergence angle (p < 0.05), convergence angle and time (p < 0.001), and luting cement and time (p < 0.001). Sloping shoulder/20° produced the highest marginal discrepancy compared to the other finish line/convergence angle combinations, which were statistically similar to one another. For both convergence angles and for all luting cements, the marginal discrepancy was significantly higher after cementation. Before and after cementation, 6° provided better marginal fit than 20°. After cementation, Panavia F produced higher marginal discrepancy than Cimento de Zinco. A lower convergence angle combined with a shoulder finish line and a low-consistency luting cement is preferable for cementing metal copings.
da Fonseca Neto, João Viana; Abreu, Ivanildo Silva; da Silva, Fábio Nogueira
2010-04-01
Toward the synthesis of state-space controllers, a neural-genetic model based on the linear quadratic regulator design for the eigenstructure assignment of multivariable dynamic systems is presented. The neural-genetic model represents a fusion of a genetic algorithm and a recurrent neural network (RNN) to perform the selection of the weighting matrices and the algebraic Riccati equation solution, respectively. A fourth-order electric circuit model is used to evaluate the convergence of the computational intelligence paradigms and the control design method performance. The genetic search convergence evaluation is performed in terms of the fitness function statistics and the RNN convergence, which is evaluated by landscapes of the energy and norm, as a function of the parameter deviations. The control problem solution is evaluated in the time and frequency domains by the impulse response, singular values, and modal analysis.
Peng, Xiao; Wu, Huaiqin; Song, Ka; Shi, Jiaxin
2017-10-01
This paper is concerned with the global Mittag-Leffler synchronization and the finite-time synchronization of fractional-order neural networks (FNNs) with discontinuous activations and time delays. Firstly, the properties of Mittag-Leffler convergence and finite-time convergence, which play a critical role in the investigation of the global synchronization of FNNs, are developed. Secondly, a novel state-feedback controller, which includes time delays and discontinuous factors, is designed to realize the synchronization goal. By applying fractional differential inclusion theory, inequality analysis techniques, and the proposed convergence properties, sufficient conditions to achieve global Mittag-Leffler synchronization and finite-time synchronization are derived in terms of linear matrix inequalities (LMIs). In addition, the upper bound of the settling time for finite-time synchronization is explicitly evaluated. Finally, two examples are given to demonstrate the validity of the proposed design method and theoretical results.
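Mittag-Leffler convergence is defined through the one-parameter Mittag-Leffler function E_α(z) = Σ_{k≥0} z^k / Γ(αk + 1), which generalizes the exponential to fractional order. A minimal series evaluation; the truncation length is an arbitrary choice adequate only for modest |z|:

```python
import math

def mittag_leffler(alpha, z, terms=100):
    """One-parameter Mittag-Leffler function via its power series.
    Adequate for modest |z|; dedicated algorithms are needed otherwise."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

# Sanity checks: E_1(z) = exp(z), and for 0 < alpha < 1 with a negative
# argument the decay is slower than exponential (the hallmark of
# Mittag-Leffler, rather than exponential, stability).
print(abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12)  # → True
print(mittag_leffler(0.5, -1.0))  # slower-than-exponential decay factor
```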
Short‐term time step convergence in a climate model
Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane
2015-01-01
This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral-element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process-coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid-scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full-physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4, considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid-scale physical parameterizations, the stratiform cloud schemes are associated with the largest time-stepping errors and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time-stepping errors and identify the related model sensitivities.
The trajectory of scientific discovery: concept co-occurrence and converging semantic distance.
Cohen, Trevor; Schvaneveldt, Roger W
2010-01-01
The paradigm of literature-based knowledge discovery originated by Swanson involves finding meaningful associations between terms or concepts that have not occurred together in any previously published document. While several automated approaches have been applied to this problem, these generally evaluate the literature at a point in time, and do not evaluate the role of change over time in distributional statistics as an indicator of meaningful implicit associations. To address this issue, we develop and evaluate Symmetric Random Indexing (SRI), a novel variant of the Random Indexing (RI) approach that is able to measure implicit association over time. SRI is found to compare favorably to existing RI variants in the prediction of future direct co-occurrence. Summary statistics over several experiments suggest a trend of converging semantic distance prior to the co-occurrence of key terms for two seminal historical literature-based discoveries.
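Random Indexing, the base method extended here, can be sketched in a few lines: each term gets a fixed sparse random index vector, and a term's context vector accumulates the index vectors of co-occurring terms; in a symmetric variant the update runs in both directions. A toy illustration assuming Swanson's fish-oil/Raynaud example (the corpus counts, dimensionality, and seed are arbitrary inventions):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, nnz = 512, 10  # vector dimensionality and nonzeros per index vector

def index_vector():
    """Sparse ternary random index vector with nnz nonzero entries."""
    v = np.zeros(dim)
    pos = rng.choice(dim, size=nnz, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=nnz)
    return v

terms = ["fish_oil", "raynaud", "blood_viscosity", "unrelated", "noise"]
index = {t: index_vector() for t in terms}
context = {t: np.zeros(dim) for t in terms}

def cooccur(a, b, count=1):
    """Symmetric update: each term absorbs the other's index vector."""
    context[a] += count * index[b]
    context[b] += count * index[a]

# Toy co-occurrences: the two key terms never co-occur directly, but both
# co-occur with the shared bridge term.
cooccur("fish_oil", "blood_viscosity", 20)
cooccur("raynaud", "blood_viscosity", 20)
cooccur("unrelated", "noise", 1)

def cosine(a, b):
    return float(context[a] @ context[b] /
                 (np.linalg.norm(context[a]) * np.linalg.norm(context[b])))

# The implicit (never directly co-occurring) pair sits closer in the
# space than an unrelated pair -- the converging-distance signal.
print(cosine("fish_oil", "raynaud") > cosine("fish_oil", "unrelated"))
```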
Li, Shuai; Li, Yangming; Wang, Zheng
2013-03-01
This paper presents a class of recurrent neural networks to solve quadratic programming problems. Different from most existing recurrent neural networks for solving quadratic programming problems, the proposed neural network model converges in finite time and the activation function is not required to be a hard-limiting function for finite convergence time. The stability, finite-time convergence property and the optimality of the proposed neural network for solving the original quadratic programming problem are proven in theory. Extensive simulations are performed to evaluate the performance of the neural network with different parameters. In addition, the proposed neural network is applied to solving the k-winner-take-all (k-WTA) problem. Both theoretical analysis and numerical simulations validate the effectiveness of our method for solving the k-WTA problem.
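Finite-time (as opposed to merely asymptotic) convergence, the property claimed above, can be illustrated with the classic scalar dynamics ẋ = -sign(x)|x|^α, 0 < α < 1, whose settling time |x(0)|^(1-α)/(1-α) is finite. A minimal Euler simulation of that property; this is not the paper's network, only the convergence notion it relies on:

```python
import math

def simulate(x0, alpha=0.5, dt=1e-3, t_end=5.0):
    """Euler-integrate dx/dt = -sign(x) * |x|**alpha from x0."""
    x, t = x0, 0.0
    while t < t_end:
        x -= dt * math.copysign(abs(x) ** alpha, x)
        t += dt
    return x

x0, alpha = 4.0, 0.5
t_settle = abs(x0) ** (1 - alpha) / (1 - alpha)  # analytic settling time: 4.0
x_final = simulate(x0, alpha, t_end=t_settle + 0.1)
print(abs(x_final) < 1e-2)  # → True: the state has settled at (near) zero
```

An exponentially stable system (α = 1) would only approach zero asymptotically; here the trajectory reaches it in finite time.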
NASA Astrophysics Data System (ADS)
Zheng, Fu; Lou, Yidong; Gu, Shengfeng; Gong, Xiaopeng; Shi, Chuang
2017-10-01
During past decades, precise point positioning (PPP) has been proven to be a well-known positioning technique for centimeter- or decimeter-level accuracy. However, it needs a long convergence time to achieve high-accuracy positioning, which limits the prospects of PPP, especially in real-time applications. It is expected that the PPP convergence time can be reduced by introducing high-quality external information, such as ionospheric or tropospheric corrections. In this study, several methods for modeling tropospheric wet delays over wide areas are investigated. A new, improved model is developed, applicable to real-time applications in China. Based on the GPT2w model, a modified parameter for the exponential decay of zenith wet delay with respect to height is introduced in the modeling of the real-time tropospheric delay. The accuracy of this tropospheric model and the GPT2w model in different seasons is evaluated with cross-validation; the root mean square of the zenith troposphere delay (ZTD) is 1.2 and 3.6 cm on average, respectively. Moreover, this new model proves to be better than tropospheric modeling based on water-vapor scale height; it can accurately express tropospheric delays up to 10 km altitude, which potentially benefits many real-time applications. With the high-accuracy ZTD model, the augmented PPP convergence performance for the BeiDou navigation satellite system (BDS) and GPS is evaluated. It shows that the contribution of the high-quality ZTD model to PPP convergence performance is related to the constellation geometry. As the BDS constellation geometry is poorer than that of GPS, the improvement for BDS PPP is more significant than that for GPS PPP. Compared with standard real-time PPP, the convergence time is reduced by 2-7% and 20-50% for the augmented BDS PPP, while GPS PPP improves only about 6% and 18% (on average), in the horizontal and vertical directions, respectively.
When GPS and BDS are combined, the geometry is greatly improved and is already good enough to yield a reliable PPP solution, so the augmented PPP improves only marginally compared with standard PPP.
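The model's key ingredient, an exponential decay of zenith wet delay with height, can be sketched as follows; the scale height, surface delay, and site heights are illustrative numbers, not the fitted model parameters:

```python
import math

def zenith_wet_delay(h, zwd0=0.25, h0=0.0, scale_height=2000.0):
    """Zenith wet delay (m) at height h (m), decaying exponentially
    from the reference value zwd0 (m) at reference height h0 (m)."""
    return zwd0 * math.exp(-(h - h0) / scale_height)

# Wet delay shrinks rapidly with altitude; near 10 km it is almost gone,
# consistent with a model intended to remain valid up to ~10 km.
print(round(zenith_wet_delay(0.0), 3))      # → 0.25
print(round(zenith_wet_delay(10000.0), 4))  # ≈ 0.0017
```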
Cyclic flank-vent and central-vent eruption patterns
NASA Astrophysics Data System (ADS)
Takada, Akira
Many basaltic and andesitic polygenetic volcanoes have cyclic eruptive activity that alternates between a phase dominated by flank eruptions and a phase dominated by eruptions from a central vent. This paper proposes the use of time-series diagrams of eruption sites on each polygenetic volcano and intrusion distances of dikes to evaluate volcano growth, to qualitatively reconstruct the stress history within the volcano, and to predict the next eruption site. In these diagrams the position of an eruption site is represented by the distance from the center of the volcano and the clockwise azimuth from north. Time-series diagrams of Mauna Loa, Kilauea, Kliuchevskoi, Etna, Sakurajima, Fuji, Izu-Oshima, and Hekla volcanoes indicate that fissure eruption sites of these volcanoes migrated toward the center of the volcano linearly, radially, or spirally with damped oscillation, occasionally forming a hierarchy in convergence-related features. At Krafla, terminations of dikes also migrated toward the center of the volcano with time. Eruption sites of Piton de la Fournaise did not converge but oscillated around the center. After the convergence of eruption sites with time, the central eruption phase is started. The intrusion sequence of dikes is modeled, applying crack interaction theory. Variation in convergence patterns is governed by the regional stress and the magma supply. Under the condition that a balance between regional extension and magma supply is maintained, the central vent convergence time during the flank eruption phase is 1-10 years, whereas the flank vent recurrence time during the central eruption phase is greater than 100 years owing to an inferred decrease in magma supply. Under the condition that magma supply prevails over regional extension, the central vent convergence time increases, whereas the flank vent recurrence time decreases owing to inferred stress relaxation. 
Earthquakes of M>=6 near a volcano during the flank eruption phase extend the central vent convergence time. Earthquakes during the central eruption phase promote recurrence of flank eruptions. Asymmetric distribution of eruption sites around the flanks of a volcano can be caused by local stress sources such as an adjacent volcano.
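The time-series diagrams described above reduce each eruption site to a distance from the volcano's center and a clockwise azimuth from north. A minimal sketch of that coordinate transform; the site coordinates are invented examples:

```python
import math

def site_polar(east, north):
    """Distance (same units as input) and clockwise azimuth from north
    (degrees) of an eruption site relative to the volcano's center."""
    distance = math.hypot(east, north)
    azimuth = math.degrees(math.atan2(east, north)) % 360.0
    return distance, azimuth

# A fissure vent 3 km due east of the center, and one 2 km due south.
print(site_polar(3000.0, 0.0))   # ≈ (3000.0, 90.0)
print(site_polar(0.0, -2000.0))  # ≈ (2000.0, 180.0)
```

Plotting (distance, azimuth) against eruption date gives the migration patterns (linear, radial, or spiral convergence toward the center) discussed above.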
NASA Astrophysics Data System (ADS)
Steiman-Cameron, Thomas Y.; Durisen, Richard H.; Boley, Aaron C.; Michael, Scott; McConnell, Caitlin R.
2013-05-01
We conduct a convergence study of a protoplanetary disk subject to gravitational instabilities (GIs) at a time of approximate balance between heating produced by the GIs and radiative cooling governed by realistic dust opacities. We examine cooling times, characterize GI-driven spiral waves and their resultant gravitational torques, and evaluate how accurately mass transport can be represented by an α-disk formulation. Four simulations, identical except for azimuthal resolution, are conducted with a grid-based three-dimensional hydrodynamics code. There are two regions in which behaviors differ as resolution increases. The inner region, which contains 75% of the disk mass and is optically thick, has long cooling times and is well converged in terms of various measures of structure and mass transport for the three highest resolutions. The longest cooling times coincide with radii where the Toomre Q has its minimum value. Torques are dominated in this region by two- and three-armed spirals. The effective α arising from gravitational stresses is typically a few × 10^-3 and is only roughly consistent with local balance of heating and cooling when time-averaged over many dynamical times and a wide range of radii. On the other hand, the outer disk region, which is mostly optically thin, has relatively short cooling times and does not show convergence as resolution increases. Treatment of unstable disks with optical depths near unity with realistic radiative transport is a difficult numerical problem requiring further study. We discuss possible implications of our results for numerical convergence of fragmentation criteria in disk simulations.
Accelerating evaluation of converged lattice thermal conductivity
NASA Astrophysics Data System (ADS)
Qin, Guangzhao; Hu, Ming
2018-01-01
High-throughput computational materials design is an emerging area in materials science, which is based on the fast evaluation of physics-related properties. The lattice thermal conductivity (κ) is a key property of materials with enormous implications. However, the high-throughput evaluation of κ remains a challenge due to the large resource costs and time-consuming procedures. In this paper, we propose a concise strategy to efficiently accelerate the evaluation process of obtaining accurate and converged κ. The strategy is in the framework of the phonon Boltzmann transport equation (BTE) coupled with first-principles calculations. Based on analysis of the harmonic interatomic force constants (IFCs), a large enough cutoff radius (rcutoff), a critical parameter involved in calculating the anharmonic IFCs, can be directly determined to obtain satisfactory results. Moreover, we find a simple way to greatly (~10 times) accelerate the computations by quickly reconstructing the anharmonic IFCs in the convergence test of κ with respect to rcutoff, which finally confirms that the chosen rcutoff is appropriate. Two-dimensional graphene and phosphorene along with bulk SnSe are presented to validate our approach, and the long-debated divergence problem of thermal conductivity in low-dimensional systems is studied. The quantitative strategy proposed herein can be a good candidate for fast evaluation of reliable κ and thus provides a useful tool for high-throughput materials screening and design with targeted thermal transport properties.
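The convergence test with respect to rcutoff follows the generic pattern below; the mock kappa() is an entirely invented saturating function standing in for a full anharmonic-IFC + BTE calculation:

```python
import math

def kappa(r_cutoff):
    """Mock thermal-conductivity value that saturates as the cutoff radius
    grows; a stand-in for a real anharmonic-IFC + BTE computation."""
    return 30.0 * (1.0 - math.exp(-r_cutoff / 3.0))

def converged_cutoff(start=2.0, step=1.0, rel_tol=0.01, max_r=30.0):
    """Increase r_cutoff until kappa changes by less than rel_tol."""
    r, k_prev = start, kappa(start)
    while r < max_r:
        r += step
        k = kappa(r)
        if abs(k - k_prev) / abs(k) < rel_tol:
            return r, k
        k_prev = k
    raise RuntimeError("kappa did not converge within max_r")

r_conv, k_conv = converged_cutoff()
print(r_conv, round(k_conv, 2))
```

The paper's acceleration comes from making each kappa() call in this loop cheap by reconstructing, rather than recomputing, the anharmonic IFCs.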
NASA Astrophysics Data System (ADS)
Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2018-06-01
In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient, called the costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, the so-called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. General convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated, and finally we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
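For LQR, the policy-evaluation/improvement loop specializes to iterating the discrete-time Riccati recursion, whose monotone (VI-mode) convergence is the kind of behavior analyzed above. A minimal sketch; the double-integrator system and weights are illustrative assumptions:

```python
import numpy as np

# Discretized double integrator (illustrative system).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Value iteration on the discrete-time algebraic Riccati equation:
# P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA, starting from P = 0,
# gives a monotonically increasing (VI-mode) sequence.
P = np.zeros((2, 2))
for _ in range(2000):
    G = np.linalg.inv(R + B.T @ P @ B)
    P_next = Q + A.T @ P @ A - A.T @ P @ B @ G @ B.T @ P @ A
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

# Fixed-point residual of the Riccati equation should be tiny.
G = np.linalg.inv(R + B.T @ P @ B)
residual = Q + A.T @ P @ A - A.T @ P @ B @ G @ B.T @ P @ A - P
print(np.max(np.abs(residual)) < 1e-9)  # → True
```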
A pheromone-rate-based analysis on the convergence time of ACO algorithm.
Huang, Han; Wu, Chun-Guo; Hao, Zhi-Feng
2009-08-01
Ant colony optimization (ACO) has been widely applied to solve combinatorial optimization problems in recent years. There are few studies, however, on its convergence time, which reflects how many iterations an ACO algorithm spends converging to the optimal solution. Based on an absorbing Markov chain model, we analyze the ACO convergence time in this paper. First, we present a general result for the estimation of convergence time that reveals the relationship between convergence time and pheromone rate. This general result is then extended to a two-step analysis of the convergence time, which comprises: 1) the iteration time that the pheromone rate spends on reaching the objective value and 2) the convergence time that is calculated with the objective pheromone rate in expectation. Furthermore, four simple ACO algorithms are investigated as case studies by applying the proposed theoretical results. Finally, the conclusion of the case studies, that the pheromone rate and its deviation determine the expected convergence time, is numerically verified with the experimental results of four one-ant ACO algorithms and four ten-ant ACO algorithms.
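The role of the pheromone rate in convergence time can be illustrated with a minimal one-ant ACO on the toy OneMax problem. This is only a sketch: the problem, the MAX-MIN-style pheromone bounds, and all parameter values are illustrative assumptions, not the algorithms analysed in the paper.

```python
import random

def one_ant_aco(n_bits=6, rho=0.2, max_iters=1000, seed=0):
    """Minimal one-ant ACO for OneMax (maximise the number of 1-bits).
    tau[i][v] is the pheromone for setting bit i to value v; the
    evaporation/deposit rate rho plays the role of the 'pheromone rate'
    whose value governs the expected convergence time.  MAX-MIN-style
    bounds keep every choice reachable and avoid premature freezing."""
    rng = random.Random(seed)
    tau = [[0.5, 0.5] for _ in range(n_bits)]
    best, best_fit = None, -1
    for it in range(1, max_iters + 1):
        # The ant samples each bit proportionally to its pheromone
        sol = [1 if rng.random() < tau[i][1] / (tau[i][0] + tau[i][1]) else 0
               for i in range(n_bits)]
        fit = sum(sol)
        if fit > best_fit:
            best, best_fit = sol, fit
        # Evaporate everywhere, deposit along the best-so-far solution
        for i in range(n_bits):
            for v in (0, 1):
                tau[i][v] *= (1 - rho)
            tau[i][best[i]] += rho
            for v in (0, 1):
                tau[i][v] = min(0.95, max(0.05, tau[i][v]))
        if best_fit == n_bits:
            return best, it  # iteration at which the optimum was first sampled
    return best, max_iters

best, iters = one_ant_aco()
print(best, iters)
```

Varying `rho` in this sketch changes how quickly the pheromones concentrate, and hence the observed convergence time, which is the qualitative relationship the paper analyses formally.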
Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality.
Forti, Mauro; Nistri, Paolo; Quincampoix, Marc
2006-11-01
This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled as ideal diodes with vertical segments in their characteristic, which enable the implementation of an exact penalty method. A new method, based on a nonsmooth Lojasiewicz inequality for the generalized gradient vector field describing the NN dynamics, is exploited to address convergence of trajectories. The method makes it possible to prove that each forward trajectory of the NN has finite length and, as a consequence, converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Lojasiewicz exponent at the equilibrium points, the following results on the convergence rate of trajectories are established: (1) for nonconvex QP problems, each trajectory is either exponentially convergent or convergent in finite time toward a singleton belonging to the set of constrained critical points; (2) for convex QP problems, the same result as in (1) holds, and moreover the singleton belongs to the set of global minimizers; and (3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, hold independently of the nature of the set of equilibrium points, and in particular even when the NN possesses infinitely many nonisolated equilibrium points.
An Approach to Speed up Single-Frequency PPP Convergence with Quad-Constellation GNSS and GIM.
Cai, Changsheng; Gong, Yangzhao; Gao, Yang; Kuang, Cuilin
2017-06-06
The single-frequency precise point positioning (PPP) technique has attracted increasing attention due to its high accuracy and low cost. However, a very long convergence time, normally a few hours, is required in order to achieve a positioning accuracy level of a few centimeters. In this study, an approach is proposed to accelerate single-frequency PPP convergence by combining quad-constellation global navigation satellite system (GNSS) and global ionospheric map (GIM) data. In this approach, the GPS, GLONASS, BeiDou, and Galileo observations are directly used in an uncombined observation model, and as a result the ionospheric and hardware delay (IHD) can be estimated together as a single unknown parameter. The IHD values acquired from the GIM product and the multi-GNSS differential code bias (DCB) product are then utilized as pseudo-observables of the IHD parameter in the observation model. A time-varying weight scheme is also proposed for the pseudo-observables to gradually decrease their contribution to the position solutions during the convergence period. To evaluate the proposed approach, datasets from twelve Multi-GNSS Experiment (MGEX) stations on seven consecutive days are processed and analyzed. The numerical results indicate that single-frequency PPP with quad-constellation GNSS and GIM data is able to reduce the convergence time by 56%, 47%, and 41% in the east, north, and up directions, respectively, compared to GPS-only single-frequency PPP.
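The idea of down-weighting a pseudo-observable over time can be illustrated with a toy one-parameter weighted least-squares estimator. This is a sketch only: the noise levels, the decay law, and the scalar "IHD-like" parameter are illustrative assumptions, not the paper's GNSS observation model.

```python
import numpy as np

rng = np.random.default_rng(42)
truth = 2.0                     # parameter to estimate (an IHD-like bias)
pseudo = 2.5                    # external pseudo-observable (GIM-like), slightly biased
obs = truth + 0.1 * rng.standard_normal(50)   # epoch-by-epoch noisy observations

estimates = []
for k in range(1, len(obs) + 1):
    w_obs = np.full(k, 1.0 / 0.1**2)     # observation weights (1 / sigma^2)
    w_pseudo = 1.0 / (0.3 * k)**2        # pseudo-observable weight decays with time
    num = np.sum(w_obs * obs[:k]) + w_pseudo * pseudo
    den = np.sum(w_obs) + w_pseudo
    estimates.append(num / den)

print(estimates[0], estimates[-1])  # early: pulled toward the pseudo value; late: data-driven
```

Early in the "convergence period" the pseudo-observable anchors the estimate; as its weight decays, the real observations dominate, which is the qualitative behaviour the time-varying weight scheme is designed to produce.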
NASA Astrophysics Data System (ADS)
Li, Yongfu; Li, Kezhi; Zheng, Taixiong; Hu, Xiangdong; Feng, Huizong; Li, Yinguo
2016-05-01
This study proposes a feedback-based platoon control protocol for connected autonomous vehicles (CAVs) under different network topologies of initial states. In particular, algebraic graph theory is used to describe the network topology. The leader-follower approach is then used to model the interactions between CAVs. In addition, a feedback-based protocol is designed to control the platoon, considering the longitudinal and lateral gaps simultaneously as well as different network topologies. The stability and consensus of the vehicular platoon are analyzed using the Lyapunov technique. The effects of different network topologies of initial states on the convergence time and robustness of platoon control are investigated. Results from numerical experiments demonstrate the effectiveness of the proposed protocol with respect to position and velocity consensus in terms of convergence time and robustness. The findings also illustrate that the convergence time of the control protocol is associated with the initial states, while the robustness is not significantly affected by them.
NASA Technical Reports Server (NTRS)
Unnam, J.; Tenney, D. R.
1981-01-01
Exact solutions for diffusion in single-phase binary alloy systems with a constant diffusion coefficient and zero-flux boundary condition have been evaluated to establish the optimum zone size of applicability. Planar, cylindrical, and spherical interface geometries, and finite, singly infinite, and doubly infinite systems are treated. Two solutions are presented for each geometry, one well suited to short diffusion times and one to long times. The effect of zone size on the convergence of these solutions is discussed. A generalized form of the diffusion solution for doubly infinite systems is proposed.
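The complementary convergence behaviour of short-time and long-time solutions can be sketched with the cosine-series solution for a finite planar slab with zero-flux boundaries. The step initial condition and all parameter values are illustrative assumptions, not taken from the report.

```python
import numpy as np

def slab_concentration(x, t, n_max, D=1.0, L=1.0, C0=1.0):
    """Cosine-series solution for 1-D diffusion in a finite slab [0, L]
    with zero-flux boundaries and a step initial condition
    C = C0 for x < L/2, C = 0 otherwise, summed to n_max terms."""
    c = C0 / 2.0  # mean concentration (the n = 0 term)
    for n in range(1, n_max + 1):
        a_n = (2.0 * C0 / (n * np.pi)) * np.sin(n * np.pi / 2.0)
        c += a_n * np.cos(n * np.pi * x / L) * np.exp(-D * (n * np.pi / L) ** 2 * t)
    return c

x = 0.45
# Long times: the exponential damping makes one series term sufficient.
long_few = slab_concentration(x, 0.5, 1)
long_many = slab_concentration(x, 0.5, 500)
# Short times: many terms are needed, which is why a separate
# short-time (image-source / error-function) form is preferred there.
short_few = slab_concentration(x, 1e-4, 1)
short_many = slab_concentration(x, 1e-4, 500)
print(abs(long_few - long_many), abs(short_few - short_many))
```

The series converges rapidly at long times but slowly at short times, which is exactly why one solution form suits each regime.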
NASA Astrophysics Data System (ADS)
Perversi, Eleonora; Regazzini, Eugenio
2015-05-01
For a recently proposed general inelastic Kac-like equation, this paper studies the long-time behaviour of its probability-valued solution. In particular, the paper provides necessary and sufficient conditions on the initial datum for the corresponding solution to converge to equilibrium. The proofs rest on the general central limit theorem (CLT) for independent summands applied to a suitable Skorokhod representation of the original solution evaluated at an increasing and divergent sequence of times. It turns out that, roughly speaking, the initial datum must belong to the standard domain of attraction of a stable law, while the equilibrium is representable as a mixture of stable laws.
Song, Bongyong; Park, Justin C; Song, William Y
2014-11-07
The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast-converging gradient projection BB method that requires 'at most one function evaluation' in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a 'smoothed TV' or any other differentiable prior. In this way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3D CBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired on the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence properties compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs an image of standard, 364-projection FDK reconstruction quality using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
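The core of a gradient projection BB iteration (without the selective function-evaluation safeguard that distinguishes GPBB-SFE) can be sketched on a tiny nonnegatively constrained quadratic. The problem data below are illustrative assumptions, standing in for the much larger TV-regularised CBCT objective.

```python
import numpy as np

# Tiny convex quadratic 0.5 x'Ax - b'x with nonnegativity constraints
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD Hessian
b = np.array([5.0, 4.0])

grad = lambda x: A @ x - b               # gradient of the objective
project = lambda x: np.maximum(x, 0.0)   # projection onto the feasible set

x = np.zeros(2)
g = grad(x)
alpha = 1.0 / np.linalg.norm(A)          # conservative first step size
for _ in range(50):
    x_new = project(x - alpha * g)       # projected gradient step
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if s @ y > 0:
        alpha = (s @ s) / (s @ y)        # BB1 two-point step size
    x, g = x_new, g_new

print(x)  # converges to the minimiser [1.2, 1.4]
```

The BB step needs no line search on this problem; GPBB-SFE adds a safeguard costing at most one function evaluation per step to make such convergence guaranteed for the reconstruction objective.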
Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.
Li, Shuai; Li, Yangming
2013-10-28
The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with O(n³) time complexity. When applied to solving the time-varying Sylvester equation, however, the computational burden increases rapidly as the sampling period decreases and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas a recurrent neural network recently proposed by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution ideally. Advances in complex-valued neural networks make it natural to extend the existing real-valued ZNN for the time-varying real-valued Sylvester equation to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated, and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
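For the time-invariant case, the Bartels-Stewart approach mentioned above is available directly in SciPy; the matrices below are illustrative assumptions. Repeating such an O(n³) solve at every sampling instant is exactly the burden that motivates the ZNN approach for the time-varying problem.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Bartels-Stewart (as wrapped by SciPy) solves AX + XB = C directly.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 5 * np.eye(n)   # shift so spec(A) and spec(-B)
B = rng.standard_normal((n, n)) + 5 * np.eye(n)   # are disjoint (unique solution)
C = rng.standard_normal((n, n))

X = solve_sylvester(A, B, C)
residual = np.linalg.norm(A @ X + X @ B - C)
print(residual)  # small residual confirms the solve
```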
NASA Astrophysics Data System (ADS)
Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo
2016-07-01
Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple right-hand-side vectors by solving them at the same time, that is, by operating on the matrix formed from those vectors. We implemented several iterative methods and compared their performance. The maximum performance on the SPARC64 VIIIfx processor was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence behaviour of the individual linear systems, we introduced a control method that eliminates the calculation of already-converged vectors.
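The convergence-masking idea, solving all right-hand sides together while dropping already-converged columns, can be sketched with a simple Jacobi iteration. The matrix construction, tolerance, and solver choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_rhs = 20, 8
A = rng.standard_normal((n, n))
A += 2 * n * np.eye(n)                # make A strictly diagonally dominant
B = rng.standard_normal((n, n_rhs))   # multiple right-hand sides, solved together

D = np.diag(A)
X = np.zeros((n, n_rhs))
active = np.ones(n_rhs, dtype=bool)   # mask of not-yet-converged systems
iters = np.zeros(n_rhs, dtype=int)

while active.any():
    R = B[:, active] - A @ X[:, active]
    X[:, active] += R / D[:, None]    # Jacobi update, only on active columns
    iters[active] += 1
    res = np.linalg.norm(B - A @ X, axis=0)
    active = res > 1e-10              # drop columns that have converged

print(iters)  # per-system iteration counts may differ
```

Because converged columns are excluded from the matrix-matrix product, the cost per sweep shrinks as systems finish, which is the effect of the control method described in the abstract.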
De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc
2011-02-01
In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging and techniques in which the eddy currents interact with the biological system, such as the alteration of neurophysiology due to transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill-conditioned problem and, consequently, a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial, since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditioning and speeds up the numerical convergence. This paper shows how IIM is based on IM and what its advantages are. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency compared to the traditional IM and is therefore a useful tool for accurate and fast TMS simulations.
Galindo-Murillo, Rodrigo; Roe, Daniel R; Cheatham, Thomas E
2015-05-01
The structure and dynamics of DNA are critically related to its function. Molecular dynamics simulations augment experiment by providing detailed information about the atomic motions. However, to date, simulations have not been long enough for convergence of the dynamics and structural properties of DNA. Molecular dynamics simulations performed with AMBER using the ff99SB force field with the parmbsc0 modifications, including ensembles of independent simulations, were compared to long-timescale molecular dynamics performed with the specialized Anton MD engine on the B-DNA structure d(GCACGAACGAACGAACGC). To assess convergence, the decay of the average RMSD values over longer and longer time intervals was evaluated, in addition to assessing convergence of the dynamics via the Kullback-Leibler divergence of principal component projection histograms. These molecular dynamics simulations, including one of the longest simulations of DNA published to date at ~44 μs, surprisingly suggest that the structure and dynamics of the DNA helix, neglecting the terminal base pairs, are essentially fully converged on the ~1-5 μs timescale. We can now reproducibly converge the structure and dynamics of B-DNA helices, omitting the terminal base pairs, on the μs timescale with both the AMBER and CHARMM C36 nucleic acid force fields. Results from independent ensembles of simulations starting from different initial conditions, when aggregated, match the results from long-timescale simulations on the specialized Anton MD engine. With access to large-scale GPU resources or the specialized MD engine Anton, it is possible for a variety of molecular systems to reproducibly and reliably converge the conformational ensemble of sampled structures. This article is part of a Special Issue entitled: Recent developments of molecular dynamics.
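The Kullback-Leibler comparison of projection histograms can be sketched as follows. The bin count, smoothing epsilon, and Gaussian stand-in data are illustrative assumptions, not the paper's actual principal component projections.

```python
import numpy as np

def kl_from_samples(a, b, bins=50, eps=1e-10):
    """Kullback-Leibler divergence KL(P||Q) between histogrammed projections.
    Shared bin edges and a small epsilon keep the estimate finite."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, edges = np.histogram(a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(b, bins=edges)
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    p, q = p / p.sum(), q / q.sum()    # renormalise after smoothing
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(7)
run1 = rng.normal(0.0, 1.0, 10_000)   # stand-ins for PC projections of two runs
run2 = rng.normal(0.5, 1.0, 10_000)

print(kl_from_samples(run1, run1))    # 0.0: identical sampling
print(kl_from_samples(run1, run2))    # > 0: distributions differ
```

Between independent simulation runs, a KL value decaying toward zero as more data accumulate is the signature of converged sampling.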
Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates, converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, such as PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit performance comparable to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG for noisy PET data. We therefore conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that, in practice, convergence is often not desired for algorithms seeking ML estimates.
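The baseline MLEM update that all of the above algorithms accelerate can be sketched on a toy noiseless system. The dimensions and random system matrix are illustrative assumptions, not a real PET geometry.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 12, 6
A = rng.random((m, n))            # toy nonnegative system matrix
x_true = rng.random(n) + 0.5      # strictly positive "true image"
y = A @ x_true                    # noiseless "measured" projections

x = np.ones(n)                    # MLEM requires a positive initial image
sens = A.T @ np.ones(m)           # sensitivity image A^T 1
res0 = np.linalg.norm(A @ x - y)

for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens   # multiplicative MLEM update

print(np.linalg.norm(A @ x - y) / res0)  # residual shrinks steadily
```

OSEM applies the same multiplicative update over data subsets, and the gradient-based methods in the abstract replace it with (preconditioned) gradient steps to reach the ML solution faster.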
The convergence of health care financing structures: empirical evidence from OECD-countries.
Leiter, Andrea M; Theurl, Engelbert
2012-02-01
The convergence or divergence of health care systems across countries is an interesting facet of health care system research from a macroeconomic perspective. In this paper, we concentrate on an important dimension of every health care system, namely the convergence/divergence of health care financing (HCF). Based on data from 22 OECD countries covering the period 1970-2005, we use the public financing ratio (public financing as a percentage of total HCF) and per capita public HCF as indicators of convergence. Applying different concepts of convergence, we find that HCF is converging. This conclusion also holds when we look at smaller subgroups of countries and shorter time periods. However, we find evidence that countries do not move towards a common mean and that the rate of convergence is decreasing over time.
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.
1976-01-01
An uncoupled, time-asymptotic alternating direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, static temperature was evaluated using an algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for the convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.
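The stabilising effect of upwind differencing on convective terms can be illustrated on 1-D linear advection. This is a generic textbook sketch, not the report's ADI Navier-Stokes solver, and all parameters are assumptions.

```python
import numpy as np

# 1-D linear advection u_t + a u_x = 0 on a periodic grid (a > 0).
nx, cfl = 100, 0.8
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)    # Gaussian pulse

u_up = u0.copy()
u_ftcs = u0.copy()
for _ in range(200):
    # Upwind difference of the convective term: stable for cfl <= 1
    u_up = u_up - cfl * (u_up - np.roll(u_up, 1))
    # Forward-time centred-space: unconditionally unstable for pure advection
    u_ftcs = u_ftcs - 0.5 * cfl * (np.roll(u_ftcs, -1) - np.roll(u_ftcs, 1))

print(np.abs(u_up).max(), np.abs(u_ftcs).max())
```

The upwind solution stays bounded by the initial maximum (at the cost of numerical diffusion), while the centred explicit scheme blows up, which is the qualitative reason upwind differencing of convective terms improves robustness.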
Dusek, Wolfgang A; Pierscionek, Barbara K; McClelland, Julie F
2011-08-11
The present study investigates two different treatment options for convergence insufficiency (CI) in a group of children with reading difficulties referred by educational institutes to a specialist eye clinic in Vienna. One hundred and thirty-four subjects (aged 7-14 years) with reading difficulties were referred from an educational institute in Vienna, Austria for visual assessment. Each child was given either 8Δ base-in reading spectacles (n=51) or computerised home vision therapy (HTS) (n=51); thirty-two participants refused all treatment offered (clinical control group). A full visual assessment, including reading speed and accuracy, was conducted pre- and post-treatment. Factorial analyses demonstrated statistically significant changes between results obtained at visits 1 and 2 for total reading time, reading error score, amplitude of accommodation, and binocular accommodative facility (within-subjects effects) (p<0.05). Significant differences were also demonstrated between treatment groups for total reading time, reading error score, and binocular accommodative facility (between-subjects effects) (p<0.05). Reading difficulties with no apparent intellectual or psychological foundation may be due to a binocular vision anomaly such as convergence insufficiency. Both HTS and prismatic correction are highly effective treatment options for convergence insufficiency, and prismatic correction can be considered an effective alternative to HTS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willert, Jeffrey; Taitano, William T.; Knoll, Dana
In this note we demonstrate that using Anderson Acceleration (AA) in place of a standard Picard iteration can not only increase the convergence rate but also make the iteration more robust for two transport applications. We also compare the convergence acceleration provided by AA to that provided by moment-based acceleration methods. Additionally, we demonstrate that those two acceleration methods can be used together in a nested fashion. We begin by describing the AA algorithm. We then describe two application problems, one from neutronics and one from plasma physics, on which we apply AA. We provide computational results which highlight the benefits of using AA, namely that we can compute solutions using fewer function evaluations and larger time-steps, and achieve a more robust iteration.
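A minimal windowed Anderson Acceleration for a generic fixed-point map, compared against plain Picard iteration, can be sketched as follows. The toy map and window size are illustrative assumptions, not the note's transport applications.

```python
import numpy as np

def picard(g, x0, tol=1e-10, max_iter=200):
    """Plain Picard iteration x <- g(x)."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def anderson(g, x0, m=3, tol=1e-10, max_iter=200):
    """Anderson Acceleration with window m: recombine the last few iterates
    using least-squares residual weights (a textbook sketch, not the
    authors' implementation)."""
    x = x0
    G, F = [], []                       # histories of g(x) and residuals g(x) - x
    for k in range(1, max_iter + 1):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return gx, k
        G.append(gx); F.append(f)
        if len(F) > m + 1:
            G.pop(0); F.pop(0)
        if len(F) > 1:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = gx - dG @ gamma         # accelerated update
        else:
            x = gx                      # first step is a plain Picard step
    return x, max_iter

g = lambda x: np.cos(x)                 # toy contraction; fixed point ~0.739085
x_p, k_p = picard(g, np.array([1.0]))
x_a, k_a = anderson(g, np.array([1.0]))
print(k_p, k_a)                         # AA reaches the tolerance in far fewer evaluations
```

One function evaluation per iteration in both loops makes the iteration counts directly comparable as function-evaluation counts, the metric emphasised in the note.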
Composite transform-convergent plate boundaries: description and discussion
Ryan, H.F.; Coleman, P.J.
1992-01-01
The leading edge of the overriding plate at an obliquely convergent boundary is commonly sliced by a system of strike-slip faults. This fault system is often structurally complex and may show correspondingly uneven strain effects, with great vertical and translational shifts of the component blocks of the fault system. The stress pattern and strain effects vary along the length of the system and change through time. These margins are considered to be composite transform-convergent (CTC) plate boundaries. Examples are given of structures formed along three CTC boundaries: the Aleutian Ridge, the Solomon Islands, and the Philippines. The dynamism of the fault system along a CTC boundary can enhance vertical tectonism and basin formation. This concept provides a framework for the evaluation of petroleum resources related to basin formation, and for mineral exploration related to igneous activity associated with transtensional processes.
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-07
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response; the cross-dependencies of image voxels are thereby reduced, which in turn results in an improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated a practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients, exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom, and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that, for the same computation time, the proposed subsetization gives rise to further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
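The interleaved subset partitioning can be sketched in a few lines. The bin counts are illustrative assumptions, and the sketch covers only the partitioning itself, not the OSEM reconstruction.

```python
def interleaved_subsets(n_bins, n_subsets):
    """Partition bin indices into interleaved subsets, as one might do for
    azimuthal-angle or TOF-bin subsetization (partitioning only)."""
    return [list(range(s, n_bins, n_subsets)) for s in range(n_subsets)]

n_tof, n_sub = 13, 4
subsets = interleaved_subsets(n_tof, n_sub)
print(subsets)
# Interleaving spreads each subset across the full TOF range, so each
# OSEM sub-iteration sees weakly coupled bins rather than one contiguous block.
```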
NASA Astrophysics Data System (ADS)
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-01
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby the cross-dependencies of image voxels are reduced, which in turns results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in pre-computation of geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and therefore to improve the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization gives rise to further convergence. 
The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
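The interleaved partitioning of TOF bins described above can be sketched in a few lines. This is a hypothetical illustration of the subset layout only, not the authors' implementation:

```python
def interleave_tof_subsets(n_tof_bins, n_subsets):
    """Partition TOF bin indices into interleaved subsets, so that each
    subset samples the whole TOF range rather than one contiguous block,
    reducing the spatial coupling between consecutive subset updates."""
    return [list(range(start, n_tof_bins, n_subsets)) for start in range(n_subsets)]
```

With 9 TOF bins and 3 subsets this yields [[0, 3, 6], [1, 4, 7], [2, 5, 8]], so each OSEM-style subset update still covers the full TOF range.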
Ryu, Ik Hee; Han, Jinu; Lee, Hyung Keun; Kim, Jin Kook; Han, Sueng-Han
2014-04-01
To evaluate the change of accommodation-convergence parameters after implantation of an Artisan phakic intraocular lens (PIOL). A prospective study of patients undergoing Artisan PIOL implantation was performed. A total of 37 patients (3 males and 34 females) were enrolled in the study. Preoperatively, the convergence amplitude, the stimulus accommodative convergence per unit of accommodation (AC/A) ratio, and the near point of convergence (NPC) were evaluated. After the Artisan PIOL implantation, the identical evaluations were repeated at 1 week and at 1, 3, and 6 months after surgery. Mean age was 24.3 ± 4.8 years, and the preoperative refractive error was -8.92 ± 4.13 diopters (D). After the implantation, mean refractive errors significantly decreased to within ±1.00 D, and no noticeable complications were found. The convergence amplitude and the stimulus AC/A ratio increased 1 month after surgery but progressively stabilized afterward to near preoperative values. The NPC did not show any significant change over the follow-up period of up to 6 months. These results indicate that implantation of the Artisan PIOL increases the accommodation-convergence relationship within the first month after surgery, with progressive stabilization during the follow-up period.
ERIC Educational Resources Information Center
Goldsmith, H. H.; And Others
1991-01-01
Examined convergent and discriminant validity of eight widely used preschooler, toddler, and infant temperament questionnaires. There was surprisingly strong evidence for convergence among scales intended to measure similar concepts, with most convergent validity coefficients falling in the .50s, .60s, and .70s. (SH)
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error (MMSE) time- or frequency-domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
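The idea of an adaptive step size driven by a small lookup table can be illustrated with a scalar LMS equalizer. The schedule values and the simple gain channel below are hypothetical stand-ins, not the experimental 3-mode MIMO setup:

```python
import numpy as np

def adaptive_mu(n, schedule):
    """Look up the step size for iteration n in a small (start_index, mu) table."""
    mu = schedule[0][1]
    for start, m in schedule:
        if n >= start:
            mu = m
    return mu

def lms_equalize(received, desired, taps=5, schedule=((0, 0.1), (500, 0.01))):
    """Scalar LMS equalizer whose step size follows a lookup-table schedule:
    a large step early for fast convergence, a small step later for low error."""
    w = np.zeros(taps)
    errs = []
    for n in range(taps - 1, len(received)):
        u = received[n - taps + 1:n + 1][::-1]  # most recent sample first
        e = desired[n] - w @ u                  # equalizer output error
        w += adaptive_mu(n, schedule) * e * u   # LMS weight update
        errs.append(e)
    return w, np.array(errs)
```

For a noiseless gain channel the output error decays quickly under the large initial step and settles under the small later step.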
A General Method for Solving Systems of Non-Linear Equations
NASA Technical Reports Server (NTRS)
Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)
1995-01-01
The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root-finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root-finding method, since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for coefficients of linear equations. The current method, which does not require the solution of linear equations, requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
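The eigenvector construction is specific to the paper's quadratic-form analysis, but the fallback behavior it describes, steepest descent on 0.5*||f(x)||^2 with an adaptive (backtracking) step size, can be sketched as a minimal stand-in:

```python
import numpy as np

def steepest_descent_root(f, jac, x0, tol=1e-8, max_iter=1000):
    """Find a root of f by steepest descent on g(x) = 0.5*||f(x)||^2,
    halving the step until g decreases (adaptive step size)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        grad = jac(x).T @ fx            # gradient of 0.5*||f(x)||^2
        base = 0.5 * fx @ fx
        step = 1.0
        xn = x - step * grad
        while step > 1e-16 and 0.5 * f(xn) @ f(xn) >= base:
            step *= 0.5                 # backtrack until the merit function decreases
            xn = x - step * grad
        x = xn
    return x
```

Unlike the paper's method this sketch has no single-step acceleration near the root; it only shows the robust descent-with-adaptive-step behavior far from it.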
NASA Technical Reports Server (NTRS)
Navon, I. M.; Bloom, S.; Takacs, L. L.
1985-01-01
An attempt was made to use the GLAS global 4th order shallow water equations to perform a Machenhauer nonlinear normal mode initialization (NLNMI) for the external vertical mode. A new algorithm was defined for identifying and filtering out computational modes which affect the convergence of the Machenhauer iterative procedure. The computational modes and zonal waves were linearly initialized and gravitational modes were nonlinearly initialized. The Machenhauer NLNMI was insensitive to the absence of high zonal wave numbers. The effects of the Machenhauer scheme were evaluated by performing 24 hr integrations with nondissipative and dissipative explicit time integration models. The NLNMI was found to be inferior to the Rasch (1984) pseudo-secant technique for obtaining convergence when the time scales of nonlinear forcing were much smaller than the time scales expected from the natural frequency of the mode.
Polynomial complexity despite the fermionic sign
NASA Astrophysics Data System (ADS)
Rossi, R.; Prokof'ev, N.; Svistunov, B.; Van Houcke, K.; Werner, F.
2017-04-01
It is commonly believed that in unbiased quantum Monte Carlo approaches to fermionic many-body problems, the infamous sign problem generically implies prohibitively large computational times for obtaining thermodynamic-limit quantities. We point out that for convergent Feynman diagrammatic series evaluated with a recently introduced Monte Carlo algorithm (see Rossi R., arXiv:1612.05184), the computational time increases only polynomially with the inverse error on thermodynamic-limit quantities.
Validation of the Social Appearance Anxiety Scale: factor, convergent, and divergent validity.
Levinson, Cheri A; Rodebaugh, Thomas L
2011-09-01
The Social Appearance Anxiety Scale (SAAS) was created to assess fear of overall appearance evaluation. Initial psychometric work indicated that the measure had a single-factor structure and exhibited excellent internal consistency, test-retest reliability, and convergent validity. In the current study, the authors further examined the factor, convergent, and divergent validity of the SAAS in two samples of undergraduates. In Study 1 (N = 323), the authors tested the factor structure, convergent, and divergent validity of the SAAS with measures of the Big Five personality traits, negative affect, fear of negative evaluation, and social interaction anxiety. In Study 2 (N = 118), participants completed a body evaluation that included measurements of height, weight, and body fat content. The SAAS exhibited excellent convergent and divergent validity with self-report measures (i.e., self-esteem, trait anxiety, ethnic identity, and sympathy), predicted state anxiety experienced during the body evaluation, and predicted body fat content. In both studies, results confirmed a single-factor structure as the best fit to the data. These results lend additional support for the use of the SAAS as a valid measure of social appearance anxiety.
Repeated functional convergent effects of NaV1.7 on acid insensitivity in hibernating mammals
Liu, Zhen; Wang, Wei; Zhang, Tong-Zuo; Li, Gong-Hua; He, Kai; Huang, Jing-Fei; Jiang, Xue-Long; Murphy, Robert W.; Shi, Peng
2014-01-01
Hibernating mammals need to be insensitive to acid in order to cope with conditions of high CO2; however, the molecular basis of acid tolerance remains largely unknown. The African naked mole-rat (Heterocephalus glaber) and hibernating mammals share similar environments and physiological features. In the naked mole-rat, acid insensitivity has been shown to be conferred by the functional motif of the sodium ion channel NaV1.7. There is now an opportunity to evaluate acid insensitivity in other taxa. In this study, we tested for functional convergence of NaV1.7 in 71 species of mammals, including 22 species that hibernate. Our analyses revealed a functional convergence of amino acid sequences, which occurred at least six times independently in mammals that hibernate. Evolutionary analyses determined that the convergence results from both parallel and divergent evolution of residues in the functional motif. Our findings not only identify the functional molecules responsible for acid insensitivity in hibernating mammals, but also open new avenues to elucidate the molecular underpinnings of acid insensitivity in mammals. PMID:24352952
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. An online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, requiring fewer function evaluations while preserving good approximation quality.
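The variance-based half of the online stopping rule, halting when a performance indicator's variance over recent generations falls below a threshold, can be sketched as follows (window size and threshold are illustrative, not the paper's settings, and the trend-stagnation test is omitted):

```python
from collections import deque

class OnlineConvergenceDetector:
    """Signal convergence when the variance of a performance indicator
    over a sliding window of recent generations drops below a threshold."""

    def __init__(self, window=5, var_threshold=1e-8):
        self.window = window
        self.var_threshold = var_threshold
        self.history = deque(maxlen=window)

    def update(self, indicator_value):
        """Record one generation's indicator; return True once converged."""
        self.history.append(indicator_value)
        if len(self.history) < self.window:
            return False                       # not enough data yet
        mean = sum(self.history) / self.window
        var = sum((v - mean) ** 2 for v in self.history) / self.window
        return var < self.var_threshold
```

While the indicator is still improving the window variance stays large; once the indicator plateaus, the detector fires.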
NASA Astrophysics Data System (ADS)
Ebrahimi, Mehdi; Jahangirian, Alireza
2017-12-01
An efficient strategy is presented for global shape optimization of wing sections with a parallel genetic algorithm. Several computational techniques are applied to increase the convergence rate and the efficiency of the method. A variable-fidelity computational evaluation method is applied in which the expensive Navier-Stokes flow solver is complemented by an inexpensive multi-layer perceptron neural network for the objective function evaluations. A population dispersion method consisting of two phases, exploration and refinement, is developed to improve the convergence rate and the robustness of the genetic algorithm. Owing to the nature of the optimization problem, a parallel framework based on the master/slave approach is used. The outcomes indicate that the method is able to find the global optimum with significantly lower computational time in comparison to the conventional genetic algorithm.
NASA Technical Reports Server (NTRS)
Robertson, F. R.
1984-01-01
The role of cloud related diabatic processes in maintaining the structure of the South Pacific Convergence Zone is discussed. The method chosen to evaluate the condensational heating is a diagnostic cumulus mass flux technique which uses GOES digital IR data to characterize the cloud population. This method requires as input an estimate of time/area mean rainfall rate over the area in question. Since direct observation of rainfall in the South Pacific is not feasible, a technique using GOES IR data is being developed to estimate rainfall amounts for a 2.5 degree grid at 12h intervals.
The PX-EM algorithm for fast stable fitting of Henderson's mixed model
Foulley, Jean-Louis; Van Dyk, David A
2000-01-01
This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression models. PMID:14736399
NASA Technical Reports Server (NTRS)
Desideri, J. A.; Steger, J. L.; Tannehill, J. C.
1978-01-01
The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps could be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
Evolutionary Construction of Block-Based Neural Networks in Consideration of Failure
NASA Astrophysics Data System (ADS)
Takamori, Masahito; Koakutsu, Seiichi; Hamagami, Tomoki; Hirata, Hironori
In this paper we propose a modified gene coding and an evolutionary construction method that accounts for failure in the evolutionary construction of Block-Based Neural Networks (BBNNs). In the modified gene coding, the genes of the weights are arranged on a chromosome according to the positional relation between the weight and structure genes. This increases the efficiency of crossover-based search, which is expected to improve the convergence rate of construction and to shorten construction time. In the failure-aware evolutionary construction, a structure adapted to a failure is built in the state in which the failure occurred, so that the BBNN can be reconstructed in a short time when a failure occurs. To evaluate the proposed method, we apply it to pattern classification and autonomous mobile robot control problems. The computational experiments indicate that the proposed method can improve the convergence rate of construction and shorten construction and reconstruction times.
Gullà, F; Zambelli, P; Bergamaschi, A; Piccoli, B
2007-01-01
The aim of this study is the objective evaluation of the visual effort of 6 public traffic controllers (4 male, 2 female, mean age 29.6 years) by means of electronic equipment. The equipment quantifies the observation distance and the observation time within each controller's occupational visual field. These parameters are obtained by emitting ultrasound at 40 kHz from an emission sensor (placed by the VDT screen) and receiving it with a sensor placed on the operator's head. Since the speed of sound in air is known and constant (about 340 m/s), the travel time of the ultrasound (US) is used to calculate the distance between the emitting and the receiving sensor. The results show that the visual acuity required is of average level, while the accommodation and convergence effort varies from average to intense (depending on the visual characteristics of the operator considered), ranging from 26.41% to 43.92% of the accommodation and convergence available to each operator. The time actually spent in "near observation within the c.v.p." (Tscr) ranged from 2 h 54 min to 4 h 05 min.
NASA Astrophysics Data System (ADS)
Fu, Junjie; Wang, Jin-zhi
2017-09-01
In this paper, we study finite-time consensus problems with globally bounded convergence time, also known as fixed-time consensus problems, for multi-agent systems subject to directed communication graphs. Two new distributed control strategies are proposed such that leaderless and leader-follower consensus are achieved with a convergence time independent of the initial conditions of the agents. Fixed-time formation generation and formation tracking problems are also solved as generalizations. Simulation examples are provided to demonstrate the performance of the new controllers.
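The controllers themselves are the paper's contribution; as an illustrative stand-in, a commonly used fixed-time leaderless protocol from the literature can be simulated. The gains, exponents, and the undirected ring graph below are hypothetical (the paper treats directed graphs):

```python
import numpy as np

def sig(y, p):
    """Signed power: sign(y) * |y|**p."""
    return np.sign(y) * np.abs(y) ** p

def simulate_fixed_time_consensus(x0, A, alpha=0.5, beta=1.5, dt=0.001, T=10.0):
    """Forward-Euler simulation of the fixed-time protocol
    u_i = -sum_j a_ij * (sig(x_i - x_j, alpha) + sig(x_i - x_j, beta)),
    with alpha in (0, 1) and beta > 1."""
    x = np.asarray(x0, dtype=float).copy()
    n = len(x)
    for _ in range(int(T / dt)):
        u = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if A[i, j] > 0:
                    d = x[i] - x[j]
                    u[i] -= A[i, j] * (sig(d, alpha) + sig(d, beta))
        x += dt * u
    return x
```

The sub-linear term (alpha < 1) drives finite-time convergence near consensus, while the super-linear term (beta > 1) bounds the convergence time independently of the initial spread.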
Efficient genetic algorithms using discretization scheduling.
McLay, Laura A; Goldberg, David E
2005-01-01
In many applications of genetic algorithms, there is a tradeoff between speed and accuracy in fitness evaluations when evaluations use numerical methods with varying discretization. In these types of applications, the cost and accuracy depend on the discretization error introduced when implicit or explicit quadrature is used to estimate the function evaluations. This paper examines discretization scheduling, or how to vary the discretization within the genetic algorithm in order to use the least amount of computation time for a solution of a desired quality. The effectiveness of discretization scheduling can be determined by comparing its computation time to the computation time of a GA using a constant discretization. There are three ingredients for the discretization scheduling: population sizing, estimated time for each function evaluation and predicted convergence time analysis. Idealized one- and two-dimensional experiments and an inverse groundwater application illustrate the computational savings to be achieved from using discretization scheduling.
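A toy version of discretization scheduling, coarse and cheap fitness evaluations early with finer ones as the population converges, might look like this. The objective, the linear refinement schedule, and the GA operators are all hypothetical, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x, n_points):
    # Quadrature-style estimate of -integral_0^1 (t - x)^2 dt on an
    # n_points grid; finer grids cost more but carry less discretization error.
    t = np.linspace(0.0, 1.0, n_points)
    return -float(np.mean((t - x) ** 2))

def run_ga(generations=30, pop_size=20, schedule=lambda g: 4 + 4 * g):
    pop = rng.uniform(0.0, 1.0, pop_size)
    for g in range(generations):
        n_points = schedule(g)                      # discretization refined over time
        fit = np.array([objective(x, n_points) for x in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]   # truncation selection
        children = np.clip(parents + rng.normal(0.0, 0.05, parents.size), 0.0, 1.0)
        pop = np.concatenate([parents, children])
    final_fit = [objective(x, schedule(generations)) for x in pop]
    return pop[int(np.argmax(final_fit))]
```

The true maximizer of this objective is x = 0.5; early generations locate it roughly on cheap grids, and later generations refine it on finer ones.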
Multiscale Reconstruction for Magnetic Resonance Fingerprinting
Pierre, Eric Y.; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A.
2015-01-01
Purpose To reduce acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. Methods An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with desirable spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in-vivo data using the highly-undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. Results The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD) and B0 field variations in the brain was achieved in vivo for a 256×256 matrix for a total acquisition time of 10.2s, representing a 3-fold reduction in acquisition time. Conclusions The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. PMID:26132462
An experimental analysis on OSPF-TE convergence time
NASA Astrophysics Data System (ADS)
Huang, S.; Kitayama, K.; Cugini, F.; Paolucci, F.; Giorgetti, A.; Valcarenghi, L.; Castoldi, P.
2008-11-01
Open shortest path first (OSPF) protocol is commonly used as an interior gateway protocol (IGP) in MPLS and generalized MPLS (GMPLS) networks to determine the topology over which label-switched paths (LSPs) can be established. Traffic-engineering extensions (network states such as link bandwidth information, available wavelengths, signal quality, etc.) have recently been enabled in OSPF (henceforth called OSPF-TE) to support shortest path first (SPF) tree calculation for different purposes, thus possibly achieving optimal path computation and helping improve resource utilization efficiency. Adding these features to the routing phase exploits the robustness of OSPF, and no additional network component is required to manage the traffic-engineering information. However, this traffic-engineering enhancement also complicates OSPF behavior. Since network states change frequently upon dynamic traffic-engineered LSP setup and release, the network is easily driven from a stable state into unstable operating regimes. In this paper, we focus on studying OSPF-TE stability in terms of convergence time, i.e., the time the network takes to return to a steady state after any network state change. An external observation method (based on a black-box approach) is employed to estimate the convergence time. Several experimental test-beds are developed to emulate dynamic LSP setup/release and re-routing upon single-link failure. The experimental results show that with OSPF-TE the network requires more time to converge than with the conventional OSPF protocol without TE extensions. In particular, in the case of a wavelength-routed optical network (WRON), introducing per-wavelength availability and the wavelength continuity constraint into OSPF-TE leads to long convergence times and a large number of advertised link state advertisements (LSAs).
Our study implies that long convergence times and the large number of LSAs flooded in the network might cause scalability problems in OSPF-TE and impose limitations on OSPF-TE applications. New solutions to mitigate the long convergence time and to reduce the amount of state information are desired in the future.
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Yusuf, Abdullahi; Isa Aliyu, Aliyu; Baleanu, Dumitru
2018-03-01
This research presents the symmetry analysis, explicit solutions and convergence analysis of the time-fractional Cahn-Allen (CA) and time-fractional Klein-Gordon (KG) equations with the Riemann-Liouville (RL) derivative. The time-fractional CA and KG equations are reduced to their respective nonlinear ordinary differential equations of fractional order. We solve the reduced fractional ODEs using an explicit power series method. The convergence analysis for the obtained explicit solutions is investigated. Some figures for the obtained explicit solutions are also presented.
Daigneault, Pierre-Marc; Jacob, Steve; Tremblay, Joël
2012-08-01
Stakeholder participation is an important trend in the field of program evaluation. Although a few measurement instruments have been proposed, they either have not been empirically validated or do not cover the full content of the concept. This study consists of a first empirical validation of a measurement instrument that fully covers the content of participation, namely the Participatory Evaluation Measurement Instrument (PEMI). It specifically examines (1) the intercoder reliability of scores derived by two research assistants on published evaluation cases; (2) the convergence between the scores of coders and those of key respondents (i.e., authors); and (3) the convergence between the authors' scores on the PEMI and the Evaluation Involvement Scale (EIS). A purposive sample of 40 cases drawn from the evaluation literature was used to assess reliability. One author per case in this sample was then invited to participate in a survey; 25 fully usable questionnaires were received. Stakeholder participation was measured on nominal and ordinal scales. Cohen's κ, the intraclass correlation coefficient, and Spearman's ρ were used to assess reliability and convergence. Reliability results ranged from fair to excellent. Convergence between coders' and authors' scores ranged from poor to good. Scores derived from the PEMI and the EIS were moderately associated. Evidence from this study is strong in the case of intercoder reliability and ranges from weak to strong in the case of convergent validation. Globally, this suggests that the PEMI can produce scores that are both reliable and valid.
Allen, Felicity; Montgomery, Stephen; Maruszczak, Maciej; Kusel, Jeanette; Adlard, Nicholas
2015-09-01
Several disease-modifying therapies have marketing authorizations for the treatment of relapsing-remitting multiple sclerosis (RRMS). Given their appraisal by the National Institute for Health and Care Excellence, the objective was to systematically identify and critically evaluate the structures and assumptions used in health economic models of disease-modifying therapies for RRMS in the United Kingdom. Embase, MEDLINE, The Cochrane Library, and the National Institute for Health and Care Excellence Web site were searched systematically on March 3, 2014, to identify articles relating to health economic models in RRMS with a UK perspective. Data sources, techniques, and assumptions of the included models were extracted, compared, and critically evaluated. Of 386 results, 26 full texts were evaluated, leading to the inclusion of 18 articles (relating to 12 models). Early models varied considerably in method and structure, but convergence over time toward a Markov model with states based on disability score, a 1-year cycle length, and a lifetime time horizon was apparent. Recent models also allowed for disability improvement within the natural history of the condition. Considerable variety remains, with increasing numbers of comparators, the need for treatment sequencing, and different assumptions around efficacy waning and treatment withdrawal. Despite convergence over time to a similar Markov structure, there are still significant discrepancies between health economic models of RRMS in the United Kingdom. Differing methods, assumptions, and data sources render the comparison of model implementation and results problematic. The commonly used Markov structure leads to problems such as the inability to deal with heterogeneous populations and complexity that multiplies with the addition of treatment sequences; these would best be solved by using alternative models such as discrete event simulations.
Convergence Acceleration for Multistage Time-Stepping Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli L.; Rossow, C-C; Vasta, V. N.
2006-01-01
The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 could be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. Numerical dissipation operators (based on the Roe scheme, a matrix formulation, and the CUSP scheme) as well as the number of RK stages are considered in evaluating the RK/implicit scheme. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. In two dimensions, turbulent flows over an airfoil at subsonic and transonic conditions are computed. The effects of mesh cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 x 10(exp 6) and 100.0 x 10(exp 6). Results are also obtained for a transonic wing flow. For both 2-D and 3-D problems, the computational time of a well-tuned standard RK scheme is reduced by at least a factor of four.
Bessette, Katie L; Jenkins, Lisanne M; Skerrett, Kristy A; Gowins, Jennifer R; DelDonno, Sophie R; Zubieta, Jon-Kar; McInnis, Melvin G; Jacobs, Rachel H; Ajilore, Olusola; Langenecker, Scott A
2018-01-01
There is substantial variability across studies of default mode network (DMN) connectivity in major depressive disorder, and reliability and time-invariance are not reported. This study evaluates whether DMN dysconnectivity in remitted depression (rMDD) is reliable over time and symptom-independent, and explores convergent relationships with cognitive features of depression. A longitudinal study was conducted with 82 young adults free of psychotropic medications (47 rMDD, 35 healthy controls) who completed clinical structured interviews, neuropsychological assessments, and 2 resting-state fMRI scans across 2 study sites. Functional connectivity analyses from bilateral posterior cingulate and anterior hippocampal formation seeds in DMN were conducted at both time points within a repeated-measures analysis of variance to compare groups and evaluate reliability of group-level connectivity findings. Eleven hyper- (from posterior cingulate) and 6 hypo- (from hippocampal formation) connectivity clusters in rMDD were obtained with moderate to adequate reliability in all but one cluster (ICCs ranged from 0.50 to 0.76 for 16 of 17). The significant clusters were reduced with a principal component analysis (5 components obtained) to explore these connectivity components, and were then correlated with cognitive features (rumination, cognitive control, learning and memory, and explicit emotion identification). At the exploratory level, for convergent validity, components consisting of posterior cingulate with cognitive control network hyperconnectivity in rMDD were related to cognitive control (inverse) and rumination (positive). Components consisting of anterior hippocampal formation with social emotional network and DMN hypoconnectivity were related to memory (inverse) and happy emotion identification (positive). Thus, time-invariant DMN connectivity differences exist early in the lifespan course of depression and are reliable.
The nuanced results suggest a ventral within-network hypoconnectivity associated with poor memory and a dorsal cross-network hyperconnectivity linked to poorer cognitive control and elevated rumination. Study of early course remitted depression with attention to reliability and symptom independence could lead to more readily translatable clinical assessment tools for biomarkers.
Berker, Yannick; Karp, Joel S; Schulz, Volkmar
2017-09-01
The use of scattered coincidences for attenuation correction of positron emission tomography (PET) data has recently been proposed. For practical applications, convergence speeds require further improvement, yet there exists a trade-off between convergence speed and the risk of non-convergence. In this respect, a maximum-likelihood gradient-ascent (MLGA) algorithm and a previously proposed two-branch back-projection (2BP) were evaluated. MLGA was combined with the Armijo step size rule and accelerated using conjugate gradients, Nesterov's momentum method, and data subsets of different sizes. In 2BP, we varied the subset size, an important determinant of convergence speed and computational burden. We used three sets of simulation data to evaluate the impact of a spatial scale factor. The Armijo step size allowed 10-fold increased step sizes compared to native MLGA. Conjugate gradients and Nesterov momentum led to slightly faster, yet non-uniform convergence; improvements were mostly confined to later iterations, possibly due to the non-linearity of the problem. MLGA with data subsets achieved faster, uniform, and predictable convergence, with a speed-up factor equivalent to the number of subsets and no increase in computational burden. By contrast, 2BP computational burden increased linearly with the number of subsets due to repeated evaluation of the objective function, and convergence was limited to the case of many (and therefore small) subsets, which resulted in high computational burden. Possibilities of improving 2BP appear limited. While general-purpose acceleration methods appear insufficient for MLGA, results suggest that data subsets are a promising way of improving MLGA performance.
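The Armijo rule mentioned in this abstract backtracks from a trial step until a sufficient-increase condition holds. A minimal gradient-ascent sketch on a toy concave objective (illustrative only; this is not the authors' MLGA implementation, and the objective and parameter values are hypothetical):

```python
import numpy as np

def armijo_gradient_ascent(f, grad, x0, s0=1.0, shrink=0.5, c=1e-4, iters=100):
    """Gradient ascent; each step is backtracked until the Armijo
    sufficient-increase condition holds (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        s = s0
        # shrink the step until f(x + s*g) >= f(x) + c*s*||g||^2
        while f(x + s * g) < f(x) + c * s * g.dot(g):
            s *= shrink
            if s < 1e-12:
                return x          # no productive step found
        x = x + s * g
    return x

# toy concave objective f(x) = -||x - 1||^2, maximized at x = (1, 1, 1)
f = lambda x: -np.sum((x - 1.0) ** 2)
grad = lambda x: -2.0 * (x - 1.0)
x_opt = armijo_gradient_ascent(f, grad, np.zeros(3))
```

On this toy problem the very first Armijo-accepted step lands on the maximizer; in general the backtracking loop is what allows the "10-fold increased step sizes" the abstract reports without risking divergence.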
Gain and movement time of convergence-accommodation in preschool children.
Suryakumar, R; Bobier, W R
2004-11-01
Convergence-accommodation is the synkinetic change in accommodation driven by vergence. A few studies have investigated the static and dynamic properties of this cross-link in adults, but little is known about convergence-accommodation in children. The purpose of this study was to develop a technique for measuring convergence-accommodation and to study its dynamics (gain and movement time) in a sample of pre-school children. Convergence-accommodation measures were examined on thirty-seven normal pre-school children (mean age = 4.0 +/- 1.31 yrs). Stimulus CA/C (sCA/C) ratios and movement time measures of convergence-accommodation were assessed using a photorefractor while subjects viewed a DOG target. Repeated measures were obtained on eight normal adults (mean age = 23 +/- 0.2 yrs). The mean sCA/C ratios and movement times were not significantly different between adults and children (0.10 D/Δ [0.61 D/M.A.], 743 +/- 70 ms and 0.11 D/Δ [0.50 D/M.A.], 787 +/- 216 ms). Repeated measures on adults showed a non-significant mean difference of 0.001 D/Δ. The results suggest that the possible differences in crystalline lens (plant) characteristics between children and adults do not appear to influence convergence-accommodation gain or duration.
ConvAn: a convergence analyzing tool for optimization of biochemical networks.
Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils
2012-01-01
Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This causes problems of optimization predictability, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary duration of optimization becomes critical when a high number of combinations of adjustable parameters must be evaluated or when dynamic models are large. This task is complex due to the variety of optimization methods and software tools and the nonlinearity features of models in different parameter spaces. The software tool ConvAn is developed to analyze statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters, and number of adjustable parameters of the model. The convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum, or values close to it, as well as the computational time necessary to reach them. It is also possible to estimate optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are unsuitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available at www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Complete convergence of randomly weighted END sequences and its application.
Li, Penghua; Li, Xiaoqin; Wu, Kehan
2017-01-01
We investigate the complete convergence of partial sums of randomly weighted extended negatively dependent (END) random variables. Some results on complete moment convergence, complete convergence, and the strong law of large numbers for this dependence structure are obtained. As an application, we study the convergence of the state observers of linear time-invariant systems. Our results extend the corresponding earlier ones.
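For orientation, the standard (Hsu-Robbins) definition of complete convergence underlying results of this type, stated here for reference rather than quoted from the paper:

```latex
% complete convergence of a sequence (X_n) to a constant \theta:
\sum_{n=1}^{\infty} \Pr\bigl(\lvert X_n - \theta \rvert > \varepsilon\bigr) < \infty
\qquad \text{for every } \varepsilon > 0 .
```

By the Borel-Cantelli lemma this summability implies X_n → θ almost surely, which is why complete convergence strengthens the strong law of large numbers.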
A conjugate gradient method with descent properties under strong Wolfe line search
NASA Astrophysics Data System (ADS)
Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is one of the optimization methods most often used in practical applications. The continuous and numerous studies conducted on the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration numbers and CPU time under the strong Wolfe line search. Overall, the new method performs efficiently and is comparable to other well-known methods.
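For orientation, the basic CG iteration being refined in papers like this one is x_{k+1} = x_k + α_k d_k with d_{k+1} = -g_{k+1} + β_k d_k. A minimal Fletcher-Reeves sketch on a quadratic test problem, where the exact line search below automatically satisfies the strong Wolfe conditions (a generic illustration, not the proposed method):

```python
import numpy as np

def fletcher_reeves_cg(A, b, x0, tol=1e-10, max_iter=200):
    """Minimize 0.5*x'Ax - b'x with Fletcher-Reeves CG (A symmetric
    positive definite). For a quadratic, alpha below is the exact
    minimizer along d, so every step satisfies strong Wolfe."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b                       # gradient of the quadratic
    d = -g                              # first direction: steepest descent
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = (g @ g) / (d @ A @ d)   # exact line search step
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x, k

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = fletcher_reeves_cg(A, b, np.zeros(2))
```

On an n-dimensional quadratic, CG with exact line search terminates in at most n iterations, which is what "sufficient descent" and "global convergence" generalize to nonquadratic objectives.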
Byzantine-fault tolerant self-stabilizing protocol for distributed clock synchronization systems
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R. (Inventor)
2010-01-01
A rapid Byzantine self-stabilizing clock synchronization protocol that self-stabilizes from any state, tolerates bursts of transient failures, and deterministically converges within a linear convergence time with respect to the self-stabilization period. Upon self-stabilization, all good clocks proceed synchronously. The Byzantine self-stabilizing clock synchronization protocol does not rely on any assumptions about the initial state of the clocks. Furthermore, there is neither a central clock nor an externally generated pulse system. The protocol converges deterministically, is scalable, and self-stabilizes in a short amount of time. The convergence time is linear with respect to the self-stabilization period.
Unified gas-kinetic scheme with multigrid convergence for rarefied flow study
NASA Astrophysics Data System (ADS)
Zhu, Yajun; Zhong, Chengwen; Xu, Kun
2017-09-01
The unified gas kinetic scheme (UGKS) is based on direct modeling of gas dynamics on the mesh size and time step scales. With the modeling of particle transport and collision in a time-dependent flux function in a finite volume framework, the UGKS can connect the flow physics smoothly from the kinetic particle transport to the hydrodynamic wave propagation. In comparison with the direct simulation Monte Carlo (DSMC) method, the current equation-based UGKS can implement implicit techniques in the updates of macroscopic conservative variables and microscopic distribution functions. The implicit UGKS significantly increases the convergence speed for steady flow computations, especially in the highly rarefied and near-continuum regimes. In order to further improve the computational efficiency, for the first time, a geometric multigrid technique is introduced into the implicit UGKS, where the prediction step for the equilibrium state and the evolution step for the distribution function are both treated with multigrid acceleration. More specifically, a full approximate nonlinear system is employed in the prediction step for fast evaluation of the equilibrium state, and a correction linear equation is solved in the evolution step for the update of the gas distribution function. As a result, the convergence speed has been greatly improved in all flow regimes, from rarefied to continuum ones. The multigrid implicit UGKS (MIUGKS) is used in the non-equilibrium flow study, which includes microflows, such as lid-driven cavity flow and flow passing through a finite-length flat plate, and high-speed flows, such as supersonic flow over a square cylinder. The MIUGKS shows a 5-9 times efficiency increase over the previous implicit scheme. For low-speed microflow, the efficiency of MIUGKS is several orders of magnitude higher than that of the DSMC.
Even for the hypersonic flow at Mach number 5 and Knudsen number 0.1, the MIUGKS is still more than 100 times faster than the DSMC method for obtaining a convergent steady state solution.
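The geometric multigrid idea, smooth the error on the fine grid and correct it on a coarse grid, can be illustrated on a 1D Poisson model problem (a generic sketch, unrelated to the UGKS discretization itself; all values are illustrative):

```python
import numpy as np

def two_grid_poisson(n=64, sweeps=3, cycles=20):
    """Two-grid cycles for -u'' = f on (0,1), u(0)=u(1)=0 (model sketch)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]        # interior fine-grid nodes
    f = np.pi ** 2 * np.sin(np.pi * x)            # manufactured: u = sin(pi x)
    u = np.zeros(n - 1)

    def residual(v):
        Av = (2 * v - np.r_[0.0, v[:-1]] - np.r_[v[1:], 0.0]) / h ** 2
        return f - Av

    def smooth(v, nu):                            # weighted Jacobi, omega = 2/3
        for _ in range(nu):
            v = v + (2.0 / 3.0) * (h ** 2 / 2.0) * residual(v)
        return v

    nc = n // 2                                   # coarse grid: every other node
    Ac = (2 * np.eye(nc - 1) - np.eye(nc - 1, k=1)
          - np.eye(nc - 1, k=-1)) / (2 * h) ** 2
    for _ in range(cycles):
        u = smooth(u, sweeps)                     # pre-smooth oscillatory error
        r = residual(u)
        rc = 0.25 * r[0:-2:2] + 0.5 * r[1::2] + 0.25 * r[2::2]  # full weighting
        ec = np.linalg.solve(Ac, rc)              # exact coarse-grid correction
        e = np.zeros(n - 1)
        e[1::2] = ec                              # prolongate: copy at coarse nodes,
        e[0::2] = 0.5 * (np.r_[0.0, ec] + np.r_[ec, 0.0])  # average in between
        u = smooth(u + e, sweeps)                 # post-smooth
    return x, u

x, u = two_grid_poisson()
```

The smoother removes high-frequency error cheaply while the coarse solve removes the smooth error that the smoother barely touches; nesting this recursively gives the full multigrid hierarchy used by the MIUGKS.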
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
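The stepwise flavor of such a fit can be sketched with the simplest power-law efflux (exponent 1), where the nonlinear rate constant is scanned and the remaining parameters are solved linearly at each step (a hypothetical illustration with synthetic data, not the paper's method or datasets):

```python
import numpy as np

# S-system-type smoothing equation with a simple power-law efflux (exponent 1):
#   dX/dt = alpha - beta*X   =>   X(t) = Xs + (X0 - Xs)*exp(-beta*t),  Xs = alpha/beta
t = np.linspace(0.0, 5.0, 50)
alpha_true, beta_true, x0_true = 2.0, 0.8, 0.1        # hypothetical values
xs_true = alpha_true / beta_true
data = xs_true + (x0_true - xs_true) * np.exp(-beta_true * t)  # synthetic series

# stepwise estimation: scan the nonlinear parameter beta, solve the rest linearly
best_err, best_beta, best_coef = np.inf, None, None
for beta in np.linspace(0.1, 2.0, 191):
    B = np.column_stack([np.ones_like(t), np.exp(-beta * t)])
    coef = np.linalg.lstsq(B, data, rcond=None)[0]
    err = np.sum((B @ coef - data) ** 2)
    if err < best_err:
        best_err, best_beta, best_coef = err, beta, coef
xs_hat, amp_hat = best_coef
alpha_hat = best_beta * xs_hat                         # production-rate estimate
smooth = xs_hat + amp_hat * np.exp(-best_beta * t)     # smoothing curve
# the curve's slope, alpha_hat - best_beta*smooth, approximates dX/dt for later use
```

Because the smoothing curve is a solution of the mass-balance equation itself, its slopes are consistent with the fluxes, which is the property the abstract highlights for subsequent parameter estimation.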
ERIC Educational Resources Information Center
Janikowski, Timothy P.; And Others
1990-01-01
Examined construct validity of Microcomputer Evaluation Screening and Assessment (MESA) Interest Survey. Administered MESA and United States Employment Service (USES) Interest Inventory to 74 volunteer rehabilitation clients. Evidence supported convergent and discriminant validity of MESA. Found fewer significant intercorrelations among MESA…
NASA Astrophysics Data System (ADS)
Regalla, Christine
Here we investigate the relationships between outer forearc subsidence, the timing and kinematics of upper plate deformation and plate convergence rate in Northeast Japan to evaluate the role of plate boundary dynamics in driving forearc subsidence. The Northeastern Japan margin is one of the first non-accretionary subduction zones where regional forearc subsidence was argued to reflect tectonic erosion of large volumes of upper crustal rocks. However, we propose that a significant component of forearc subsidence could be the result of dynamic changes in plate boundary geometry. We provide new constraints on the timing and kinematics of deformation along inner forearc faults, new analyses of the evolution of outer forearc tectonic subsidence, and updated calculations of plate convergence rate. These data collectively reveal a temporal correlation between the onset of regional forearc subsidence, the initiation of upper plate extension, and an acceleration in local plate convergence rate. A similar analysis of the kinematic evolution of the Tonga, Izu-Bonin, and Mariana subduction zones indicates that the temporal correlations observed in Japan are also characteristic of these three non-accretionary margins. Comparison of these data with published geodynamic models suggests that forearc subsidence is the result of temporal variability in slab geometry due to changes in slab buoyancy and plate convergence rate. These observations suggest that a significant component of forearc subsidence at these four margins is not the product of tectonic erosion, but instead reflects changes in plate boundary dynamics driven by variable plate kinematics.
Scattering of cylindrical electric field waves from an elliptical dielectric cylindrical shell
NASA Astrophysics Data System (ADS)
Urbanik, E. A.
1982-12-01
This thesis examines the scattering of cylindrical waves by large dielectric scatterers of elliptic cross section. The solution method was the method of moments using a Galerkin approach. Sinusoidal basis and testing functions were used resulting in a higher convergence rate. The higher rate of convergence made it possible for the program to run on the Aeronautical Systems Division's CYBER computers without any special storage methods. This report includes discussion on moment methods, solution of integral equations, and the relationship between the electric field and the source region or self cell singularity. Since the program produced unacceptable run times, no results are contained herein. The importance of this work is the evaluation of the practicality of moment methods using standard techniques. The long run times for a mid-sized scatterer demonstrate the impracticality of moment methods for dielectrics using standard techniques.
Xiao, Lin; Liao, Bolin; Li, Shuai; Chen, Ke
2018-02-01
In order to solve general time-varying linear matrix equations (LMEs) more efficiently, this paper proposes two nonlinear recurrent neural networks based on two nonlinear activation functions. According to Lyapunov theory, the two nonlinear recurrent neural networks are proved to converge in finite time. In addition, by solving a differential equation, the upper bounds of the finite convergence time are determined analytically. Compared with existing recurrent neural networks, the proposed two nonlinear recurrent neural networks have a better convergence property (i.e., a lower upper bound), and thus accurate solutions of general time-varying LMEs can be obtained in less time. Finally, various situations have been considered by setting different coefficient matrices of the general time-varying LMEs, and a great variety of computer simulations (including an application to robot manipulators) have been conducted to validate the better finite-time convergence of the proposed two nonlinear recurrent neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
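The finite-time property comes from the nonlinear activation. With a sign-power activation φ(e) = |e|^ρ sgn(e), 0 < ρ < 1, the scalar error dynamics ė = -γ φ(e) reach zero at the finite time t* = |e(0)|^(1-ρ) / (γ(1-ρ)). A scalar simulation illustrating this bound (hypothetical parameter values; not the paper's networks):

```python
import numpy as np

# error dynamics with sign-power activation phi(e) = |e|^rho * sign(e):
#   de/dt = -gamma * phi(e)  reaches e = 0 at the finite time
#   t* = |e(0)|^(1-rho) / (gamma * (1 - rho))
gamma, rho, dt = 10.0, 0.5, 1e-5       # hypothetical parameter values
e, t = 1.0, 0.0
t_star = abs(e) ** (1 - rho) / (gamma * (1 - rho))   # = 0.2 for these values

while abs(e) > 1e-6 and t < 1.0:       # forward-Euler integration
    e -= gamma * abs(e) ** rho * np.sign(e) * dt
    t += dt
```

A linear activation (ρ = 1) would only give exponential convergence, so the error never reaches zero in finite time; the fractional power is what produces the analytic upper bound the paper derives.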
Convergent and Discriminant Validation of Student Ratings of College Instructors.
ERIC Educational Resources Information Center
Hillery, Joseph M.; Yukl, Gary A.
This paper reports the results of a validation study of data obtained from a teacher rating survey conducted by the University of Akron Student Council during the fall of 1969. The rating questionnaire consisted of 14 items: two items measured the student's overall evaluation of his instructor; five items measured specific performance dimensions such as…
Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz
2016-01-01
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
Rapid computation of directional wellbore drawdown in a confined aquifer via Poisson resummation
NASA Astrophysics Data System (ADS)
Blumenthal, Benjamin J.; Zhan, Hongbin
2016-08-01
We have derived a rapidly computed analytical solution for drawdown caused by a partially or fully penetrating directional wellbore (vertical, horizontal, or slant) via Green's function method. The mathematical model assumes an anisotropic, homogeneous, confined, box-shaped aquifer. Any dimension of the box can have one of six possible boundary conditions: 1) both sides no-flux; 2) one side no-flux - one side constant-head; 3) both sides constant-head; 4) one side no-flux - one side free; 5) one side constant-head - one side free; 6) both sides free. The solution has been optimized for rapid computation via Poisson resummation, derivation of convergence rates, and numerical optimization of integration techniques. Upon application of the Poisson resummation method, we were able to derive two sets of solutions with inverse convergence rates, namely an early-time rapidly convergent series (solution-A) and a late-time rapidly convergent series (solution-B). From this work we were able to link Green's function method (solution-B) back to image well theory (solution-A). We then derived an equation defining when the convergence rate between solution-A and solution-B is the same, which we termed the switch time. Utilizing the more rapidly convergent solution at the appropriate time, we obtained rapid convergence at all times. We have also shown that one may simplify each of the three infinite series for the three-dimensional solution to 11 terms and still maintain a maximum relative error of less than 10^-14.
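The inverse-convergence-rate pairing produced by Poisson resummation is easy to see on the Jacobi theta function, whose two equivalent series converge fastest on opposite sides of t = 1, the analogue of the switch time (this illustrates the mathematical mechanism only, not the drawdown solution itself):

```python
import math

def theta_direct(t, N=50):
    """Sum over exp(-pi n^2 t): converges fast for large t (late-time form)."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, N + 1))

def theta_resummed(t, N=50):
    """Poisson-resummed series: converges fast for small t (early-time form)."""
    s = 1.0 + 2.0 * sum(math.exp(-math.pi * n * n / t) for n in range(1, N + 1))
    return s / math.sqrt(t)

# both series represent the same function; their convergence rates are inverse,
# so one switches form at t = 1 to keep convergence rapid at all times
vals = [(theta_direct(t), theta_resummed(t)) for t in (0.1, 1.0, 10.0)]
```

At t = 0.1 the resummed series needs a single term while the direct one needs several, and at t = 10 the roles reverse, exactly the early-time/late-time trade the abstract describes for solution-A and solution-B.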
μ-tempered metadynamics: Artifact independent convergence times for wide hills
NASA Astrophysics Data System (ADS)
Dickson, Bradley M.
2015-12-01
Recent analysis of well-tempered metadynamics (WTmetaD) showed that it converges without mollification artifacts in the bias potential. Here, we explore how metadynamics heals mollification artifacts, how healing impacts convergence time, and whether alternative temperings may be used to improve efficiency. We introduce "μ-tempered" metadynamics as a simple tempering scheme, inspired by a related mollified adaptive biasing potential, that results in artifact independent convergence of the free energy estimate. We use a toy model to examine the role of artifacts in WTmetaD and solvated alanine dipeptide to compare the well-tempered and μ-tempered frameworks demonstrating fast convergence for hill widths as large as 60° for μTmetaD.
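For context, a generic well-tempered hill-deposition loop on a 1D double well under Brownian dynamics (this sketches standard WTmetaD, not the μ-tempered scheme, and all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
dU = lambda x: 4.0 * x * (x ** 2 - 1.0)          # force of double well U = (x^2-1)^2

grid = np.linspace(-2.0, 2.0, 201)               # bias stored on a grid
bias = np.zeros_like(grid)
w0, sigma, dT, kT, dt = 0.05, 0.2, 4.0, 1.0, 5e-3  # arbitrary toy parameters

x, visited = -1.0, []
for step in range(20000):                        # overdamped Langevin dynamics
    if step % 50 == 0:                           # periodically deposit a hill
        w = w0 * np.exp(-np.interp(x, grid, bias) / dT)   # tempered hill height
        bias += w * np.exp(-(grid - x) ** 2 / (2 * sigma ** 2))
    i = min(max(int(np.searchsorted(grid, x)), 1), len(grid) - 2)
    f_bias = -(bias[i + 1] - bias[i - 1]) / (grid[i + 1] - grid[i - 1])
    x += (-dU(x) + f_bias) * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
    x = float(np.clip(x, -2.0, 2.0))
    visited.append(x)
```

The tempering factor exp(-V_bias/ΔT) shrinks hills where bias has already accumulated; the paper's question is how the finite hill width σ mollifies the free energy estimate and how alternative temperings remove that artifact.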
Evaluation of the Laplace Integral. Classroom Notes
ERIC Educational Resources Information Center
Chen, Hongwei
2004-01-01
Based on the dominated convergence theorem and parametric differentiation, two different evaluations of the Laplace integral are displayed. This article presents two different proofs of (1) which may be of interest since they are based on principles within the realm of real analysis. The first method applies the dominated convergence theorem to…
AMOBH: Adaptive Multiobjective Black Hole Algorithm.
Wu, Chong; Wu, Tao; Fu, Kaiyuan; Zhu, Yuan; Li, Yongbo; He, Wangyong; Tang, Shengwen
2017-01-01
This paper proposes a new multiobjective evolutionary algorithm based on the black hole algorithm with a new individual density assessment (cell density), called the "adaptive multiobjective black hole algorithm" (AMOBH). Cell density has low computational complexity and maintains a good balance between convergence and diversity of the Pareto front. The framework of AMOBH can be divided into three steps. First, the Pareto front is mapped to a new objective space called the parallel cell coordinate system. Then, to adjust the evolutionary strategies adaptively, Shannon entropy is employed to estimate the evolution status. Finally, the cell density is combined with a dominance strength assessment called cell dominance to evaluate the fitness of solutions. Compared with the state-of-the-art methods SPEA-II, PESA-II, NSGA-II, and MOEA/D, experimental results show that AMOBH performs well in terms of convergence rate, population diversity, population convergence, coverage of different Pareto regions by subpopulations, and, in most cases, time complexity superior to the latter.
Iterative integral parameter identification of a respiratory mechanics model.
Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey
2012-07-18
Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
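The integral formulation avoids differentiating noisy bedside data: integrating a compartment equation P = E·V + R·Q + P0 once turns parameter identification into a linear regression on cumulative integrals. A noiseless sketch with a hypothetical square-wave flow profile (a first-order illustration, not the authors' second-order model or clinical data):

```python
import numpy as np

# single-compartment model:  P(t) = E*V(t) + R*Q(t) + P0,  with Q = dV/dt.
# Integrating once avoids differentiating noisy data:
#   int P dt = E * int V dt + R * V(t) + P0 * t
t = np.linspace(0.0, 3.0, 301)
dt = t[1] - t[0]
Q = np.where(t < 1.5, 0.5, -0.5)             # hypothetical square-wave flow
V = np.cumsum(Q) * dt                        # volume by crude quadrature
E_true, R_true, P0_true = 20.0, 5.0, 5.0     # elastance, resistance, offset
P = E_true * V + R_true * Q + P0_true        # simulated airway pressure

iP = np.cumsum(P) * dt                       # cumulative integrals (same quadrature)
iV = np.cumsum(V) * dt
ones_int = np.cumsum(np.ones_like(t)) * dt   # integral of 1, consistent with above
A = np.column_stack([iV, V, ones_int])       # regressors of the integral form
E_hat, R_hat, P0_hat = np.linalg.lstsq(A, iP, rcond=None)[0]
```

Because integration is a smoothing operation, the regressors stay well behaved under measurement noise, which is the robustness advantage the abstract reports over gradient-based and direct-regression identification.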
NASA Astrophysics Data System (ADS)
Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu
2018-04-01
The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.
Convergence in Library and Museum Studies Education: Playing around with Curriculum?
ERIC Educational Resources Information Center
Martens, Marianne; Latham, K. F.
2016-01-01
In the case of libraries, archives, and museums (LAMs), the concept of convergence has become commonplace in recent time. Convergence addresses both physical spaces and the services provided. What is currently known as convergence within these institutions, should perhaps more accurately be described as reconvergence, as "in the late 1800s…
Discrete-Time Deterministic $Q$-Learning: A Novel Convergence Analysis.
Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo
2017-05-01
In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all the state and control spaces, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, where the convergence criterion of the learning rates for traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are first developed. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and compute the iterative control law, respectively, to facilitate the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
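A full-sweep deterministic Q-learning iteration of this kind, updating Q for every state-control pair from the previous iterate, can be sketched on a toy shortest-path problem (illustrative only; the problem and values are not from the paper):

```python
import numpy as np

# deterministic shortest-path example: 5 states on a line, absorbing goal state 4,
# per-step reward -1, discount gamma = 0.9
n_s, n_a, gamma = 5, 2, 0.9
goal = n_s - 1

def step(s, a):                      # deterministic transition
    return max(s - 1, 0) if a == 0 else min(s + 1, goal)

Q = np.zeros((n_s, n_a))
for _ in range(200):
    Q_new = np.empty_like(Q)
    for s in range(n_s):             # full sweep: update ALL state-control pairs
        for a in range(n_a):
            if s == goal:
                Q_new[s, a] = 0.0    # absorbing goal, zero cost thereafter
            else:
                Q_new[s, a] = -1.0 + gamma * Q[step(s, a)].max()
    if np.max(np.abs(Q_new - Q)) < 1e-12:
        Q = Q_new
        break
    Q = Q_new
policy = Q.argmax(axis=1)            # greedy policy from the converged Q function
```

Because every pair is updated each sweep, convergence follows from monotone bounds on the whole Q function rather than per-pair learning-rate conditions, mirroring the paper's analysis strategy.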
Neale, Chris; Madill, Chris; Rauscher, Sarah; Pomès, Régis
2013-08-13
All molecular dynamics simulations are susceptible to sampling errors, which degrade the accuracy and precision of observed values. The statistical convergence of simulations containing atomistic lipid bilayers is limited by the slow relaxation of the lipid phase, which can exceed hundreds of nanoseconds. These long conformational autocorrelation times are exacerbated in the presence of charged solutes, which can induce significant distortions of the bilayer structure. Such long relaxation times represent hidden barriers that induce systematic sampling errors in simulations of solute insertion. To identify optimal methods for enhancing sampling efficiency, we quantitatively evaluate convergence rates using generalized ensemble sampling algorithms in calculations of the potential of mean force for the insertion of the ionic side chain analog of arginine in a lipid bilayer. Umbrella sampling (US) is used to restrain solute insertion depth along the bilayer normal, the order parameter commonly used in simulations of molecular solutes in lipid bilayers. When US simulations are modified to conduct random walks along the bilayer normal using a Hamiltonian exchange algorithm, systematic sampling errors are eliminated more rapidly and the rate of statistical convergence of the standard free energy of binding of the solute to the lipid bilayer is increased 3-fold. We compute the ratio of the replica flux transmitted across a defined region of the order parameter to the replica flux that entered that region in Hamiltonian exchange simulations. We show that this quantity, the transmission factor, identifies sampling barriers in degrees of freedom orthogonal to the order parameter. 
The transmission factor is used to estimate the depth-dependent conformational autocorrelation times of the simulation system, some of which exceed the simulation time, and thereby identify solute insertion depths that are prone to systematic sampling errors and estimate the lower bound of the amount of sampling that is required to resolve these sampling errors. Finally, we extend our simulations and verify that the conformational autocorrelation times estimated by the transmission factor accurately predict correlation times that exceed the simulation time scale, something that, to our knowledge, has never before been achieved.
Multiscale reconstruction for MR fingerprinting.
Pierre, Eric Y; Ma, Dan; Chen, Yong; Badve, Chaitra; Griswold, Mark A
2016-06-01
To reduce the acquisition time needed to obtain reliable parametric maps with Magnetic Resonance Fingerprinting. An iterative-denoising algorithm is initialized by reconstructing the MRF image series at low image resolution. For subsequent iterations, the method enforces pixel-wise fidelity to the best-matching dictionary template then enforces fidelity to the acquired data at slightly higher spatial resolution. After convergence, parametric maps with desirable spatial resolution are obtained through template matching of the final image series. The proposed method was evaluated on phantom and in vivo data using the highly undersampled, variable-density spiral trajectory and compared with the original MRF method. The benefits of additional sparsity constraints were also evaluated. When available, gold standard parameter maps were used to quantify the performance of each method. The proposed approach allowed convergence to accurate parametric maps with as few as 300 time points of acquisition, as compared to 1000 in the original MRF work. Simultaneous quantification of T1, T2, proton density (PD), and B0 field variations in the brain was achieved in vivo for a 256 × 256 matrix for a total acquisition time of 10.2 s, representing a three-fold reduction in acquisition time. The proposed iterative multiscale reconstruction reliably increases MRF acquisition speed and accuracy. Magn Reson Med 75:2481-2492, 2016. © 2015 Wiley Periodicals, Inc.
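The template-matching step that both the original and the proposed MRF reconstructions rely on is an inner-product search over a precomputed dictionary of simulated signal evolutions. A minimal sketch with a random, purely hypothetical dictionary (real MRF dictionaries are Bloch-simulated per (T1, T2) pair):

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical dictionary: one normalized fingerprint per (T1, T2) candidate pair
D = rng.standard_normal((500, 300))
D /= np.linalg.norm(D, axis=1, keepdims=True)

true_idx, pd_true = 123, 2.5                 # "tissue" entry and proton density
signal = pd_true * D[true_idx]               # noiseless measured time course

match = int(np.argmax(np.abs(D @ signal)))   # best-matching dictionary template
pd_est = float(D[match] @ signal)            # proton density from the projection
```

The matched template's (T1, T2) labels become the parametric-map values at that pixel, and the projection magnitude gives PD; the iterative method alternates this matching with data-fidelity steps at increasing resolution.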
NASA Astrophysics Data System (ADS)
Jia, Weile; Lin, Lin
2017-10-01
Fermi operator expansion (FOE) methods are powerful alternatives to diagonalization type methods for solving Kohn-Sham density functional theory (KSDFT). One example is the pole expansion and selected inversion (PEXSI) method, which approximates the Fermi operator by rational matrix functions and reduces the computational complexity to at most quadratic scaling for solving KSDFT. Unlike diagonalization type methods, the chemical potential often cannot be directly read off from the result of a single step of evaluation of the Fermi operator. Hence multiple evaluations are needed to be sequentially performed to compute the chemical potential to ensure the correct number of electrons within a given tolerance. This hinders the performance of FOE methods in practice. In this paper, we develop an efficient and robust strategy to determine the chemical potential in the context of the PEXSI method. The main idea of the new method is not to find the exact chemical potential at each self-consistent-field (SCF) iteration but to dynamically and rigorously update the upper and lower bounds for the true chemical potential, so that the chemical potential reaches its convergence along the SCF iteration. Instead of evaluating the Fermi operator for multiple times sequentially, our method uses a two-level strategy that evaluates the Fermi operators in parallel. In the regime of full parallelization, the wall clock time of each SCF iteration is always close to the time for one single evaluation of the Fermi operator, even when the initial guess is far away from the converged solution. We demonstrate the effectiveness of the new method using examples with metallic and insulating characters, as well as results from ab initio molecular dynamics.
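The bound-tightening idea can be illustrated with a plain bisection on the chemical potential, where each call to a stand-in electron-count function plays the role of one Fermi-operator evaluation. The sigmoidal count model below is purely illustrative; the paper's contribution is precisely to avoid such strictly sequential evaluations by updating the bounds across SCF iterations and evaluating in parallel.

```python
import math

def find_mu(electron_count, n_target, mu_lo, mu_hi, tol=1e-8):
    """Bisection sketch: electron_count(mu) is monotone in mu, so the
    bracket [mu_lo, mu_hi] around the true chemical potential can be
    tightened until the midpoint yields the target electron number."""
    while mu_hi - mu_lo > tol:
        mu = 0.5 * (mu_lo + mu_hi)
        if electron_count(mu) < n_target:
            mu_lo = mu   # too few electrons: true mu is higher
        else:
            mu_hi = mu   # too many electrons: true mu is lower
    return 0.5 * (mu_lo + mu_hi)

# Toy monotone electron-count model (not a real Fermi-operator evaluation).
count = lambda mu: 10.0 / (1.0 + math.exp(-5.0 * mu))
mu_star = find_mu(count, 5.0, -10.0, 10.0)
```

Each bisection step here costs one full evaluation performed in sequence; the two-level strategy in the paper spends the same evaluations concurrently, so the wall-clock cost per SCF iteration stays close to a single evaluation.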
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-01-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. 
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
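The steady-state-error versus convergence-time trade-off described above appears in even the simplest adaptive estimator. The scalar LMS sketch below is an illustration of that trade-off, not the authors' adaptive Bayesian filter; the model y = w*x + noise and all parameter values are assumptions.

```python
import random

def adapt(rate, w_true=2.0, steps=4000, noise=0.1, seed=0):
    """Scalar LMS sketch: estimate w in y = w*x + noise from a stream of
    observations.  The learning rate sets how far the estimate moves per
    sample: large rates converge fast but leave a noisier steady state,
    small rates converge slowly but settle more precisely."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x = rng.gauss(0.0, 1.0)
        y = w_true * x + rng.gauss(0.0, noise)
        w += rate * x * (y - w * x)   # stochastic gradient step
    return w

w_fast = adapt(0.05)    # fast convergence, larger steady-state jitter
w_slow = adapt(0.005)   # slower convergence, smaller steady-state error
```

The calibration algorithm in the paper makes this trade-off explicit: it derives the error and convergence-time curves analytically and picks the rate that meets one constraint while optimizing the other.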
ERIC Educational Resources Information Center
Santelices, Maria Veronica; Taut, Sandy
2011-01-01
This paper describes convergent validity evidence regarding the mandatory, standards-based Chilean national teacher evaluation system (NTES). The study examined whether NTES identifies--and thereby rewards or punishes--the "right" teachers as high- or low-performing. We collected in-depth teaching performance data on a sample of 58…
Reformulation of Possio's kernel with application to unsteady wind tunnel interference
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1980-01-01
An efficient method for computing the Possio kernel has remained elusive up to the present time. In this paper the Possio kernel is reformulated so that it can be computed accurately using existing high-precision numerical quadrature techniques. Convergence to the correct values is demonstrated and optimization of the integration procedures is discussed. Since more general kernels, such as those associated with unsteady flows in ventilated wind tunnels, are analytic perturbations of the Possio free-air kernel, their collocation matrices can be evaluated more accurately, with an exponential improvement in convergence. An application to predicting the frequency response of an airfoil-trailing-edge control system in a wind tunnel, compared with that in free air, is given, showing strong interference effects.
Optimization algorithms for large-scale multireservoir hydropower systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hiew, K.L.
Five optimization algorithms were rigorously evaluated based on applications to a hypothetical five-reservoir hydropower system. These algorithms are incremental dynamic programming (IDP), successive linear programming (SLP), the feasible direction method (FDM), optimal control theory (OCT) and objective-space dynamic programming (OSDP). The performance of these algorithms was comparatively evaluated using unbiased, objective criteria which include accuracy of results, rate of convergence, smoothness of resulting storage and release trajectories, computer time and memory requirements, robustness and other pertinent secondary considerations. Results have shown that all the algorithms, with the exception of OSDP, converge to optimum objective values within 1.0% difference from one another. The highest objective value is obtained by IDP, followed closely by OCT. Computer time required by these algorithms, however, differs by more than two orders of magnitude, ranging from 10 seconds in the case of OCT to a maximum of about 2000 seconds for IDP. With a well-designed penalty scheme to deal with state-space constraints, OCT proves to be the most efficient algorithm based on its overall performance. SLP, FDM, and OCT were applied to the case study of the Mahaweli project, a ten-powerplant system in Sri Lanka.
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L(1) error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
Does the Finger-to-Nose Test measure upper limb coordination in chronic stroke?
Rodrigues, Marcos R M; Slimovitch, Matthew; Chilingaryan, Gevorg; Levin, Mindy F
2017-01-23
We aimed to kinematically validate that the time to perform the Finger-to-Nose Test (FNT) assesses coordination by determining its construct, convergent and discriminant validity. Experimental, criterion standard study. Both clinical and experimental evaluations were done at a research facility in a rehabilitation hospital. Forty individuals (20 individuals with chronic stroke and 20 healthy, age- and gender-matched individuals) participated. Both groups performed two blocks of 10 to-and-fro pointing movements (non-dominant/affected arm) between a sagittal target and the nose (ReachIn, ReachOut) at a self-paced speed. Time to perform the test was the main outcome. Kinematics (Optotrak, 100 Hz) and clinical impairment/activity levels were evaluated. Spatiotemporal coordination was assessed with the slope (IJC) and cross-correlation (LAG) between elbow and shoulder movements. Compared to controls, individuals with stroke (Fugl-Meyer Assessment, FMA-UE: 51.9 ± 13.2; Box & Blocks, BBT: 72.1 ± 26.9%) made more curved endpoint trajectories using less shoulder horizontal abduction. For construct validity, shoulder range (β = 0.127), LAG (β = 0.855) and IJC (β = -0.191) explained 82% of FNT-time variance for ReachIn, and LAG (β = 0.971) explained 94% for ReachOut in patients with stroke. In contrast, only LAG explained 62% (β = 0.790) and 79% (β = 0.889) of variance for ReachIn and ReachOut respectively in controls. For convergent validity, FNT-time correlated with FMA-UE (r = -0.67, p < 0.01), FMA-Arm (r = -0.60, p = 0.005), biceps spasticity (r = 0.39, p < 0.05) and BBT (r = -0.56, p < 0.01). A cut-off time of 10.6 s discriminated between mild and moderate-to-severe impairment (discriminant validity). Each additional second represented a 42% increase in the odds of greater impairment. For this version of the FNT, the time to perform the test showed construct, convergent and discriminant validity to measure UL coordination in stroke.
[Object Separation from Medical X-Ray Images Based on ICA].
Li, Yan; Yu, Chun-yu; Miao, Ya-jian; Fei, Bin; Zhuang, Feng-yun
2015-03-01
X-ray medical images can reveal diseased tissue in patients and have important reference value for medical diagnosis. To address the problems of noise, poor gray-level gradation, and overlapping (aliased) organs in traditional X-ray images, this paper proposes a method that combines multi-spectrum X-ray imaging with an independent component analysis (ICA) algorithm to separate the target object. First, image de-noising preprocessing based on independent component analysis and sparse code shrinkage ensures the accuracy of target extraction. Then, according to the proportion of each organ in the images, the aliasing thickness matrix of each pixel is isolated. Finally, independent component analysis obtains the convergence matrix to reconstruct the target object using blind-separation theory. In the ICA algorithm, it was found that the target objects separate successfully, judged by a subjective evaluation standard, when the iteration number exceeds 40, and that the target images have high contrast and little distortion when the scale amplitudes lie in the interval [25, 45]. A three-dimensional plot of the peak signal-to-noise ratio (PSNR) shows that different convergence counts and amplitudes have a large influence on image quality. The experimental images achieve the best contrast and edge information with a convergence count of 85 and an amplitude of 35 in the ICA algorithm.
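The blind-separation step can be sketched in miniature with two synthetic sources and an unknown mixing matrix. This is a generic whitening-plus-kurtosis ICA sketch, not the authors' algorithm; the sine and square-wave sources and the mixing matrix are stand-ins for overlapping organ layers seen through two spectral channels.

```python
import numpy as np

# Two known sources, mixed by an "unknown" matrix.
t = np.linspace(0.0, 1.0, 4000)
s1 = np.sin(2.0 * np.pi * 5.0 * t)
s2 = np.sign(np.sin(2.0 * np.pi * 3.0 * t))
X = np.c_[s1, s2] @ np.array([[1.0, 0.6], [0.4, 1.0]]).T   # observed mixtures

# Whiten the observations (zero mean, identity covariance).
Xc = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(Xc.T))
Z = Xc @ E @ np.diag(d ** -0.5) @ E.T

# Scan rotations of the whitened data for the most non-Gaussian projection
# (largest |excess kurtosis|), the classic ICA contrast function.
kurt = lambda y: np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0
angles = np.linspace(0.0, np.pi / 2.0, 180)
a_best = max(angles,
             key=lambda ang: abs(kurt(Z @ np.array([np.cos(ang), np.sin(ang)]))))
R = np.array([[np.cos(a_best), np.sin(a_best)],
              [-np.sin(a_best), np.cos(a_best)]])
recovered = Z @ R.T            # estimated sources, up to order and sign
```

After whitening, the remaining ambiguity is a rotation, so a one-angle scan suffices in two dimensions; practical ICA implementations use fixed-point iterations instead of a grid scan.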
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto-optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto-front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
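The Pareto-optimal set such a genetic algorithm searches for can be made concrete with a small non-dominated filter. This is a generic sketch of Pareto dominance, unrelated to the paper's specific binning or transformation procedures; the sample points are invented for illustration.

```python
def pareto_front(points):
    """Return the non-dominated points, assuming every objective is
    minimized and all points are distinct: p is dominated if some other
    point is no worse in every objective (and differs somewhere)."""
    def dominates(q, p):
        return all(qi <= pi for qi, pi in zip(q, p)) and q != p
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (3, 3) is dominated by (2, 2); (2, 5) is dominated by (1, 4).
front = pareto_front([(1, 4), (2, 2), (4, 1), (3, 3), (2, 5)])
```

A multi-objective GA evolves its population so that the non-dominated subset, computed exactly as above, spreads along and converges toward the true Pareto front.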
Evaluating the Social Validity of the Early Start Denver Model: A Convergent Mixed Methods Study
ERIC Educational Resources Information Center
Ogilvie, Emily; McCrudden, Matthew T.
2017-01-01
An intervention has social validity to the extent that it is socially acceptable to participants and stakeholders. This pilot convergent mixed methods study evaluated parents' perceptions of the social validity of the Early Start Denver Model (ESDM), a naturalistic behavioral intervention for children with autism. It focused on whether the parents…
An Evaluation of Multiple Single-Case Outcome Indicators Using Convergent Evidence Scaling
ERIC Educational Resources Information Center
McGill, Ryan J.; Busse, R. T.
2014-01-01
The purpose of this article is to evaluate the consistency of five single-case outcome indicators, used to assess response-to-intervention data from a pilot Tier 2 reading intervention that was implemented at an elementary school. Using convergent evidence scaling, the indicators were converted onto a common interpretive scale for each case…
A novel recurrent neural network with finite-time convergence for linear programming.
Liu, Qingshan; Cao, Jinde; Chen, Guanrong
2010-11-01
In this letter, a novel recurrent neural network based on the gradient method is proposed for solving linear programming problems. Finite-time convergence of the proposed neural network is proved by using the Lyapunov method. Compared with the existing neural networks for linear programming, the proposed neural network is globally convergent to exact optimal solutions in finite time, which is remarkable and rare in the literature of neural networks for optimization. Some numerical examples are given to show the effectiveness and excellent performance of the new recurrent neural network.
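The flavor of such gradient-based network dynamics can be illustrated by Euler-integrating a non-smooth exact-penalty gradient flow for a tiny LP. This is a generic sketch, not the authors' exact network: the penalty weight rho, step size, and problem data are assumptions. The discontinuous sign terms are the feature that makes finite-time convergence possible in this class of dynamics.

```python
import numpy as np

# Tiny LP:  minimize c.x  subject to  A x = b,  x >= 0;  optimum is x = (1, 0).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.array([0.8, 0.8])       # arbitrary initial state of the "network"
dt, rho = 5e-4, 10.0           # Euler step size and exact-penalty weight
for _ in range(40000):
    # subgradient of  c.x + rho*|A x - b|_1 + rho*sum(max(-x, 0))
    g = c + rho * (A.T @ np.sign(A @ x - b)) + rho * np.sign(np.minimum(x, 0.0))
    x = x - dt * g             # one Euler step of the gradient-flow dynamics
```

With rho large enough, the penalized objective is exact: the flow first slides onto the constraint set and then moves along it to the vertex (1, 0), chattering within a band set by the step size.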
Why does continental convergence stop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hynes, A.
1985-01-01
Convergence between India and Asia slowed at 45 Ma when they collided, but continues today. This requires that substantial proportions of the Indian and/or Asian lithospheric mantle are still being subducted. The resulting slab-pull is probably comparable with that from complete lithospheric slabs and may promote continued continental convergence even after collision. Since descending lithospheric slabs are present at all collision zones at the time of collision, such continued convergence may be general after continental collisions. It may cease only when there is a major (global) plate reorganization which results in new forces on the convergent continents that may counteract the slab-pull. These inferences may be tested on the late Paleozoic collision between Gondwanaland and Laurasia. This is generally considered to have been complete by mid-Permian time (250 Ma). However, this may be only the time of docking of Gondwanaland with North America, not that of the cessation of convergence. Paleomagnetic polar-wander paths for the Gondwanide continents exhibit consistently greater latitudinal shifts from 250 Ma to 200 Ma than those of Laurasia when corrected for post-Triassic drift, suggesting that convergence continued through the late Permian well into the Triassic. It may have been accommodated by crustal thickening under what is now the US Coastal Plain, or by strike-slip faulting. Convergence may have ceased only when Pangea began to fragment again, in which case the cause of its cessation may be related to the cause of continental fragmentation.
Marsh, Herbert W; Martin, Andrew J; Jackson, Susan
2010-08-01
Based on the Physical Self Description Questionnaire (PSDQ) normative archive (n = 1,607 Australian adolescents), 40 of 70 items were selected to construct a new short form (PSDQ-S). The PSDQ-S was evaluated in a new cross-validation sample of 708 Australian adolescents and four additional samples: 349 Australian elite-athlete adolescents, 986 Spanish adolescents, 395 Israeli university students, 760 Australian older adults. Across these six groups, the 11 PSDQ-S factors had consistently high reliabilities and invariant factor structures. Study 1, using a missing-by-design variation of multigroup invariance tests, showed invariance across 40 PSDQ-S items and 70 PSDQ items. Study 2 demonstrated factorial invariance over a 1-year interval (test-retest correlations .57-.90; Mdn = .77), and good convergent and discriminant validity in relation to time. Study 3 showed good and nearly identical support for convergent and discriminant validity of PSDQ and PSDQ-S responses in relation to two other physical self-concept instruments.
Li, Shao-Peng; Cadotte, Marc W; Meiners, Scott J; Pu, Zhichao; Fukami, Tadashi; Jiang, Lin
2016-09-01
Whether plant communities in a given region converge towards a particular stable state during succession has long been debated, but rarely tested at a sufficiently long time scale. By analysing a 50-year continuous study of post-agricultural secondary succession in New Jersey, USA, we show that the extent of community convergence varies with the spatial scale and species abundance classes. At the larger field scale, abundance-based dissimilarities among communities decreased over time, indicating convergence of dominant species, whereas incidence-based dissimilarities showed little temporal trend, indicating no sign of convergence. In contrast, plots within each field diverged in both species composition and abundance. Abundance-based successional rates decreased over time, whereas rare species and herbaceous plants showed little change in temporal turnover rates. Initial abandonment conditions only influenced community structure early in succession. Overall, our findings provide strong evidence for scale and abundance dependence of stochastic and deterministic processes over old-field succession. © 2016 John Wiley & Sons Ltd/CNRS.
Convergence and attractivity of memristor-based cellular neural networks with time delays.
Qin, Sitian; Wang, Jun; Xue, Xiaoping
2015-03-01
This paper presents theoretical results on the convergence and attractivity of memristor-based cellular neural networks (MCNNs) with time delays. Based on a realistic memristor model, an MCNN is modeled using a differential inclusion. The essential boundedness of its global solutions is proven. The state of MCNNs is further proven to be convergent to a critical-point set located in saturated region of the activation function, when the initial state locates in a saturated region. It is shown that the state convergence time period is finite and can be quantitatively estimated using given parameters. Furthermore, the positive invariance and attractivity of state in non-saturated regions are also proven. The simulation results of several numerical examples are provided to substantiate the results. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Michael, Scott; Steiman-Cameron, Thomas Y.; Durisen, Richard H.; Boley, Aaron C.
2012-02-01
We conduct a convergence study of a protostellar disk, subject to a constant global cooling time and susceptible to gravitational instabilities (GIs), at a time when heating and cooling are roughly balanced. Our goal is to determine the gravitational torques produced by GIs, the level to which transport can be represented by a simple α-disk formulation, and to examine fragmentation criteria. Four simulations are conducted, identical except for the number of azimuthal computational grid points used. A Fourier decomposition of non-axisymmetric density structures in cos(mφ) and sin(mφ) is performed to evaluate the amplitudes Am of these structures. The Am, gravitational torques, and the effective Shakura & Sunyaev α arising from gravitational stresses are determined for each resolution. We find nonzero Am for all m-values and that Am summed over all m is essentially independent of resolution. Because the number of measurable m-values is limited to half the number of azimuthal grid points, higher-resolution simulations have a larger fraction of their total amplitude in higher-order structures. These structures act more locally than lower-order structures. Therefore, as the resolution increases the total gravitational stress decreases as well, leading higher-resolution simulations to experience weaker average gravitational torques than lower-resolution simulations. The effective α also depends upon the magnitude of the stresses, thus αeff also decreases with increasing resolution. Our converged αeff is consistent with predictions from an analytic local theory for thin disks by Gammie, but only over many dynamic times when averaged over a substantial volume of the disk.
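The Fourier decomposition used to measure the Am can be sketched with a real FFT over a single azimuthal ring of density samples (an illustrative reduction of the full disk analysis to one ring; the m = 3 test pattern is invented for demonstration).

```python
import numpy as np

def azimuthal_amplitudes(density_ring):
    """Amplitudes A_m of cos(m*phi), sin(m*phi) structure in a ring of
    equally spaced azimuthal density samples.  The largest measurable m
    is half the number of grid points, which is why higher azimuthal
    resolution captures more of the small-scale structure."""
    n = len(density_ring)
    coeffs = np.fft.rfft(density_ring) / n
    amps = 2.0 * np.abs(coeffs)
    amps[0] = np.abs(coeffs[0])   # the m = 0 (axisymmetric) term is not doubled
    return amps

phi = 2.0 * np.pi * np.arange(64) / 64
ring = 1.0 + 0.5 * np.cos(3.0 * phi)      # pure m = 3 structure on a 64-point ring
amps = azimuthal_amplitudes(ring)
```

Doubling the azimuthal grid doubles the number of recoverable m-values, which matches the paper's observation that higher-resolution runs shift a larger fraction of the total amplitude into higher-order, more local structures.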
Naming Game with Multiple Hearers
NASA Astrophysics Data System (ADS)
Li, Bing; Chen, Guanrong; Chow, Tommy W. S.
2013-05-01
A new model called Naming Game with Multiple Hearers (NGMH) is proposed in this paper. A naming game over a population of individuals aims to reach consensus on the name of an object through pair-wise local interactions among all the individuals. The proposed NGMH model describes the learning process of a new word, in a population with one speaker and multiple hearers at each interaction, towards convergence. The characteristics of NGMH are examined on three types of network topologies, namely the ER random-graph network, the WS small-world network, and the BA scale-free network. Comparative analysis of the convergence time is performed, revealing that a topology with a larger average (node) degree can reach consensus faster than the others over the same population. It is found that, for a homogeneous network, the average degree is the limiting value of the number of hearers, which reduces an individual's ability to learn new words, consequently decreasing the convergence time; for a scale-free network, this limiting value is the deviation of the average degree. It is also found that a network with a larger clustering coefficient takes a longer time to converge; in particular, a small-world network with the smallest rewiring probability takes the longest time to reach convergence. As more new nodes are added to scale-free networks with different degree distributions, their convergence time appears to be robust against the network-size variation. Most new findings reported in this paper are different from those of the single-speaker/single-hearer naming games documented in the literature.
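The consensus dynamics described above can be reproduced in miniature on a fully connected population. This is an illustrative reduction of the NGMH model: the fully connected topology, the success rule (collapsing both speaker and hearer vocabularies), and all parameter values are simplifying assumptions, not the paper's exact protocol.

```python
import random

def naming_game(n_agents=20, n_hearers=3, max_steps=200000, seed=2):
    """Minimal NGMH-style sketch on a fully connected population.
    One speaker utters a word to several hearers per step; a hearer
    that already knows the word collapses both vocabularies to it
    (success), otherwise it memorizes the word.  Returns the number
    of steps until every agent holds the same single word."""
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(1, max_steps + 1):
        speaker = rng.randrange(n_agents)
        if not vocab[speaker]:                 # invent a new word if needed
            vocab[speaker].add(next_word)
            next_word += 1
        word = rng.choice(sorted(vocab[speaker]))
        others = [i for i in range(n_agents) if i != speaker]
        for h in rng.sample(others, n_hearers):
            if word in vocab[h]:               # success: collapse vocabularies
                vocab[h] = {word}
                vocab[speaker] = {word}
            else:                              # failure: hearer learns the word
                vocab[h].add(word)
        if all(v == {word} for v in vocab):
            return step
    return max_steps

steps_to_consensus = naming_game()
```

Replacing the fully connected population with ER, WS, or BA graphs (restricting hearers to the speaker's neighbors) is what allows the topology comparisons reported in the paper.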
Human-in-the-loop Bayesian optimization of wearable device parameters
Malcolm, Philippe; Speeckaert, Jozefien; Siviy, Christopher J.; Walsh, Conor J.; Kuindersma, Scott
2017-01-01
The increasing capabilities of exoskeletons and powered prosthetics for walking assistance have paved the way for more sophisticated and individualized control strategies. In response to this opportunity, recent work on human-in-the-loop optimization has considered the problem of automatically tuning control parameters based on realtime physiological measurements. However, the common use of metabolic cost as a performance metric creates significant experimental challenges due to its long measurement times and low signal-to-noise ratio. We evaluate the use of Bayesian optimization—a family of sample-efficient, noise-tolerant, and global optimization methods—for quickly identifying near-optimal control parameters. To manage experimental complexity and provide comparisons against related work, we consider the task of minimizing metabolic cost by optimizing walking step frequencies in unaided human subjects. Compared to an existing approach based on gradient descent, Bayesian optimization identified a near-optimal step frequency with a faster time to convergence (12 minutes, p < 0.01), smaller inter-subject variability in convergence time (± 2 minutes, p < 0.01), and lower overall energy expenditure (p < 0.01). PMID:28926613
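A minimal version of this sample-efficient search can be sketched with a numpy-only Gaussian-process surrogate and a lower-confidence-bound acquisition rule. This is an assumption-laden toy: the study optimized metabolic cost measured on human subjects, its acquisition function and kernel settings are not specified here, and the quadratic objective below is purely illustrative.

```python
import numpy as np

def bayes_opt_1d(f, lo, hi, n_init=3, n_iter=12, ls=0.3, seed=0):
    """Toy 1-D Bayesian optimization: fit a Gaussian-process surrogate
    (RBF kernel, lengthscale ls, unit prior variance) to the samples so
    far, then evaluate the grid point minimizing the lower confidence
    bound mu - 2*sigma.  Returns the best parameter value observed."""
    rng = np.random.default_rng(seed)
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    X = [float(x) for x in rng.uniform(lo, hi, n_init)]
    Y = [f(x) for x in X]
    grid = np.linspace(lo, hi, 300)
    for _ in range(n_iter):
        Xa, Ya = np.array(X), np.array(Y)
        m = Ya.mean()                                  # constant prior mean
        K = k(Xa, Xa) + 1e-6 * np.eye(len(X))          # jitter for stability
        Ks = k(grid, Xa)
        mu = m + Ks @ np.linalg.solve(K, Ya - m)
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        sd = np.sqrt(np.clip(var, 0.0, None))
        x_next = float(grid[np.argmin(mu - 2.0 * sd)]) # explore vs. exploit
        X.append(x_next)
        Y.append(f(x_next))
    return X[int(np.argmin(Y))]

step_cost = lambda s: (s - 1.7) ** 2    # illustrative stand-in objective
best_freq = bayes_opt_1d(step_cost, 0.0, 3.0)
```

The lower confidence bound trades off exploiting regions with low predicted cost against exploring regions with high uncertainty, which is what lets the search converge with far fewer (noisy, expensive) evaluations than gradient descent.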
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2014-03-01
The sign problem in PIMC simulations of non-relativistic fermions increases in severity with the number of fermions and the number of beads (or time-slices) of the simulation. A large number of beads is usually needed, because the conventional primitive propagator is only second-order and the usual thermodynamic energy estimator converges very slowly from below with the total imaginary time. The Hamiltonian energy estimator, while more complicated to evaluate, is a variational upper bound and converges much faster with the total imaginary time, thereby requiring fewer beads. This work shows that when the Hamiltonian estimator is used in conjunction with fourth-order propagators with optimizable parameters, the ground-state energies of 2D parabolic quantum dots with approximately 10 completely polarized electrons can be obtained with only 3-5 beads, before the onset of severe sign problems. This work was made possible by NPRP GRANT #5-674-1-114 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the author.
Differential influence of asynchrony in early and late chronotypes on convergent thinking.
Simor, Péter; Polner, Bertalan
2017-01-01
Eveningness preference (late chronotype) was previously associated with different personality dimensions and thinking styles that were linked to creativity, suggesting that evening-type individuals tend to be more creative than the morning-types. Nevertheless, empirical data on the association between chronotype and creative performance is scarce and inconclusive. Moreover, cognitive processes related to creative thinking are influenced by other factors such as sleep and the time of testing. Therefore, our aim was to examine convergent and divergent thinking abilities in late and early chronotypes, taking into consideration the influence of asynchrony (optimal versus nonoptimal testing times) and sleep quality. We analyzed the data of 36 evening-type and 36 morning-type young, healthy adults who completed the Compound Remote Associates (CRAs) as a convergent and the Just suppose subtest of the Torrance Tests of Creative Thinking as a divergent thinking task within a time interval that did (n = 32) or did not (n = 40) overlap with their individually defined peak times. Chronotype was not directly associated with creative performance, but in the case of the convergent thinking task an interaction between chronotype and asynchrony emerged. Late chronotypes who completed the test at subjectively nonoptimal times showed better performance than late chronotypes tested during their "peak" and early chronotypes tested at their peak or off-peak times. Although insomniac symptoms predicted lower scores in the convergent thinking task, the interaction between chronotype and asynchrony was independent of the effects of sleep quality or the general testing time. Divergent thinking was not predicted by chronotype, asynchrony or their interaction. Our findings indicate that asynchrony might have a beneficial influence on convergent thinking, especially in late chronotypes.
Deep Marginalized Sparse Denoising Auto-Encoder for Image Denoising
NASA Astrophysics Data System (ADS)
Ma, Hongqiang; Ma, Shiping; Xu, Yuelei; Zhu, Mingming
2018-01-01
Stacked Sparse Denoising Auto-Encoder (SSDA) has been successfully applied to image denoising. As a deep network with powerful data-feature learning ability, the SSDA is superior to traditional image denoising algorithms. However, the algorithm has high computational complexity and a slow convergence rate in training. To address this limitation, we present a method of image denoising based on the Deep Marginalized Sparse Denoising Auto-Encoder (DMSDA). The loss function of the Sparse Denoising Auto-Encoder is marginalized so that it satisfies both sparseness and marginality. The experimental results show that the proposed algorithm not only outperforms SSDA in convergence speed and training time, but also achieves better denoising performance than current state-of-the-art denoising algorithms, in terms of both subjective and objective evaluation of image denoising.
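The marginalization idea can be illustrated with the linear marginalized denoising auto-encoder of Chen et al., on which marginalized variants of this kind build: for feature-dropout corruption, the expected reconstruction loss over infinitely many corrupted copies has a closed-form minimizer, so no iterative training is needed. A minimal numpy sketch (the function name and the small ridge term are illustrative assumptions, not from the paper, and the sparsity term of DMSDA is omitted):

```python
import numpy as np

def marginalized_da(X, p):
    """Closed-form linear marginalized denoising auto-encoder.

    X : (d, n) data matrix; p : probability that a feature is dropped.
    Returns the d x d map W minimizing the expected denoising loss
    E || X - W X_corrupted ||^2 over the corruption distribution.
    """
    d = X.shape[0]
    S = X @ X.T                           # scatter matrix X X^T
    q = np.full(d, 1.0 - p)               # survival probability per feature
    Q = S * np.outer(q, q)                # E[x~ x~^T], off-diagonal terms
    np.fill_diagonal(Q, np.diag(S) * q)   # diagonal: feature survives w.p. q
    P = S * q                             # E[x x~^T]: scale columns by q
    # small ridge term (illustrative) keeps the solve well-posed
    W = P @ np.linalg.inv(Q + 1e-8 * np.eye(d))
    return W
```

With no corruption (p = 0) the closed form reduces to W = S S^{-1}, i.e. the identity map, which provides a quick sanity check of the formulas.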
NASA Technical Reports Server (NTRS)
Bui, Trong T.; Mankbadi, Reda R.
1995-01-01
Numerical simulation of a very small amplitude acoustic wave interacting with a shock wave in a quasi-1D convergent-divergent nozzle is performed using an unstructured finite volume algorithm with a piece-wise linear, least square reconstruction, Roe flux difference splitting, and second-order MacCormack time marching. First, the spatial accuracy of the algorithm is evaluated for steady flows with and without the normal shock by running the simulation with a sequence of successively finer meshes. Then the accuracy of the Roe flux difference splitting near the sonic transition point is examined for different reconstruction schemes. Finally, the unsteady numerical solutions with the acoustic perturbation are presented and compared with linear theory results.
Time-dependent variational principle in matrix-product state manifolds: Pitfalls and potential
NASA Astrophysics Data System (ADS)
Kloss, Benedikt; Lev, Yevgeny Bar; Reichman, David
2018-01-01
We study the applicability of the time-dependent variational principle in matrix-product state manifolds for the long time description of quantum interacting systems. By studying integrable and nonintegrable systems for which the long time dynamics are known, we demonstrate that convergence of long time observables is subtle and needs to be examined carefully. Remarkably, for the disordered nonintegrable system we consider, the long time dynamics are in good agreement with the rigorously obtained short time behavior and with previously obtained numerically exact results, suggesting that at least in this case, the apparent convergence of this approach is reliable. Our study indicates that, while great care must be exercised in establishing the convergence of the method, it may still be asymptotically accurate for a class of disordered nonintegrable quantum systems.
NASA Astrophysics Data System (ADS)
Kifonidis, K.; Müller, E.
2012-08-01
Aims: We describe and study a family of new multigrid iterative solvers for the multidimensional, implicitly discretized equations of hydrodynamics. Schemes of this class are free of the Courant-Friedrichs-Lewy condition. They are intended for simulations in which widely differing wave propagation timescales are present. A preferred solver in this class is identified. Applications to some simple stiff test problems that are governed by the compressible Euler equations are presented to evaluate the convergence behavior and the stability properties of this solver. Algorithmic areas are determined where further work is required to make the method sufficiently efficient and robust for future application to difficult astrophysical flow problems. Methods: The basic equations are formulated and discretized on non-orthogonal, structured curvilinear meshes. Roe's approximate Riemann solver and a second-order accurate reconstruction scheme are used for spatial discretization. Implicit Runge-Kutta (ESDIRK) schemes are employed for temporal discretization. The resulting discrete equations are solved with a full-coarsening, non-linear multigrid method. Smoothing is performed with multistage-implicit smoothers. These are applied here to the time-dependent equations by means of dual time stepping. Results: For steady-state problems, our results show that the efficiency of the present approach is comparable to the best implicit solvers for conservative discretizations of the compressible Euler equations that can be found in the literature. The use of red-black as opposed to symmetric Gauss-Seidel iteration in the multistage-smoother is found to have only a minor impact on multigrid convergence. This should enable scalable parallelization without having to seriously compromise the method's algorithmic efficiency. For time-dependent test problems, our results reveal that the multigrid convergence rate degrades with increasing Courant numbers (i.e. time step sizes).
Beyond a Courant number of nine thousand, even complete multigrid breakdown is observed. Local Fourier analysis indicates that the degradation of the convergence rate is associated with the coarse-grid correction algorithm. An implicit scheme for the Euler equations that makes use of the present method was, nevertheless, able to outperform a standard explicit scheme on a time-dependent problem with a Courant number of order 1000. Conclusions: For steady-state problems, the described approach enables the construction of parallelizable, efficient, and robust implicit hydrodynamics solvers. The applicability of the method to time-dependent problems is presently restricted to cases with moderately high Courant numbers. This is due to an insufficient coarse-grid correction of the employed multigrid algorithm for large time steps. Further research will be required to help us to understand and overcome the observed multigrid convergence difficulties for time-dependent problems.
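The coarse-grid correction whose breakdown is diagnosed above can be illustrated with a minimal two-grid cycle for the 1D Poisson equation: weighted-Jacobi smoothing, full-weighting restriction, linear prolongation, and an exact coarse solve. This is a toy sketch of the multigrid convergence mechanism, not the paper's non-linear full-coarsening scheme; all names are illustrative:

```python
import numpy as np

def apply_poisson(u, h):
    """Matrix-free 1D Poisson operator -u'' with zero Dirichlet BCs."""
    r = 2.0 * u
    r[:-1] -= u[1:]
    r[1:] -= u[:-1]
    return r / h**2

def smooth(u, f, h, omega=2.0/3.0, sweeps=2):
    """Weighted-Jacobi smoothing (D = 2/h^2 on this stencil)."""
    for _ in range(sweeps):
        u = u + omega * (h**2 / 2.0) * (f - apply_poisson(u, h))
    return u

def restrict(r):
    """Full-weighting restriction, fine -> coarse."""
    return 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(e):
    """Linear-interpolation prolongation, coarse -> fine."""
    n_c = e.size
    u = np.zeros(2 * n_c + 1)
    u[1::2] = e                          # inject at coincident points
    ec = np.concatenate(([0.0], e, [0.0]))
    u[0::2] = 0.5 * (ec[:-1] + ec[1:])   # average neighbors elsewhere
    return u

def two_grid_cycle(u, f, h):
    u = smooth(u, f, h)                       # pre-smoothing
    rc = restrict(f - apply_poisson(u, h))    # restrict residual
    n_c, hc = rc.size, 2.0 * h
    Ac = (2.0 * np.eye(n_c) - np.eye(n_c, k=1) - np.eye(n_c, k=-1)) / hc**2
    u = u + prolong(np.linalg.solve(Ac, rc))  # coarse-grid correction
    return smooth(u, f, h)                    # post-smoothing
```

Running a handful of cycles on a fine grid reduces the residual by a roughly constant factor per cycle, which is the mesh-independent convergence rate that degrades in the implicit time-dependent setting described above.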
Theoretical analysis of exponential transversal method of lines for the diffusion equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, A.; Raydan, M.; Campo, A.
1996-12-31
Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability, and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method exhibits a very small truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.
Cognitive Load Reduces Perceived Linguistic Convergence Between Dyads.
Abel, Jennifer; Babel, Molly
2017-09-01
Speech convergence is the tendency of talkers to become more similar to someone they are listening or talking to, whether that person is a conversational partner or merely a voice heard repeating words. To elucidate the nature of the mechanisms underlying convergence, this study examines the effect of different levels of task difficulty on speech convergence within dyads collaborating on a task. Dyad members had to build identical LEGO® constructions without being able to see each other's construction, and with each member having half of the instructions required to complete the construction. Three levels of task difficulty were created, with five dyads at each level (30 participants total). Task difficulty was also measured using completion time and error rate. Listeners who heard pairs of utterances from each dyad judged convergence to be occurring in the Easy condition and to a lesser extent in the Medium condition, but not in the Hard condition. Amplitude envelope acoustic similarity analyses of the same utterance pairs showed that convergence occurred in dyads with shorter completion times and lower error rates. Together, these results suggest that while speech convergence is a highly variable behavior, it may occur more in contexts of low cognitive load. The relevance of these results for the current automatic and socially-driven models of convergence is discussed.
NASA Astrophysics Data System (ADS)
Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.
2018-03-01
In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for the calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time was, in general, SOU. In contrast, compared with the DC procedure, the CPU time for the DIAMOND and QUICK schemes using the NWF method is shown to be between 3.8 and 23.1% faster and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.
Logan, Heather; Wolfaardt, Johan; Boulanger, Pierre; Hodgetts, Bill; Seikaly, Hadi
2013-06-19
It is important to understand the perceived value of surgical design and simulation (SDS) amongst surgeons, as this will influence its implementation in clinical settings. The purpose of the present study was to examine the application of the convergent interview technique in the field of surgical design and simulation and evaluate whether the technique would uncover new perceptions of virtual surgical planning (VSP) and medical models not discovered by other qualitative case-based techniques. Five surgeons were asked to participate in the study. Each participant was interviewed following the convergent interview technique. After each interview, the interviewer interpreted the information by seeking agreements and disagreements among the interviewees in order to understand the key concepts in the field of SDS. Fifteen important issues were extracted from the convergent interviews. In general, the convergent interview was an effective technique in collecting information about the perception of clinicians. The study identified three areas where the technique could be improved upon for future studies in the SDS field.
Techniques for Conducting Effective Concept Design and Design-to-Cost Trade Studies
NASA Technical Reports Server (NTRS)
Di Pietro, David A.
2015-01-01
Concept design plays a central role in project success as its product effectively locks the majority of system life cycle cost. Such extraordinary leverage presents a business case for conducting concept design in a credible fashion, particularly for first-of-a-kind systems that advance the state of the art and that have high design uncertainty. A key challenge, however, is to know when credible design convergence has been achieved in such systems. Using a space system example, this paper characterizes the level of convergence needed for concept design in the context of technical and programmatic resource margins available in preliminary design and highlights the importance of design and cost evaluation learning curves in determining credible convergence. It also provides techniques for selecting trade study cases that promote objective concept evaluation, help reveal unknowns, and expedite convergence within the trade space and conveys general practices for conducting effective concept design-to-cost studies.
Evaluating bump control techniques through convergence monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campoli, A.A.
1987-07-01
A coal mine bump is the violent failure of a pillar or pillars due to overstress. Retreat coal mining concentrates stresses on the pillars directly outby gob areas, and the situation becomes critical when mining a coalbed encased in rigid associated strata. Bump control techniques employed by the Olga Mine, McDowell County, WV, were evaluated through convergence monitoring in a Bureau of Mines study. Olga uses a novel pillar splitting mining method to extract 55-ft by 70-ft chain pillars, under 1,100 to 1,550 ft of overburden. Three rows of pillars are mined simultaneously to soften the pillar line and reduce strain energy storage capacity. Localized stress reduction (destressing) techniques, auger drilling and shot firing, induced approximately 0.1 in. of roof-to-floor convergence in "high"-stress pillars near the gob line. Auger drilling of a "low"-stress pillar located between two barrier pillars produced no convergence effects.
On the Structure of the Present-Day Convergence
ERIC Educational Resources Information Center
Korotayev, Andrey; Zinkina, Julia
2014-01-01
Purpose: A substantial number of researchers have investigated the global economic dynamics of this time to disprove unconditional convergence and refute its very idea, stating the phenomenon of conditional convergence instead. However, most respective papers limit their investigation period with the early or mid-2000s. In the authors' opinion,…
Laws, Holly B.; Constantino, Michael J.; Sayer, Aline G.; Klein, Daniel N.; Kocsis, James H.; Manber, Rachel; Markowitz, John C.; Rothbaum, Barbara O.; Steidtmann, Dana; Thase, Michael E.; Arnow, Bruce A.
2016-01-01
Objective This study tested whether discrepancy between patients' and therapists' ratings of the therapeutic alliance, as well as convergence in their alliance ratings over time, predicted outcome in chronic depression treatment. Method Data derived from a controlled trial of partial or non-responders to open-label pharmacotherapy subsequently randomized to 12 weeks of algorithm-driven pharmacotherapy alone or pharmacotherapy plus psychotherapy (Kocsis et al., 2009). The current study focused on the psychotherapy conditions (N = 357). Dyadic multilevel modeling was used to assess alliance discrepancy and alliance convergence over time as predictors of two depression measures: one pharmacotherapist-rated (Quick Inventory of Depressive Symptoms-Clinician; QIDS-C), the other blind interviewer-rated (Hamilton Rating Scale for Depression; HAMD). Results Patients' and therapists' alliance ratings became more similar, or convergent, over the course of psychotherapy. Higher alliance convergence was associated with greater reductions in QIDS-C depression across psychotherapy. Alliance convergence was not significantly associated with declines in HAMD depression; however, greater alliance convergence was related to lower HAMD scores at 3-month follow-up. Conclusions The results partially support the hypothesis that increasing patient-therapist consensus on alliance quality during psychotherapy may improve treatment and longer-term outcomes. PMID:26829714
ERIC Educational Resources Information Center
Maljaars, Jarymke; Noens, Ilse; Scholte, Evert; van Berckelaer-Onnes, Ina
2012-01-01
The Diagnostic Interview for Social and Communication Disorders (DISCO; Wing, 2006) is a standardized, semi-structured and interviewer-based schedule for diagnosis of autism spectrum disorder (ASD). The objective of this study was to evaluate the criterion and convergent validity of the DISCO-11 ICD-10 algorithm in young and low-functioning…
Greenbaum, Gili
2015-09-07
Evaluation of the time scale of the fixation of neutral mutations is crucial to the theoretical understanding of the role of neutral mutations in evolution. Diffusion approximations of the Wright-Fisher model are most often used to derive analytic formulations of genetic drift, as well as for the time scales of the fixation of neutral mutations. These approximations require a set of assumptions, most notably that genetic drift is a stochastic process in a continuous allele-frequency space, an assumption appropriate for large populations. Here equivalent approximations are derived using a coalescent theory approach which relies on a different set of assumptions than the diffusion approach, and adopts a discrete allele-frequency space. Solutions for the mean and variance of the time to fixation of a neutral mutation derived from the two approaches converge for large populations but slightly differ for small populations. A Markov chain analysis of the Wright-Fisher model for small populations is used to evaluate the solutions obtained, showing that both the mean and the variance are better approximated by the coalescent approach. The coalescence approximation represents a tighter upper-bound for the mean time to fixation than the diffusion approximation, while the diffusion approximation and coalescence approximation form an upper and lower bound, respectively, for the variance. The converging solutions and the small deviations of the two approaches strongly validate the use of diffusion approximations, but suggest that coalescent theory can provide more accurate approximations for small populations. Copyright © 2015 Elsevier Ltd. All rights reserved.
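The Markov chain analysis described above can be reproduced in a few lines for a small haploid Wright-Fisher population of M gene copies: condition the chain on eventual fixation (a Doob h-transform with h(i) = i/M), solve a linear system for the mean absorption time, and compare with the classical diffusion result t(p) = -2M(1-p)ln(1-p)/p at p = 1/M. This is a sketch under these standard conventions, not the paper's code:

```python
import numpy as np
from math import comb

def wf_transition_matrix(M):
    """Neutral Wright-Fisher transition matrix for M haploid gene copies:
    P[i, j] = Binomial(M, j) (i/M)^j (1 - i/M)^(M-j)."""
    P = np.zeros((M + 1, M + 1))
    for i in range(M + 1):
        p = i / M
        for j in range(M + 1):
            P[i, j] = comb(M, j) * p**j * (1 - p)**(M - j)
    return P

def mean_conditional_fixation_time(M):
    """Exact mean time to fixation of a single neutral mutant, conditional
    on fixation, from the Markov chain (Doob h-transform, h(i) = i/M)."""
    P = wf_transition_matrix(M)
    i = np.arange(1, M, dtype=float)
    # conditioned transitions among transient states 1..M-1:
    # Q*[i, j] = P[i, j] * (j/i); loss (state 0) is excluded automatically
    Q = P[1:M, 1:M] * (i[None, :] / i[:, None])
    # mean absorption times solve (I - Q) t = 1
    t = np.linalg.solve(np.eye(M - 1) - Q, np.ones(M - 1))
    return t[0]   # start from a single mutant copy
```

For M = 20 the exact chain value already lies close to, and below, the diffusion approximation, consistent with the upper-bound behavior discussed in the abstract.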
Bischof, Martin; Obermann, Caitriona; Hartmann, Matthias N; Hager, Oliver M; Kirschner, Matthias; Kluge, Agne; Strauss, Gregory P; Kaiser, Stefan
2016-11-22
Negative symptoms are considered core symptoms of schizophrenia. The Brief Negative Symptom Scale (BNSS) was developed to measure this symptomatic dimension according to a current consensus definition. The present study examined the psychometric properties of the German version of the BNSS. To expand former findings on convergent validity, we employed the Temporal Experience of Pleasure Scale (TEPS), a hedonic self-report that distinguishes between consummatory and anticipatory pleasure. Additionally, we addressed convergent validity with an observer-rated assessment of apathy, the Apathy Evaluation Scale (AES), which was completed by the patient's primary nurse. Data were collected from 75 in- and outpatients of the Psychiatric Hospital, University of Zurich, diagnosed with either schizophrenia or schizoaffective disorder. We assessed convergent and discriminant validity, internal consistency, and inter-rater reliability. We largely replicated the findings of the original version, showing good psychometric properties of the BNSS. In addition, the primary nurse's evaluation correlated moderately with the interview-based clinician rating. The BNSS anhedonia items showed good convergent validity with the TEPS. Overall, the German BNSS shows good psychometric properties comparable to the original English version. Convergent validity extends beyond interview-based assessments of negative symptoms to self-rated anhedonia and observer-rated apathy.
Relative Motion of the Nazca (Farallon) and South American Plates Since Late Cretaceous Time
NASA Astrophysics Data System (ADS)
Pardo-Casas, Federico; Molnar, Peter
1987-06-01
By combining reconstructions of the South American and African plates, the African and Antarctic plates, the Antarctic and Pacific plates, and the Pacific and Nazca plates, we calculated the relative positions and history of convergence of the Nazca and South American plates. Despite variations in convergence rates along the Andes, periods of rapid convergence (averaging more than 100 mm/a) between the times of anomalies 21 (49.5 Ma) and 18 (42 Ma) and since anomaly 7 (26 Ma) coincide with two phases of relatively intense tectonic activity in the Peruvian Andes, known as the late Eocene Incaic and Mio-Pliocene Quechua phases. The periods of relatively slow convergence (50 to 55 ± 30 mm/a at the latitude of Peru and less farther south) between the times of anomalies 30-31 (68.5 Ma) and 21 and between those of anomalies 13 (36 Ma) and 7 correlate with periods during which tectonic activity was relatively quiescent. Thus these reconstructions provide quantitative evidence for a correlation of the intensity of tectonic activity in the overriding plate at subduction zones with variations in the convergence rate.
Tsuruta, S; Misztal, I; Strandén, I
2001-05-01
Utility of the preconditioned conjugate gradient algorithm with a diagonal preconditioner for solving mixed-model equations in animal breeding applications was evaluated with 16 test problems. The problems included single- and multiple-trait analyses, with data on beef, dairy, and swine ranging from small examples to national data sets. Multiple-trait models considered low and high genetic correlations. Convergence was based on relative differences between left- and right-hand sides. The ordering of equations was fixed effects followed by random effects, with no special ordering within random effects. The preconditioned conjugate gradient program implemented with double precision converged for all models. However, when implemented in single precision, the preconditioned conjugate gradient algorithm did not converge for seven large models. The preconditioned conjugate gradient and successive overrelaxation algorithms were subsequently compared for 13 of the test problems. The preconditioned conjugate gradient algorithm was easy to implement with the iteration on data for general models. However, successive overrelaxation requires specific programming for each set of models. On average, the preconditioned conjugate gradient algorithm converged in three times fewer rounds of iteration than successive overrelaxation. With straightforward implementations, programs using the preconditioned conjugate gradient algorithm may be two or more times faster than those using successive overrelaxation. However, programs using the preconditioned conjugate gradient algorithm would use more memory than would comparable implementations using successive overrelaxation. Extensive optimization of either algorithm can influence rankings. 
The preconditioned conjugate gradient implemented with iteration on data, a diagonal preconditioner, and in double precision may be the algorithm of choice for solving mixed-model equations when sufficient memory is available and ease of implementation is essential.
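The core of the evaluated solver, conjugate gradients with a diagonal (Jacobi) preconditioner and a residual-based stopping test, can be sketched in a few lines. This is a generic dense-matrix illustration, not the iteration-on-data implementation used for the animal-breeding models; the function name is an assumption:

```python
import numpy as np

def pcg_diag(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradients for SPD A x = b with a diagonal (Jacobi)
    preconditioner. Convergence is judged by the relative difference
    between left- and right-hand sides, ||b - A x|| / ||b||."""
    M_inv = 1.0 / np.diag(A)      # diagonal preconditioner, applied cheaply
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv * r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) / np.linalg.norm(b) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Note the double-precision arithmetic: as the abstract reports, the same iteration in single precision can fail to converge on large, ill-conditioned mixed-model systems.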
Setia, Maninder Singh; Quesnel-Vallee, Amelie; Abrahamowicz, Michal; Tousignant, Pierre; Lynch, John
2009-01-01
Recent immigrants typically have better physical health than the native born population. However, this 'healthy immigrant effect' tends to gradually wane over time, with increasing length of residence in the host country. To assess whether the body mass index (BMI) of different immigrant groups converged to the Canadian population's levels, we estimated 12-year trajectories of changes in BMI (accounting for socio-demographic changes). Using data from seven longitudinal waves of the National Population Health Survey (1994 through 2006), we compared the changes in BMI (kg/m(2)) among three groups: white immigrants, non-white immigrants and Canadian born, aged 18-54 at baseline. We applied linear random effects models to evaluate these BMI separately in 2,504 males and 2,960 females. BMI increased in Canadian born, white immigrants, and non-white immigrants over the 12-year period. However, non-white immigrants (males and females) had a lower mean BMI than Canadian born individuals during this period [Males: -2.27, 95% Confidence interval (CI) -3.02 to -1.53; Females: -1.84, 95% CI -2.79 to -0.90]. In contrast, the mean BMI in white male immigrants and Canadian born individuals were similar (-0.32, 95% CI -0.91 to 0.27). Even after adjusting for time since immigration, non-white immigrants had lower BMI than white immigrants. White male immigrants were the only sub-group to converge to the BMI of the Canadian born population. These results indicate that the loss of 'healthy immigrant effect' with regard to convergence of BMI to Canadian levels may not be experienced equally by all immigrants in Canada.
[Evaluation of Motion Sickness Induced by 3D Video Clips].
Matsuura, Yasuyuki; Takada, Hiroki
2016-01-01
The use of stereoscopic images has been spreading rapidly. Nowadays, stereoscopic movies are nothing new to people. Stereoscopic systems date back to around 280 B.C., when Euclid first recognized the concept of depth perception by humans. Despite the increase in the production of three-dimensional (3D) display products and many studies on stereoscopic vision, the effect of stereoscopic vision on the human body has been insufficiently understood. However, symptoms such as eye fatigue and 3D sickness have been concerns when viewing 3D films for a prolonged period of time; therefore, it is important to consider the safety of viewing virtual 3D contents as a contribution to society. It is generally explained to the public that accommodation and convergence are mismatched during stereoscopic vision and that this is the main reason for the visual fatigue and visually induced motion sickness (VIMS) during 3D viewing. We have devised a method to simultaneously measure lens accommodation and convergence. We used this simultaneous measurement device to characterize 3D vision. Fixation distance was compared between accommodation and convergence during the viewing of 3D films with repeated measurements. Time courses of these fixation distances and their distributions were compared in subjects who viewed 2D and 3D video clips. The results indicated that after 90 s of continuously viewing 3D images, the accommodative power does not correspond to the distance of convergence. In this paper, remarks on methods to measure the severity of motion sickness induced by viewing 3D films are also given. From the epidemiological viewpoint, it is useful to obtain novel knowledge for the reduction and/or prevention of VIMS. We should accumulate empirical data on motion sickness, which may contribute to the development of relevant fields in science and technology.
NASA Astrophysics Data System (ADS)
Kumar, Love; Sharma, Vishal; Singh, Amarpal
2018-02-01
Wireless sensor networks have tremendous applications in civil, military, and environmental monitoring. In most applications, sensor data must be propagated over the internet/core networks, which results in a backhaul bottleneck. Consequently, there is a need to backhaul the sensed information of such networks while also prolonging the transmission link. The passive optical network (PON) is a next-generation access technology emerging as a potential candidate for convergence of the sensed data to the core system. Earlier work with a single optical line terminal PON was demonstrated and investigated merely analytically. This work is an attempt to demonstrate a practical model of a bidirectional single-sink wireless sensor network-PON converged network in which the collected data from cluster heads are transmitted over PON networks. Further, the modeled converged structure has been investigated under the influence of double, single, and tandem sideband modulation schemes, incorporating a corresponding phase delay to the sensor data entities, which has been overlooked in the past. The outcome illustrates the successful fusion of the sensor data entities over PON with acceptable bit error rate and signal-to-noise ratio, serving as a potential development in the sphere of such converged networks. It has also been revealed that the data entities treated with the tandem sideband modulation scheme help in improving the performance of the converged structure. Additionally, uplink transmission is analyzed with queueing theory in terms of time cycle, average time delay, data packet generation, and bandwidth utilization. An analytical analysis of the proposed converged network shows that the average time delay for data packet transmission is less than the time-cycle delay.
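As a stand-in illustration of the kind of queueing-theory delay analysis mentioned above, the textbook M/M/1 formulas relate arrival rate, service rate, and average time delay. The paper's actual uplink model and parameters are more involved and are not reproduced here; this is only the generic delay/utilization relationship:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 queue metrics: arrival rate lam, service rate mu.
    A textbook illustration of average-delay analysis, not the paper's
    polling/time-cycle model."""
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu           # utilization
    L = rho / (1.0 - rho)    # mean number in system
    W = 1.0 / (mu - lam)     # mean time in system (average delay)
    Wq = rho / (mu - lam)    # mean waiting time in queue
    return {"rho": rho, "L": L, "W": W, "Wq": Wq}
```

Little's law (L = lam * W) ties the occupancy and delay figures together, which is a convenient internal consistency check for any such analysis.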
Safety performance evaluation of converging chevron pavement markings : final report.
DOT National Transportation Integrated Search
2014-12-01
The objectives of this study were (1) to perform a detailed safety analysis of converging chevron : pavement markings, quantifying the potential safety benefits and developing an understanding of the : incident types addressed by the treatment, and (...
Evaluation of the effectiveness of converging chevron pavement markings.
DOT National Transportation Integrated Search
2011-10-01
Converging chevron pavement markings have recently seen rising interest in the United States as a : means to reduce speeds at high-speed locations in a desire to improve safety performance. This report : presents an investigation into the effectivene...
Evaluating the effectiveness of converging chevron pavement markings.
DOT National Transportation Integrated Search
2010-10-01
Converging chevron pavement markings have recently seen rising interest in the United States as a : means to reduce speeds at high-speed locations in a desire to improve safety performance. This report : presents an investigation into the effectivene...
Vadose zone flow convergence test suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butcher, B. T.
Performance Assessment (PA) simulations for engineered disposal systems at the Savannah River Site involve highly contrasting materials and moisture conditions at and near saturation. These conditions cause severe convergence difficulties that typically result in unacceptably slow convergence, long simulation times, or excessive analyst effort. Adequate convergence is usually achieved in a trial-and-error manner by applying under-relaxation to the Saturation or Pressure variable, in a series of ever-decreasing relaxation values. SRNL would like a more efficient scheme implemented inside PORFLOW to achieve flow convergence in a more reliable and efficient manner. To this end, a suite of test problems that illustrate these convergence problems is provided to facilitate diagnosis and development of an improved convergence strategy. The attached files are being transmitted to you describing the test problem and proposed resolution.
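The under-relaxation strategy described above can be illustrated with a scalar fixed-point iteration: blending the new iterate with the old one by a relaxation factor omega stabilizes updates that would otherwise diverge, at the cost of slower convergence. A generic sketch (function and variable names are illustrative, not PORFLOW's):

```python
def relaxed_fixed_point(g, x0, omega, tol=1e-12, max_iter=10000):
    """Under-relaxed fixed-point iteration x <- (1 - omega) x + omega g(x).
    Smaller omega trades speed for robustness, mirroring the practice of
    trying a series of ever-decreasing relaxation values."""
    x = x0
    for k in range(max_iter):
        x_new = (1.0 - omega) * x + omega * g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

For example, g(x) = 3 - 2x has the fixed point x = 1 but |g'(x)| = 2 > 1, so the plain iteration (omega = 1) oscillates and diverges; with omega = 0.4 the relaxed iteration contracts by a factor |1 - 3*omega| = 0.2 per step and converges geometrically.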
NASA Astrophysics Data System (ADS)
Weller, Evan; Jakob, Christian; Reeder, Michael
2017-04-01
Precipitation is often organized along coherent lines of low-level convergence, which at longer time and space scales form the well-known convergence zones over the tropical oceans. Here, an automated, objective method is used to identify instantaneous low-level convergence lines in the current climate of CMIP5 models, and the results are compared with those from reanalysis data. The identified convergence lines are combined with precipitation to assess the extent to which precipitation around the globe is associated with convergence in the lower troposphere. Differences between the current climate of the models and observations are diagnosed in terms of the frequency and intensity of precipitation associated with convergence lines and of precipitation that is not. Future changes in the frequency and intensity of convergence lines, and the associated precipitation, are also investigated for their contribution to the simulated future changes in total precipitation.
Prentice, Ross L; Zhao, Shanshan
2018-01-01
The Dabrowska (Ann Stat 16:1475-1489, 1988) product integral representation of the multivariate survivor function is extended, leading to a nonparametric survivor function estimator for an arbitrary number of failure time variates that has a simple recursive formula for its calculation. Empirical process methods are used to sketch proofs for this estimator's strong consistency and weak convergence properties. Summary measures of pairwise and higher-order dependencies are also defined and nonparametrically estimated. Simulation evaluation is given for the special case of three failure time variates.
A multistage time-stepping scheme for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1985-01-01
A class of explicit multistage time-stepping schemes is used to construct an algorithm for solving the compressible Navier-Stokes equations. Flexibility in treating arbitrary geometries is obtained with a finite-volume formulation. Numerical efficiency is achieved by employing techniques for accelerating convergence to steady state. Computer processing is enhanced through vectorization of the algorithm. The scheme is evaluated by solving laminar and turbulent flows over a flat plate and an NACA 0012 airfoil. Numerical results are compared with theoretical solutions or other numerical solutions and/or experimental data.
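The explicit multistage time-stepping idea can be illustrated on a scalar model problem (a hedged sketch; the stage coefficients shown are generic illustrative values, not necessarily those of the Swanson-Turkel scheme):

```python
def multistage_step(u, residual, dt, alphas=(0.25, 0.333, 0.5, 1.0)):
    """One explicit multistage (Runge-Kutta-like) step toward steady state:
    u_k = u_n + alpha_k * dt * R(u_{k-1}), returning the final stage."""
    u0 = u
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u

# March du/dt = R(u) = 1 - u in pseudo-time to its steady state u = 1
R = lambda v: 1.0 - v
u = 0.0
for _ in range(200):
    u = multistage_step(u, R, dt=0.5)
```

In steady-state solvers of this kind, only the converged state matters, so the stage coefficients are chosen for damping and stability rather than time accuracy.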
Sliding mode control method having terminal convergence in finite time
NASA Technical Reports Server (NTRS)
Venkataraman, Subramanian T. (Inventor); Gulati, Sandeep (Inventor)
1994-01-01
An object of this invention is to provide robust nonlinear controllers for robotic operations in unstructured environments based upon a new class of closed loop sliding control methods, sometimes denoted terminal sliders, where the new class will enforce closed-loop control convergence to equilibrium in finite time. Improved performance results from the elimination of high frequency control switching previously employed for robustness to parametric uncertainties. Improved performance also results from the dependence of terminal slider stability upon the rate of change of uncertainties over the sliding surface rather than the magnitude of the uncertainty itself for robust control. Terminal sliding mode control also yields improved convergence where convergence time is finite and is to be controlled. A further object is to apply terminal sliders to robot manipulator control and benchmark performance with the traditional computed torque control method and provide for design of control parameters.
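The finite-time ("terminal") convergence property can be illustrated with the scalar terminal attractor x' = -k|x|^alpha sgn(x), 0 < alpha < 1, which settles exactly at t_f = |x(0)|^(1-alpha)/(k(1-alpha)). This is a standard textbook model of the phenomenon, not the patented controller itself:

```python
import math

def simulate_terminal(x0, k=1.0, alpha=0.5, dt=1e-4, t_end=2.1):
    """Euler simulation of the terminal attractor x' = -k*|x|^alpha*sgn(x)."""
    x, t = x0, 0.0
    while t < t_end:
        x -= dt * k * abs(x) ** alpha * math.copysign(1.0, x)
        t += dt
    return x

# Analytic finite settling time: t_f = |x0|^(1-alpha) / (k*(1-alpha))
x0, k, alpha = 1.0, 1.0, 0.5
t_f = abs(x0) ** (1 - alpha) / (k * (1 - alpha))   # = 2.0 here
x_final = simulate_terminal(x0, k, alpha)           # ~0 shortly after t_f
```

By contrast, a linear law x' = -k x only converges asymptotically; the fractional exponent is what produces the finite settling time.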
Discrete time learning control in nonlinear systems
NASA Technical Reports Server (NTRS)
Longman, Richard W.; Chang, Chi-Kuang; Phan, Minh
1992-01-01
In this paper digital learning control methods are developed primarily for use in single-input, single-output nonlinear dynamic systems. Conditions for convergence of the basic form of learning control based on integral control concepts are given, and shown to be satisfied by a large class of nonlinear problems. It is shown that it is not the gross nonlinearities of the differential equations that matter in the convergence, but rather the much smaller nonlinearities that can manifest themselves during the short time interval of one sample time. New algorithms are developed that eliminate restrictions on the size of the learning gain, and on knowledge of the appropriate sign of the learning gain, for convergence to zero error in tracking a feasible desired output trajectory. It is shown that one of the new algorithms can give guaranteed convergence in the presence of actuator saturation constraints, and can indicate when the requested trajectory is beyond the actuator capabilities.
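The integral-concept learning update can be sketched on a toy discrete-time plant (an illustration of the general idea only, not the paper's specific algorithms; the plant, gain, and trajectory are made up, and the simple condition |1 - gain*b| < 1 on the first Markov parameter b guarantees convergence here):

```python
def ilc(plant, u, y_ref, gain, trials=100):
    """Integral-type iterative learning control: after each repetition,
    u_{j+1}(t) = u_j(t) + gain * e_j(t+1), with e the tracking error."""
    for _ in range(trials):
        y = plant(u)
        e = [r - yo for r, yo in zip(y_ref[1:], y[1:])]
        u = [ui + gain * ei for ui, ei in zip(u, e)]
    return u

# Toy SISO plant: y[t+1] = 0.9*y[t] + 0.5*u[t], y[0] = 0
def plant(u):
    y = [0.0]
    for ut in u:
        y.append(0.9 * y[-1] + 0.5 * ut)
    return y

y_ref = [0.0] + [1.0] * 10             # a feasible desired output trajectory
u = ilc(plant, [0.0] * 10, y_ref, gain=1.0)
y = plant(u)                           # tracks y_ref closely after learning
```

The error contracts from repetition to repetition even though the control at each instant is updated from a single measured error, which is the "integral control in the repetition domain" interpretation.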
Efficient self-consistent viscous-inviscid solutions for unsteady transonic flow
NASA Technical Reports Server (NTRS)
Howlett, J. T.
1985-01-01
An improved method is presented for coupling a boundary layer code with an unsteady inviscid transonic computer code in a quasi-steady fashion. At each fixed time step, the boundary layer and inviscid equations are successively solved until the process converges. An explicit coupling of the equations is described which greatly accelerates the convergence process. Computer times for converged viscous-inviscid solutions are about 1.8 times the comparable inviscid values. Comparison of the results obtained with experimental data on three airfoils are presented. These comparisons demonstrate that the explicitly coupled viscous-inviscid solutions can provide efficient predictions of pressure distributions and lift for unsteady two-dimensional transonic flows.
NASA Astrophysics Data System (ADS)
Xing, Yanyuan; Yan, Yubin
2018-03-01
Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate O(k^(3-α)), 0 < α < 1, by directly approximating the integer-order derivative with some finite difference quotients in the definition of the Caputo fractional derivative, see also Lv and Xu [20] (2016), where k is the time step size. Under the assumption that the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by using an energy method that the corresponding numerical method for solving the time fractional partial differential equation has the convergence rate O(k^(3-α)), 0 < α < 1, uniformly with respect to the time variable t. However, in general the solution of the time fractional partial differential equation has low regularity, and in this case the numerical method fails to have the convergence rate O(k^(3-α)), 0 < α < 1, uniformly with respect to the time variable t. In this paper, we first obtain a similar approximation scheme to the Riemann-Liouville fractional derivative with the convergence rate O(k^(3-α)), 0 < α < 1, as in Gao et al. [11] (2014) by approximating the Hadamard finite-part integral with piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show by using Laplace transform methods that the time discretization scheme has the convergence rate O(k^(3-α)), 0 < α < 1, for any fixed t_n > 0 for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
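A rate such as O(k^(3-α)) is typically verified empirically by estimating the observed order p from errors at successive step sizes via p = log(e1/e2) / log(k1/k2). A generic order-checking sketch on synthetic errors (not the paper's scheme itself):

```python
import math

def observed_order(errors, steps):
    """Estimate the observed convergence order p from e_i ~ C*k_i^p using
    p = log(e1/e2) / log(k1/k2) for consecutive step sizes."""
    return [math.log(e1 / e2) / math.log(k1 / k2)
            for (e1, e2), (k1, k2) in zip(zip(errors, errors[1:]),
                                          zip(steps, steps[1:]))]

# Synthetic errors that follow e = C * k**(3 - alpha) with alpha = 0.5
alpha = 0.5
steps = [0.1, 0.05, 0.025]
errors = [2.0 * k ** (3 - alpha) for k in steps]
orders = observed_order(errors, steps)   # each entry recovers 3 - alpha = 2.5
```

With genuinely low-regularity solutions, this measured order would fall below 3 - α, which is exactly the failure mode the paper addresses.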
Timing group delay and differential code bias corrections for BeiDou positioning
NASA Astrophysics Data System (ADS)
Guo, Fei; Zhang, Xiaohong; Wang, Jinling
2015-05-01
This article first clarifies the relationship between the timing group delay (TGD) and differential code bias (DCB) parameters for BDS, and demonstrates the equivalence of the TGD and DCB correction models, combining theory with practice. The TGD/DCB correction models have been extended to various occasions for BDS positioning, and such models have been evaluated with real triple-frequency datasets. To test the effectiveness of the broadcast TGDs in the navigation message and the DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays, and carrier phase ambiguities is investigated comprehensively. Comparative analysis shows that unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the PPP estimates are subject to varying degrees of influence. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency positioning, and the dual-frequency combinations are much more sensitive to the differential code biases, particularly the B2B3 combination. For PPP, uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases can be mitigated over time and comparable positioning accuracy can be achieved after convergence, it is suggested to properly handle the differential code biases, since this is vital for PPP convergence and integer ambiguity resolution.
Method and apparatus for determining and utilizing a time-expanded decision network
NASA Technical Reports Server (NTRS)
de Weck, Olivier (Inventor); Silver, Matthew (Inventor)
2012-01-01
A method, apparatus and computer program for determining and utilizing a time-expanded decision network is presented. A set of potential system configurations is defined. Next, switching costs are quantified to create a "static network" that captures the difficulty of switching among these configurations. A time-expanded decision network is provided by expanding the static network in time, including chance and decision nodes. Minimum cost paths through the network are evaluated under plausible operating scenarios. The set of initial design configurations are iteratively modified to exploit high-leverage switches and the process is repeated to convergence. Time-expanded decision networks are applicable, but not limited to, the design of systems, products, services and contracts.
McCaffrey, Stacey A; Black, Ryan A; Butler, Stephen F
2018-03-01
The PainCAS is a web-based clinical tool for assessing and tracking pain and opioid risk in chronic pain patients. Despite evidence for its utility within the clinical setting, the PainCAS scales have never been subject to psychometric evaluation. The current study is the first to evaluate the psychometric properties of the PainCAS Interference with Daily Activities, Psychological/Emotional Distress, and Pain scales. Patients (N = 4797) from treatment centers and hospitals in 16 different states completed the PainCAS as part of routine clinical assessment. A subsample (n = 73) from two hospital-based treatment centers also completed comparator measures. Rasch Rating Scale Models were employed to evaluate the Interference with Daily Activities and Psychological/Emotional Distress scales, and empirical evaluation included assessment of dimensionality, discrimination, item fit, reliability, information, and person-to-item targeting. Additionally, convergent and discriminant validity were evaluated through classical test theory approaches. Convergent validity of the Pain scales was evaluated through correlations with corresponding comparator items. One Interference with Daily Activities item was removed due to poor functioning and discrimination. The retained items from the Interference with Daily Activities and Psychological/Emotional Distress scales conformed to unidimensional Rasch measurement models, yielding satisfactory item fit, reliability, precision, and coverage. Further, results provided support for the convergent and discriminant validity of these two scales. Convergent validity between the PainCAS Pain and BPI Pain items was also strong. Taken together, results provide strong psychometric support for these PainCAS Pain scales. Strengths and limitations of the current study are discussed.
Self-similar dynamic converging shocks - I. An isothermal gas sphere with self-gravity
NASA Astrophysics Data System (ADS)
Lou, Yu-Qing; Shi, Chun-Hui
2014-07-01
We explore novel self-similar dynamic evolution of converging spherical shocks in a self-gravitating isothermal gas under conceivable astrophysical situations. The construction of such converging shocks involves a time-reversal operation on feasible flow profiles in self-similar expansion, with proper care for the increasing direction of the specific entropy. Pioneered by Guderley in 1942, but so far without self-gravity, self-similar converging shocks are important for implosion processes in aerodynamics, combustion, and inertial fusion. Self-gravity necessarily plays a key role for grossly spherical structures in very broad contexts of astrophysics and cosmology, such as planets, stars, molecular clouds (cores), compact objects, planetary nebulae, supernovae, gamma-ray bursts, supernova remnants, globular clusters, galactic bulges, elliptical galaxies, and clusters of galaxies, as well as relatively hollow cavity or bubble structures on diverse spatial and temporal scales. Large-scale dynamic flows associated with such quasi-spherical systems (including collapses, accretions, fall-backs, winds and outflows, explosions, etc.) in their initiation, formation, and evolution are likely to encounter converging spherical shocks at times. Our formalism lays an important theoretical basis for pertinent astrophysical and cosmological applications of various converging shock solutions and for developing and calibrating numerical codes. As examples, we describe converging-shock-triggered star formation, supernova explosions, and void collapses.
EVALUATION OF CONVERGENT SPRAY TECHNOLOGY™ SPRAY PROCESS FOR ROOF COATING APPLICATION
The overall goal of this project was to demonstrate the feasibility of Convergent Spray Technology™ for the roofing industry. This was accomplished by producing an environmentally compliant coating utilizing recycled materials, a CST™ spray process portable application cart, a...
Linear perturbations of a Schwarzschild black hole by a thin disc - convergence
NASA Astrophysics Data System (ADS)
Čížek, P.; Semerák, O.
2012-07-01
In order to find the perturbation of a Schwarzschild space-time due to a rotating thin disc, we try to adjust the method used in [4] for the case of perturbation by a one-dimensional ring. This involves the solution of the stationary axisymmetric Einstein equations in terms of spherical-harmonic expansions, whose convergence, however, turned out to be questionable in numerical examples. Here we show, analytically, that the series are almost everywhere convergent, but in some regions the convergence is not absolute.
Linkographic Evidence for Concurrent Divergent and Convergent Thinking in Creative Design
ERIC Educational Resources Information Center
Goldschmidt, Gabriela
2016-01-01
For a long time, the creativity literature has stressed the role of divergent thinking in creative endeavor. More recently, it has been recognized that convergent thinking also has a role in creativity, and the design literature, which sees design as a creative activity a priori, has largely adopted this view: Divergent and convergent thinking are…
Generalization bounds of ERM-based learning processes for continuous-time Markov chains.
Zhang, Chao; Tao, Dacheng
2012-12-01
Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.
Genetic Algorithm Optimizes Q-LAW Control Parameters
NASA Technical Reports Server (NTRS)
Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard
2008-01-01
A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performance of the Q-law control parameters is evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by fewer other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
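The non-dominated sorting fitness described above can be sketched as follows (a minimal illustration with made-up objective values, not the actual Q-law tool):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better
    in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_rank(points):
    """Fitness = number of other points dominating each point (0 on the Pareto front)."""
    n = len(points)
    return [sum(dominates(points[j], points[i]) for j in range(n) if j != i)
            for i in range(n)]

# Objectives: (flight time, propellant mass), both to be minimized
pop = [(10.0, 5.0), (12.0, 4.0), (11.0, 6.0), (9.0, 7.0)]
ranks = nondominated_rank(pop)   # only (11.0, 6.0) is dominated here
```

Solutions with rank 0 form the current Pareto front; lower-ranked (less dominated) solutions are favored during reproduction, which is what drives the population toward the front.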
Neural network for control of rearrangeable Clos networks.
Park, Y K; Cherkassky, V
1994-09-01
Rapid evolution in the field of communication networks requires high-speed switching technologies. This involves a high degree of parallelism in switching control and routing performed at the hardware level. Multistage crossbar networks have long been attractive to switch designers. In this paper, a neural network approach to controlling a three-stage Clos network in real time is proposed. This controller provides optimal routing of communication traffic requests on a call-by-call basis by rearranging existing connections, with a minimum-length rearrangement sequence, so that a new blocked call request can be accommodated. The proposed neural network controller uses Paull's rearrangement algorithm, along with a special (least-used) switch selection rule, in order to minimize the length of rearrangement sequences. The functional behavior of our model is verified by simulations, and it is shown that the convergence time required for finding an optimal solution is constant, regardless of the switching network size. The performance is evaluated for random traffic with various traffic loads. Simulation results show that applying the least-used switch selection rule increases the efficiency of switch rearrangements, reducing the network convergence time. Implementation aspects are also discussed to show the feasibility of the proposed approach.
The Brave New World of Learning.
ERIC Educational Resources Information Center
Adkins, Sam S.
2003-01-01
Explores how old and new training technologies are converging. Discusses the concept of integrated applications and provides a taxonomy of convergent enterprise applications for use with real-time workflow. (JOW)
NASA Technical Reports Server (NTRS)
Ito, Kazufumi
1987-01-01
The linear quadratic optimal control problem on an infinite time interval for linear time-invariant systems defined on Hilbert spaces is considered. The optimal control is given in feedback form in terms of the solution Π of the associated algebraic Riccati equation (ARE). A Ritz-type approximation is used to obtain a sequence Π^N of finite-dimensional approximations of the solution to the ARE. A sufficient condition under which Π^N converges strongly to Π is obtained. Under this condition, a formula is derived which can be used to obtain the rate of convergence of Π^N to Π. The results are demonstrated for the Galerkin approximation applied to parabolic systems and for the averaging approximation applied to hereditary differential systems.
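The ARE feedback structure is easiest to see in the scalar case, where the Riccati equation 2*a*p - (b^2/r)*p^2 + q = 0 has a closed-form positive root (a textbook finite-dimensional illustration, not the operator-valued Hilbert-space setting of the paper):

```python
import math

def scalar_lqr(a, b, q, r):
    """Positive root of the scalar ARE 2*a*p - (b**2/r)*p**2 + q = 0 and the
    optimal state-feedback gain k, giving the control u = -k*x."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    k = b * p / r
    return p, k

# Unstable scalar plant x' = a*x + b*u with quadratic cost q*x**2 + r*u**2
a, b, q, r = 1.0, 1.0, 1.0, 1.0
p, k = scalar_lqr(a, b, q, r)
closed_loop = a - b * k   # = -sqrt(a**2 + q*b**2/r) < 0, so always stable
```

The Ritz/Galerkin program in the abstract replaces Π by a matrix Π^N solving the same equation on a finite-dimensional subspace, and studies how fast Π^N approaches Π.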
Near Point of Accommodation and Convergence after Photorefractive Keratectomy (PRK) for Myopia.
Hashemi, Hassancourtney; Samet, Behnaz; Mirzajani, Ali; Khabazkhoob, Mehdi; Rezvan, Bijan; Jafarzadehpur, Ebrahim
2013-01-01
Near point of convergence (NPC) and near point of accommodation (NPA) were evaluated before and after photorefractive keratectomy (PRK) in normal myopic eyes. In this prospective cross-sectional study, NPC and NPA were measured in 120 myopic eyes (60 patients) before and 3 months after PRK. Exclusion criteria were manifest tropia, previous eye surgery, amblyopia, and any other ocular pathology. All subjects were younger than 35 years old. Fifty-one females (85%) and nine males (15%) participated in the study. The average age of the participants was 25.75 years. Before the operation, the average NPC and NPA were 4.35 cm and 6.9 cm (14.5 D), respectively. Three months after PRK, NPC had increased significantly to 5.63 cm (p = 0.025) and NPA to 7.983 cm (12.5 D) (p < 0.001). NPC and NPA may increase significantly after PRK. Convergence and accommodation problems may affect near visual performance. Therefore, for any PRK candidate, accommodation and convergence should be evaluated.
Modelling and finite-time stability analysis of psoriasis pathogenesis
NASA Astrophysics Data System (ADS)
Oza, Harshal B.; Pandey, Rakesh; Roper, Daniel; Al-Nuaimi, Yusur; Spurgeon, Sarah K.; Goodfellow, Marc
2017-08-01
A new systems model of psoriasis is presented and analysed from the perspective of control theory. Cytokines are treated as actuators to the plant model that govern the cell population under the reasonable assumption that cytokine dynamics are faster than the cell population dynamics. The analysis of various equilibria is undertaken based on singular perturbation theory. Finite-time stability and stabilisation have been studied in various engineering applications where the principal paradigm uses non-Lipschitz functions of the states. A comprehensive study of the finite-time stability properties of the proposed psoriasis dynamics is carried out. It is demonstrated that the dynamics are finite-time convergent to certain equilibrium points rather than asymptotically or exponentially convergent. This feature of finite-time convergence motivates the development of a modified version of the Michaelis-Menten function, frequently used in biology. This framework is used to model cytokines as fast finite-time actuators.
DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro
2016-10-01
This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits a linear convergence rate to the optimal objective, but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate, with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate the advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
Effects of heterogeneous convergence rate on consensus in opinion dynamics
NASA Astrophysics Data System (ADS)
Huang, Changwei; Dai, Qionglin; Han, Wenchen; Feng, Yuee; Cheng, Hongyan; Li, Haihong
2018-06-01
The Deffuant model has attracted much attention in the study of opinion dynamics. Here, we propose a modified version by introducing into the model a heterogeneous convergence rate which depends on the opinion difference between interacting agents and a tunable parameter κ. We study the effects of the heterogeneous convergence rate on consensus by investigating the probability of complete consensus, the size of the largest opinion cluster, the number of opinion clusters, and the relaxation time. We find that decreasing the convergence rate is favorable for lowering the confidence threshold above which the population always reaches complete consensus, and that there exists an optimal κ resulting in the minimal bounded confidence threshold. Moreover, we find that there exists a window below the confidence threshold in which complete consensus may be reached with a nonzero probability when κ is not too large. We also find that, within a certain confidence range, decreasing the convergence rate reduces the relaxation time, which is somewhat counterintuitive.
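The modified Deffuant update can be sketched as follows. This is a sketch under the assumption that the heterogeneous rate takes the form mu = 0.5*(1 - d/eps)**kappa; the paper's exact functional form may differ:

```python
import random

def deffuant_step(opinions, eps, kappa):
    """One pairwise Deffuant update with a heterogeneous convergence rate:
    mu = 0.5*(1 - d/eps)**kappa (an assumed form), so pairs with a larger
    opinion difference d converge more slowly."""
    i, j = random.sample(range(len(opinions)), 2)
    d = abs(opinions[i] - opinions[j])
    if d < eps:                      # interact only within the confidence bound
        mu = 0.5 * (1.0 - d / eps) ** kappa
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift         # symmetric moves conserve the mean opinion
        opinions[j] -= shift

random.seed(0)
ops = [random.random() for _ in range(100)]
m0 = sum(ops) / len(ops)
for _ in range(50000):
    deffuant_step(ops, eps=0.5, kappa=1.0)
```

Setting kappa = 0 recovers the classic constant-rate Deffuant model; increasing kappa slows the approach of marginally compatible pairs, which is the mechanism the abstract studies.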
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2017-05-09
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2014-10-28
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Adaptive fixed-time trajectory tracking control of a stratospheric airship.
Zheng, Zewei; Feroskhan, Mir; Sun, Liang
2018-05-01
This paper addresses the fixed-time trajectory tracking control problem of a stratospheric airship. By extending the method of adding a power integrator to a novel adaptive fixed-time control method, the convergence of a stratospheric airship to its reference trajectory is guaranteed to be achieved within a fixed time. The control algorithm is firstly formulated without the consideration of external disturbances to establish the stability of the closed-loop system in fixed-time and demonstrate that the convergence time of the airship is essentially independent of its initial conditions. Subsequently, a smooth adaptive law is incorporated into the proposed fixed-time control framework to provide the system with robustness to external disturbances. Theoretical analyses demonstrate that under the adaptive fixed-time controller, the tracking errors will converge towards a residual set in fixed-time. The results of a comparative simulation study with other recent methods illustrate the remarkable performance and superiority of the proposed control method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Error field optimization in DIII-D using extremum seeking control
NASA Astrophysics Data System (ADS)
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; Humphreys, D. A.; Eidietis, N.; Hanson, J. M.; Paz-Soldan, C.; Strait, E. J.; Walker, M. L.
2016-07-01
DIII-D experiments have demonstrated a new real-time approach to tokamak error field control based on maximizing the toroidal angular momentum. This approach uses extremum seeking control theory to optimize the error field in real time without inducing instabilities. Slowly rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well suited for multi-coil, multi-harmonic error field optimizations in disruption-sensitive devices, as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of convergence, and identify future algorithm upgrades that may allow more rapid convergence, projecting to convergence times in ITER on the order of tens of seconds.
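The dither-demodulate-integrate loop of extremum seeking can be sketched on a toy scalar map (an illustration of the general principle with made-up parameters, not the DIII-D implementation; the quadratic "performance vs. coil current" map is hypothetical):

```python
import math

def extremum_seek(f, theta0, amp=0.1, omega=1.0, gain=0.02, dt=0.05, steps=20000):
    """Sinusoidal-dither extremum seeking: perturb the parameter, wash out the
    slowly varying part of the measurement, demodulate with the dither to
    estimate the local gradient, and integrate slowly uphill."""
    theta = theta0
    y_avg = f(theta0)                  # slow washout filter state (removes DC)
    for n in range(steps):
        t = n * dt
        y = f(theta + amp * math.sin(omega * t))
        y_avg += dt * 0.2 * (y - y_avg)
        grad_est = (y - y_avg) * math.sin(omega * t) * (2.0 / amp)
        theta += dt * gain * grad_est  # slow gradient ascent on f
    return theta

# Toy performance map with its maximum at theta = 2
f = lambda th: 5.0 - (th - 2.0) ** 2
theta_star = extremum_seek(f, theta0=0.0)
```

The timescale separation (fast dither, slower washout, slowest adaptation) is what lets the loop climb toward the optimum without ever measuring the gradient directly.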
Mofid, Omid; Mobayen, Saleh
2018-01-01
Adaptive control methods are developed for the stability and tracking control of flight systems in the presence of parametric uncertainties. This paper offers a design technique of adaptive sliding mode control (ASMC) for finite-time stabilization of unmanned aerial vehicle (UAV) systems with parametric uncertainties. Applying the Lyapunov stability concept and the idea of finite-time convergence, the recommended control method guarantees that the states of the quad-rotor UAV converge to the origin at a finite-time convergence rate. Furthermore, an adaptive-tuning scheme is proposed to estimate the unknown parameters of the quad-rotor UAV at any moment. Finally, simulation results are presented to exhibit the effectiveness of the proposed technique compared to previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Iterative blip-summed path integral for quantum dynamics in strongly dissipative environments
NASA Astrophysics Data System (ADS)
Makri, Nancy
2017-04-01
The iterative decomposition of the blip-summed path integral [N. Makri, J. Chem. Phys. 141, 134117 (2014)] is described. The starting point is the expression of the reduced density matrix for a quantum system interacting with a harmonic dissipative bath in the form of a forward-backward path sum, where the effects of the bath enter through the Feynman-Vernon influence functional. The path sum is evaluated iteratively in time by propagating an array that stores blip configurations within the memory interval. Convergence with respect to the number of blips and the memory length yields numerically exact results which are free of statistical error. In situations of strongly dissipative, sluggish baths, the algorithm leads to a dramatic reduction of computational effort in comparison with iterative path integral methods that do not implement the blip decomposition. This gain in efficiency arises from (i) the rapid convergence of the blip series and (ii) circumventing the explicit enumeration of between-blip path segments, whose number grows exponentially with the memory length. Application to an asymmetric dissipative two-level system illustrates the rapid convergence of the algorithm even when the bath memory is extremely long.
On the POD based reduced order modeling of high Reynolds flows
NASA Astrophysics Data System (ADS)
Behzad, Fariduddin; Helenbrook, Brian; Ahmadi, Goodarz
2012-11-01
Reduced-order modeling (ROM) of a high Reynolds number fluid flow using the proper orthogonal decomposition (POD) was studied. Particular attention was given to incompressible, unsteady flow over a two-dimensional NACA0015 airfoil. The Reynolds number is 10^5 and the angle of attack of the airfoil is 12°. For the DNS solution, an hp-finite element method is employed to derive the flow samples from which the POD modes are extracted. Particular attention is paid to two issues. First, the stability of the POD-ROM re-simulation of the turbulent flow is studied. High Reynolds number flows contain many fluctuating modes, so more POD modes are needed to reach a given error level, and the effect of truncating the POD modes is more important. Second, the role of the convergence rate of the flow solution in the POD results is examined. Due to the complexity of the flow, convergence of the governing equations is more difficult, and the influence of weak convergence appears in the POD-ROM results. For each issue, the capability of the POD-ROM is assessed in terms of prediction quality at the times from which the POD model was derived. The results are compared with the DNS solution, and the accuracy and efficiency of the different cases are evaluated.
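The POD extraction step can be sketched with a thin SVD of a snapshot matrix. The snapshots below are synthetic stand-ins for the DNS samples, and the 99% energy threshold is an arbitrary choice illustrating the truncation issue the abstract raises:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "flow snapshots": two coherent structures plus weak noise,
# standing in for the DNS samples of the airfoil flow.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 10.0, 80)
snapshots = np.array([np.sin(2 * np.pi * x) * np.cos(ti)
                      + 0.3 * np.sin(4 * np.pi * x) * np.sin(2 * ti)
                      + 0.01 * rng.standard_normal(x.size)
                      for ti in t]).T          # shape (n_space, n_time)

# POD by thin SVD of the snapshot matrix: columns of U are the modes,
# singular values rank them by captured energy.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)

# Truncate at the smallest r capturing 99% of the energy; the abstract's
# point is that high-Re flows need a larger r for the same error.
r = int(np.searchsorted(energy, 0.99) + 1)
```

The retained columns `U[:, :r]` are the POD basis onto which the governing equations would be Galerkin-projected to build the ROM.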
NASA Astrophysics Data System (ADS)
Thébault, Cédric; Doyen, Didier; Routhier, Pierre; Borel, Thierry
2013-03-01
To ensure an immersive yet comfortable experience, significant work is required during post-production to adapt stereoscopic 3D (S3D) content to the targeted display and its environment. On the one hand, the content needs to be reconverged using horizontal image translation (HIT) so as to harmonize the depth across shots. On the other hand, to prevent edge violation, specific re-convergence is required and, depending on the viewing conditions, floating windows need to be positioned. In order to simplify this time-consuming work, we propose a depth grading tool that automatically adapts S3D content to digital cinema or home viewing environments. Based on a disparity map, a stereo point of interest in each shot is automatically evaluated. This point of interest is used for depth matching, i.e. to position the objects of interest of consecutive shots in the same plane so as to reduce visual fatigue. The tool adapts the re-convergence to avoid edge violation, hyper-convergence and hyper-divergence. Floating windows are also automatically positioned. The method has been tested on various types of S3D content, and the results have been validated by a stereographer.
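The depth-matching step reduces to a horizontal image translation. A minimal sketch, assuming disparity is measured as x_right − x_left in pixels (positive = behind the screen) and that the correction is split symmetrically between the two views:

```python
def depth_match_hit(poi_disparity, target_disparity):
    """Horizontal image translation (HIT) for depth matching: shift the
    left and right views in opposite directions so the shot's point of
    interest lands at the target disparity (depth plane). Returns the
    (left, right) horizontal shifts in pixels."""
    correction = target_disparity - poi_disparity
    # Shifting left by -c/2 and right by +c/2 changes disparity by c.
    return -correction / 2.0, correction / 2.0

# A point of interest at +30 px is moved to the screen plane (0 px):
left_px, right_px = depth_match_hit(30.0, 0.0)
```

In the tool described above this shift would be recomputed per shot from the disparity map, with the additional clamps against edge violation, hyper-convergence and hyper-divergence.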
Bohm, Parker E; Fehlings, Michael G; Kopjar, Branko; Tetreault, Lindsay A; Vaccaro, Alexander R; Anderson, Karen K; Arnold, Paul M
2017-02-01
The timed 30-m walking test (30MWT) is used in clinical practice and in research to objectively quantify gait impairment. The psychometric properties of 30MWT have not yet been rigorously evaluated. This study aimed to determine test-retest reliability, divergent and convergent validity, and responsiveness to change of the 30MWT in patients with degenerative cervical myelopathy (DCM). A retrospective observational study was carried out. The sample consisted of patients with symptomatic DCM enrolled in the AOSpine North America or AOSpine International cervical spondylotic myelopathy studies at 26 sites. Modified Japanese Orthopaedic Association scale (mJOA), Nurick scale, 30MWT, Neck Disability Index (NDI), and Short-Form-36 (SF-36v2) physical component score (PCS) and mental component score (MCS) were the outcome measures. Data from two prospective multicenter cohort myelopathy studies were merged. Each patient was evaluated at baseline and 6 months postoperatively. Of 757 total patients, 682 (90.09%) attempted to perform the 30MWT at baseline. Of these 682 patients, 602 (88.12%) performed the 30MWT at baseline. One patient was excluded, leaving 601 in the analysis. At baseline, 81 of 682 (11.88%) patients were unable to perform the test, and their mJOA, NDI, and SF-36v2 PCS scores were lower compared with those who performed the test at baseline. In patients who performed the 30MWT at baseline, there was very high correlation among the three baseline 30MWT measurements (r=0.9569-0.9919). The 30MWT demonstrated good convergent and divergent validity. It was moderately correlated with the Nurick (r=0.4932), mJOA (r=-0.4424), and SF-36v2 PCS (r=-0.3537) (convergent validity) and poorly correlated with the NDI (r=0.2107) and SF-36v2 MCS (r=-0.1984) (divergent validity). Overall, the 30MWT was not responsive to change (standardized response mean [SRM]=0.30). However, for patients who had a baseline time above the median value of 29 seconds, the SRM was 0.45.
The 30MWT shows high test-retest reliability and good divergent and convergent validity. It is responsive to change only in patients with more severe myelopathy. The 30MWT is a simple, quick, and affordable test, and should be used as an ancillary test to evaluate gait parameters in patients with DCM. Copyright © 2016 Elsevier Inc. All rights reserved.
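The responsiveness statistic quoted above, the standardized response mean, is simple to compute. A sketch with hypothetical walking times (the study's patient data are not reproduced here):

```python
import math

def standardized_response_mean(baseline, followup):
    """SRM = mean(change) / SD(change): the responsiveness index the
    abstract reports for the 30MWT (by the usual convention, ~0.2 is
    'small', ~0.5 'moderate', ~0.8 'large')."""
    change = [b - f for b, f in zip(baseline, followup)]  # improvement = faster time
    n = len(change)
    mean = sum(change) / n
    sd = math.sqrt(sum((c - mean) ** 2 for c in change) / (n - 1))
    return mean / sd

# Hypothetical walking times (seconds) before and 6 months after surgery
pre = [30.0, 42.0, 28.0, 55.0, 33.0, 47.0]
post = [27.0, 36.0, 29.0, 44.0, 31.0, 40.0]
srm = standardized_response_mean(pre, post)
```

Because the denominator is the variability of the change scores, a treatment effect that is large but inconsistent across patients still yields a modest SRM, which is one reason subgroup SRMs (as for the slower-than-median walkers above) can differ from the overall value.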
The Nazca-South American convergence rate and the recurrence of the great 1960 Chilean earthquake
NASA Technical Reports Server (NTRS)
Stein, S.; Engeln, J. F.; Demets, C.; Gordon, R. G.; Woods, D.
1986-01-01
The seismic slip rate along the Chile Trench estimated from the slip in the great 1960 earthquake and the recurrence history of major earthquakes has been interpreted as consistent with the subduction rate of the Nazca plate beneath South America. The convergence rate, estimated from global relative plate motion models, depends significantly on closure of the Nazca - Antarctica - South America circuit. NUVEL-1, a new plate motion model which incorporates recently determined spreading rates on the Chile Rise, shows that the average convergence rate over the last three million years is slower than previously estimated. If this time-averaged convergence rate provides an appropriate upper bound for the seismic slip rate, either the characteristic Chilean subduction earthquake is smaller than the 1960 event, the average recurrence interval is greater than observed in the last 400 years, or both. These observations bear out the nonuniformity of plate motions on various time scales, the variability in characteristic subduction zone earthquake size, and the limitations of recurrence time estimates.
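The underlying arithmetic is a slip-budget argument: if each characteristic earthquake releases slip d and the plate convergence rate is v, steady seismic slip requires a recurrence interval of at least d / v. A back-of-envelope version with illustrative numbers (these are assumptions for the sketch, not the NUVEL-1 values):

```python
# Slip-budget estimate of the recurrence interval of a characteristic
# subduction earthquake. Both inputs are illustrative assumptions.
slip_1960_m = 20.0          # order of the 1960 event's coseismic slip
rate_mm_per_yr = 80.0       # assumed time-averaged convergence rate

recurrence_yr = slip_1960_m * 1000.0 / rate_mm_per_yr   # metres -> millimetres
```

If the time-averaged convergence rate is revised downward, the same coseismic slip implies a proportionally longer minimum recurrence interval, which is the abstract's point about the 400-year historical record.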
An overlapped grid method for multigrid, finite volume/difference flow solvers: MaGGiE
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Lessard, Victor R.
1990-01-01
The objective is to develop a domain decomposition method via overlapping/embedding the component grids, which is to be used by upwind, multigrid, finite volume solution algorithms. A computer code, given the name MaGGiE (Multi-Geometry Grid Embedder), is developed to meet this objective. MaGGiE takes independently generated component grids as input, and automatically constructs the composite mesh and interpolation data, which can be used by the finite volume solution methods with or without multigrid convergence acceleration. Six demonstrative examples showing various aspects of the overlap technique are presented and discussed. These cases are used for developing the procedure for overlapping grids of different topologies, and to evaluate the grid connection and interpolation data for finite volume calculations on a composite mesh. Time fluxes are transferred between mesh interfaces using a trilinear interpolation procedure. Conservation losses are minimal at the interfaces using this method. The multigrid solution algorithm, using the coarser grid connections, improves the convergence time history as compared to the solution on the composite mesh without multigridding.
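The interface transfer mentioned above is ordinary trilinear interpolation inside a donor cell. A minimal sketch (unit-cell local coordinates; the actual code also handles the donor-cell search and metric terms):

```python
def trilinear(c, x, y, z):
    """Trilinear interpolation inside a unit cell, the donor-to-receiver
    transfer used at overlapped-grid interfaces. c[i][j][k] holds the
    eight corner values; (x, y, z) are local coordinates in [0, 1]."""
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z

# A linear field f = 1 + 2x + 3y + 4z is reproduced exactly, which is why
# conservation losses at the interfaces stay small for smooth fluxes.
corners = [[[1 + 2 * i + 3 * j + 4 * k for k in (0, 1)]
            for j in (0, 1)] for i in (0, 1)]
val = trilinear(corners, 0.5, 0.25, 0.75)
```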
Improved Real-Time Scan Matching Using Corner Features
NASA Astrophysics Data System (ADS)
Mohamed, H. A.; Moussa, A. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, Abu B.
2016-06-01
The automation of unmanned vehicle operation has gained considerable research attention in the last few years because of its numerous applications. Vehicle localization is more challenging in indoor environments, where absolute positioning measurements (e.g. GPS) are typically unavailable. Laser range finders are among the most widely used sensors that help unmanned vehicles localize themselves in indoor environments. Typically, automatic real-time matching of successive scans is performed either explicitly or implicitly by any localization approach that utilizes laser range finders. Many established approaches, such as Iterative Closest Point (ICP), Iterative Matching Range Point (IMRP), Iterative Dual Correspondence (IDC), and Polar Scan Matching (PSM), handle the scan matching problem in an iterative fashion, which significantly increases the time consumption. Furthermore, solution convergence is not guaranteed, especially in cases of sharp maneuvers or fast movement. This paper proposes an automated real-time scan matching algorithm in which the matching process is initialized using detected corners. This initialization step aims to increase the convergence probability and to limit the number of iterations needed to reach convergence. The corner detection is preceded by line extraction from the laser scans. To evaluate the probability of line availability in indoor environments, various data sets offered by different research groups have been tested; the mean numbers of extracted lines per scan for these data sets range from 4.10 to 8.86 lines of more than 7 points. The set of all intersections between extracted lines is detected as corners, regardless of the physical intersection of these line segments in the scan. To account for the uncertainties of the detected corners, the covariance of the corners is estimated using the variances of the extracted lines.
The detected corners are used to estimate the transformation parameters between successive scans using least squares. These estimated transformation parameters are used to calculate an adjusted initialization for the scan matching process. The presented method can be employed on its own to match successive scans, and can also be used to aid other established iterative methods to achieve more effective and faster convergence. The performance and time consumption of the proposed approach are compared with the ICP algorithm alone, without initialization, in different scenarios such as a static period, fast straight movement, and sharp maneuvers.
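The least-squares estimation of a rigid transform from matched corners has a closed form in 2-D (the Kabsch/Procrustes solution). This is a plausible sketch of that step, not necessarily the authors' exact formulation, and it ignores the per-corner covariance weighting they describe:

```python
import math

def rigid_transform_2d(src, dst):
    """Least-squares rotation + translation mapping corners detected in
    one scan onto the next; used here as the initialization an iterative
    matcher (e.g. ICP) would refine."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centred point sets
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= sx; ay -= sy; bx -= dx; by -= dy
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)
    tx = dx - (sx * math.cos(theta) - sy * math.sin(theta))
    ty = dy - (sx * math.sin(theta) + sy * math.cos(theta))
    return theta, tx, ty

# Corners rotated by 30 degrees and shifted by (1, 2) are recovered exactly
src = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
c, s = math.cos(math.pi / 6), math.sin(math.pi / 6)
dst = [(c * x - s * y + 1.0, s * x + c * y + 2.0) for x, y in src]
theta, tx, ty = rigid_transform_2d(src, dst)
```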
Roe, Daniel R; Bergonzo, Christina; Cheatham, Thomas E
2014-04-03
Many problems studied via molecular dynamics require accurate estimates of various thermodynamic properties, such as the free energies of different states of a system, which in turn requires well-converged sampling of the ensemble of possible structures. Enhanced sampling techniques are often applied to provide faster convergence than is possible with traditional molecular dynamics simulations. Hamiltonian replica exchange molecular dynamics (H-REMD) is a particularly attractive method, as it allows the incorporation of a variety of enhanced sampling techniques through modifications to the various Hamiltonians. In this work, we study the enhanced sampling of the RNA tetranucleotide r(GACC) provided by H-REMD combined with accelerated molecular dynamics (aMD), where a boosting potential is applied to torsions, and compare this to the enhanced sampling provided by H-REMD in which torsion potential barrier heights are scaled down to lower force constants. We show that H-REMD and multidimensional REMD (M-REMD) combined with aMD does indeed enhance sampling for r(GACC), and that the addition of the temperature dimension in the M-REMD simulations is necessary to efficiently sample rare conformations. Interestingly, we find that the rate of convergence can be improved in a single H-REMD dimension by simply increasing the number of replicas from 8 to 24 without increasing the maximum level of bias. The results also indicate that factors beyond replica spacing, such as round trip times and time spent at each replica, must be considered in order to achieve optimal sampling efficiency.
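The exchange step shared by the REMD variants discussed above is a Metropolis test between neighbouring replicas. A minimal sketch for the temperature dimension (H-REMD swaps use the same structure with Hamiltonian-dependent energies):

```python
import math
import random

def exchange_accepted(beta_i, beta_j, energy_i, energy_j, rng):
    """Metropolis criterion for swapping configurations between two
    replicas: accept with probability min(1, exp(delta)), where
    delta = (beta_i - beta_j) * (energy_i - energy_j)."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0 or rng.random() < math.exp(delta)

rng = random.Random(42)
# Handing the lower-energy configuration to the colder replica (larger
# beta) is always accepted; the reverse direction is probabilistic.
always = exchange_accepted(1.0, 0.5, -2.0, -10.0, rng)
```

The spacing of the betas (or bias levels) controls the acceptance rate, which connects to the paper's observation that replica count, round-trip times, and time spent per replica all affect sampling efficiency.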
Convergence tests on tax burden and economic growth among China, Taiwan and the OECD countries
NASA Astrophysics Data System (ADS)
Wang, David Han-Min
2007-07-01
The unfolding globalization has a profound impact on a wide range of national policies, including tax and economic policies. This study adopts time series and cluster analyses to examine the convergence property of tax burden and per capita gross domestic product among Taiwan, China and the OECD countries. The empirical results show that there is no significant relationship between the integration process and fiscal convergence among countries. However, the cluster analyses identify that the group of China, Taiwan, and Korea was stably moving toward one model during the 1970s, 1980s and 1990s. Convergence of tax burden is found within this group, but no pairwise convergence exists.
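A common first check in such convergence studies is sigma-convergence: whether cross-sectional dispersion shrinks over time. A sketch with hypothetical figures (not the study's data):

```python
import math

def coeff_variation(values):
    """Cross-sectional coefficient of variation; a falling CV over time
    is the usual evidence of sigma-convergence in tax burdens or incomes."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return sd / mean

# Hypothetical tax-burden shares (% of GDP) for one country group
burdens_1970 = [12.0, 18.0, 30.0, 24.0]
burdens_2000 = [20.0, 22.0, 27.0, 25.0]
converging = coeff_variation(burdens_2000) < coeff_variation(burdens_1970)
```

Dividing by the mean matters: a falling standard deviation alone can simply reflect a falling mean, a distinction the time-series tests in the study are designed to handle more formally.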
Exponential stability preservation in semi-discretisations of BAM networks with nonlinear impulses
NASA Astrophysics Data System (ADS)
Mohamad, Sannay; Gopalsamy, K.
2009-01-01
This paper demonstrates the reliability of a discrete-time analogue in preserving the exponential convergence of a bidirectional associative memory (BAM) network that is subject to nonlinear impulses. The analogue, derived from a semi-discretisation technique with the value of the time-step fixed, is treated as a discrete-time dynamical system while its exponential convergence towards an equilibrium state is studied. Thereby, a family of sufficiency conditions governing the network parameters and the impulse magnitude and frequency is obtained for the convergence. As special cases, one can obtain from our results those corresponding to non-impulsive discrete-time BAM networks and those corresponding to continuous-time (impulsive and non-impulsive) systems. A relation between the Lyapunov exponent of the non-impulsive system and that of the impulsive system, involving the size of the impulses and the inter-impulse intervals, is obtained.
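The semi-discretisation idea can be shown on a two-neuron, non-impulsive toy network (the impulses and the paper's general conditions are omitted; all parameter values are arbitrary and chosen to satisfy a simple contraction condition |W| < a):

```python
import math

def bam_step(x, y, a, b, W, V, I, J, h):
    """One step of a semi-discretised BAM analogue: the linear decay in
    x' = -a*x + W*f(y) + I is integrated exactly over the step h while
    the interaction term is held constant, giving the update below."""
    f = math.tanh
    ex, ey = math.exp(-a * h), math.exp(-b * h)
    x_new = ex * x + (1 - ex) / a * (W * f(y) + I)
    y_new = ey * y + (1 - ey) / b * (V * f(x) + J)
    return x_new, y_new

# A contractive two-neuron network converges exponentially to the same
# equilibrium as the continuous-time system, for any fixed step h.
x, y = 3.0, -2.0
for _ in range(200):
    x, y = bam_step(x, y, a=2.0, b=2.0, W=0.5, V=-0.5, I=1.0, J=0.5, h=0.1)
```

Because the decay factor exp(-a*h) is exact rather than a truncated expansion, the analogue inherits the continuous system's equilibrium (x* = (W f(y*) + I)/a and its mirror) exactly, which is the sense in which the discretisation "preserves" convergence.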
The Early Development Instrument: An Examination of Convergent and Discriminant Validity
ERIC Educational Resources Information Center
Hymel, Shelley; LeMare, Lucy; McKee, William
2011-01-01
The convergent and discriminant validity of the Early Development Instrument (EDI), a teacher-rated assessment of children's "school readiness", was investigated in a multicultural sample of 267 kindergarteners (53% male). Teachers' evaluations on the EDI, both overall and in five domains (physical health/well-being, social competence,…
Evaluating the Convergence of Muscle Appearance Attitude Measures
ERIC Educational Resources Information Center
Cafri, Guy; Thompson, J. Kevin
2004-01-01
There has been growing interest in the assessment of a muscular appearance. Given the importance of assessing muscle appearance attitudes, the aim of this study was to explore the convergence of the Drive for Muscularity Scale, Somatomorphic Matrix, Contour Drawing Rating Scale, Male Figure Drawings, and the Muscularity Rating Scale. Participants…
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
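The offline/online split can be sketched in one dimension. Plain (unweighted) least squares on Legendre polynomials stands in for the Christoffel-weighted scheme of the paper, and the smooth test function stands in for an expensive PDE forward model:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):
    """Stand-in for an expensive PDE forward model."""
    return np.sin(theta) + 0.5 * theta ** 2

# Offline phase: sample the prior (uniform on [-1, 1]) and fit a degree-6
# Legendre expansion by ordinary least squares.
theta_train = rng.uniform(-1.0, 1.0, 200)
coeffs = np.polynomial.legendre.legfit(theta_train, forward(theta_train), 6)

def surrogate(theta):
    return np.polynomial.legendre.legval(theta, coeffs)

# Online phase: an MCMC sampler would now call `surrogate` instead of
# `forward` at every posterior evaluation.
theta_test = np.linspace(-1.0, 1.0, 101)
err = float(np.max(np.abs(surrogate(theta_test) - forward(theta_test))))
```

Per the paper's bound, driving this forward-model error down at some rate in the prior-weighted L2 norm drives the KL error of the resulting posterior down at least twice as fast.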
On the Optimization of Aerospace Plane Ascent Trajectory
NASA Astrophysics Data System (ADS)
Al-Garni, Ahmed; Kassem, Ayman Hamdy
A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested for trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectory (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis of the hybrid technique is performed to compare it with both basic genetic algorithms and particle swarm optimization with respect to convergence and execution time. Genetic algorithm optimization showed better execution time performance, while particle swarm optimization showed better convergence performance. The hybrid optimization technique, benefiting from both, showed robust performance that balances convergence trends and execution time.
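One common way to hybridize the two heuristics is to run standard PSO updates and periodically replace the worst particle with a GA-style mutated crossover of good solutions. This toy sketch illustrates that idea on a simple test function; it is not the paper's algorithm, and all coefficients are arbitrary:

```python
import random

def hybrid_gapso(f, dim, n=30, iters=200, seed=0):
    """Toy PSO/GA hybrid: inertia-weighted PSO velocity updates, plus a
    GA step that overwrites the worst particle with a mutated crossover
    of two personal bests each iteration."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    gbest = min(range(n), key=lambda i: pval[i])
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (pbest[gbest][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
        gbest = min(range(n), key=lambda i: pval[i])
        # GA step: worst particle <- mutated crossover of two personal bests
        worst = max(range(n), key=lambda i: pval[i])
        a, b = rng.sample(range(n), 2)
        pos[worst] = [(pbest[a][d] if rng.random() < 0.5 else pbest[b][d])
                      + rng.gauss(0, 0.1) for d in range(dim)]
    return pbest[gbest], pval[gbest]

# Minimize a 2-D sphere function; the optimum is the origin.
best_x, best_f = hybrid_gapso(lambda x: sum(xi * xi for xi in x), dim=2)
```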
Three-dimensional unstructured grid Euler computations using a fully-implicit, upwind method
NASA Technical Reports Server (NTRS)
Whitaker, David L.
1993-01-01
A method has been developed to solve the Euler equations on a three-dimensional unstructured grid composed of tetrahedra. The method uses an upwind flow solver with a linearized, backward-Euler time integration scheme. Each time step results in a sparse linear system of equations which is solved by an iterative, sparse matrix solver. Local-time stepping, switched evolution relaxation (SER), preconditioning and reuse of the Jacobian are employed to accelerate the convergence rate. Implicit boundary conditions were found to be extremely important for fast convergence. Numerical experiments have shown that convergence rates comparable to that of a multigrid, central-difference scheme are achievable on the same mesh. Results are presented for several grids about an ONERA M6 wing.
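The switched evolution relaxation (SER) strategy mentioned above is commonly implemented as a CFL number that grows in inverse proportion to the nonlinear residual. A minimal sketch of that rule (the cap and constants are arbitrary):

```python
def ser_cfl(cfl0, res0, res, cfl_max=1e6):
    """Switched evolution relaxation: scale the implicit scheme's CFL
    number by the residual drop, blending a robust time-accurate start
    into a Newton-like terminal phase as the residual falls."""
    return min(cfl_max, cfl0 * res0 / res)

# As the residual falls three orders of magnitude, the pseudo-time step
# grows by the same factor, accelerating convergence to steady state.
cfl = ser_cfl(5.0, 1.0, 1e-3)
```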
Initial Results of an MDO Method Evaluation Study
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Kodiyalam, Srinivas
1998-01-01
The NASA Langley MDO method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. In the first phase of the study, three MDO methods were implemented in the SIGHT framework and used to solve a set of ten relatively simple problems. In this paper, we comment on the general considerations for conducting method evaluation studies and report some initial results obtained to date. In particular, although the results are not conclusive because of the small initial test set, they illustrate how MDO formulations can be analyzed in terms of equivalence to other formulations, optimality conditions, and sensitivity of solutions to various perturbations. Optimization algorithms are used to solve a particular MDO formulation. It is then appropriate to speak of local convergence rates and of global convergence properties of an optimization algorithm applied to a specific formulation. An analogous distinction exists in the field of partial differential equations. On the one hand, equations are analyzed in terms of regularity, well-posedness, and the existence and uniqueness of solutions. On the other, one considers numerous algorithms for solving differential equations. The area of MDO methods studies MDO formulations combined with optimization algorithms, although at times the distinction is blurred. It is important to
Pan, Albert C; Weinreich, Thomas M; Piana, Stefano; Shaw, David E
2016-03-08
Molecular dynamics (MD) simulations can describe protein motions in atomic detail, but transitions between protein conformational states sometimes take place on time scales that are infeasible or very expensive to reach by direct simulation. Enhanced sampling methods, the aim of which is to increase the sampling efficiency of MD simulations, have thus been extensively employed. The effectiveness of such methods when applied to complex biological systems like proteins, however, has been difficult to establish because even enhanced sampling simulations of such systems do not typically reach time scales at which convergence is extensive enough to reliably quantify sampling efficiency. Here, we obtain sufficiently converged simulations of three proteins to evaluate the performance of simulated tempering, a member of a widely used class of enhanced sampling methods that use elevated temperature to accelerate sampling. Simulated tempering simulations with individual lengths of up to 100 μs were compared to (previously published) conventional MD simulations with individual lengths of up to 1 ms. With two proteins, BPTI and ubiquitin, we evaluated the efficiency of sampling of conformational states near the native state, and for the third, the villin headpiece, we examined the rate of folding and unfolding. Our comparisons demonstrate that simulated tempering can consistently achieve a substantial sampling speedup of an order of magnitude or more relative to conventional MD.
The Principle of Energetic Consistency: Application to the Shallow-Water Equations
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.
2009-01-01
If the complete state of the earth's atmosphere (e.g., pressure, temperature, winds and humidity, everywhere throughout the atmosphere) were known at any particular initial time, then solving the equations that govern the dynamical behavior of the atmosphere would give the complete state at all subsequent times. Part of the difficulty of weather prediction is that the governing equations can only be solved approximately, which is what weather prediction models do. But weather forecasts would still be far from perfect even if the equations could be solved exactly, because the atmospheric state is not and cannot be known completely at any initial forecast time. Rather, the initial state for a weather forecast can only be estimated from incomplete observations taken near the initial time, through a process known as data assimilation. Weather prediction models carry out their computations on a grid of points covering the earth's atmosphere. The formulation of these models is guided by a mathematical convergence theory which guarantees that, given the exact initial state, the model solution approaches the exact solution of the governing equations as the computational grid is made more fine. For the data assimilation process, however, there does not yet exist a convergence theory. This book chapter represents an effort to begin establishing a convergence theory for data assimilation methods. The main result, which is called the principle of energetic consistency, provides a necessary condition that a convergent method must satisfy. Current methods violate this principle, as shown in earlier work of the author, and therefore are not convergent. The principle is illustrated by showing how to apply it as a simple test of convergence for proposed methods.
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence, which is maintained as long as iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate time to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
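The ordered-subsets idea cited above is easy to demonstrate on a toy emission-style EM update (this is the classic OSEM scheme, not the paper's AM algorithm): each sub-iteration uses only a subset of the measurements, trading full EM's monotone convergence guarantee for a roughly subset-count speedup.

```python
import numpy as np

def osem(A, y, n_iter=200, n_subsets=4):
    """Ordered-subsets EM for a nonnegative linear model y = A x:
    multiplicative updates cycling over interleaved row subsets."""
    m, n = A.shape
    x = np.ones(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            ratio = y[rows] / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(rows)), 1e-12)
    return x

# Tiny consistent system: recover x_true from noiseless projections.
rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, size=(16, 4))
x_true = np.array([1.0, 2.0, 0.5, 3.0])
x_rec = osem(A, A @ x_true)
```

On noisy data, aggressive subsetting can cycle rather than converge, which is exactly the tension between acceleration and the "strictly convergent" limits discussed in the abstract.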
Problems Associated with Grid Convergence of Functionals
NASA Technical Reports Server (NTRS)
Salas, Manuel D.; Atkins, Harold L.
2008-01-01
The current use of functionals to evaluate order-of-convergence of a numerical scheme can lead to incorrect values. The problem comes about because of interplay between the errors from the evaluation of the functional, e.g., quadrature error, and from the numerical scheme discretization. Alternative procedures for deducing the order-property of a scheme are presented. The problem is studied within the context of the inviscid supersonic flow over a blunt body; however, the problem and solutions presented are not unique to this example.
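The standard order-of-convergence estimate at issue can be sketched from functional values on three uniformly refined grids; the abstract's warning is that quadrature error in evaluating the functional can contaminate the value this formula reports:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of convergence from a functional evaluated on three
    grids with uniform refinement ratio r:
    p = log(|f_c - f_m| / |f_m - f_f|) / log(r)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# A functional behaving as f(h) = f_exact + C*h^2 reports order 2.
f = lambda h: 1.0 + 3.0 * h ** 2
p = observed_order(f(0.4), f(0.2), f(0.1), r=2.0)
```

If the functional carries an extra error term of a different order (e.g. a low-order quadrature), the two error sources mix and `p` can land between the two orders, or drift with refinement, which motivates the alternative procedures proposed in the paper.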
On Problems Associated with Grid Convergence of Functionals
NASA Technical Reports Server (NTRS)
Salas, Manuel D.; Atkins, Harold L.
2009-01-01
The current use of functionals to evaluate order-of-convergence of a numerical scheme can lead to incorrect values. The problem comes about because of interplay between the errors from the evaluation of the functional, e.g., quadrature error, and from the numerical scheme discretization. Alternative procedures for deducing the order property of a scheme are presented. The problems are studied within the context of the inviscid supersonic flow over a blunt body; however, the problems and solutions presented are not unique to this example.
Effects of tidal current phase at the junction of two straits
Warner, J.; Schoellhamer, D.; Burau, J.; Schladow, G.
2002-01-01
Estuaries typically have a monotonic increase in salinity from freshwater at the head of the estuary to ocean water at the mouth, creating a consistent direction for the longitudinal baroclinic pressure gradient. However, Mare Island Strait in San Francisco Bay has a local salinity minimum created by the phasing of the currents at the junction of Mare Island and Carquinez Straits. The salinity minimum creates converging baroclinic pressure gradients in Mare Island Strait. Equipment was deployed at four stations in the straits for 6 months from September 1997 to March 1998 to measure tidal variability of velocity, conductivity, temperature, depth, and suspended sediment concentration. Analysis of the measured time series shows that on a tidal time scale in Mare Island Strait, the landward and seaward baroclinic pressure gradients in the local salinity minimum interact with the barotropic gradient, creating regions of enhanced shear in the water column during the flood and reduced shear during the ebb. On a tidally averaged time scale, baroclinic pressure gradients converge on the tidally averaged salinity minimum and drive a converging near-bed and diverging surface current circulation pattern, forming a "baroclinic convergence zone" in Mare Island Strait. Historically large sedimentation rates in this area are attributed to the convergence zone.
Health status convergence at the local level: empirical evidence from Austria
2011-01-01
Introduction: Health is an important dimension of welfare comparisons across individuals, regions and states. Particularly from a long-term perspective, within-country convergence of the health status has rarely been investigated by applying methods well established in other scientific fields. In the following paper we study the relation between initial levels of the health status and its improvement at the local community level in Austria in the time period 1969-2004. Methods: We use age-standardized mortality rates from 2381 Austrian communities as an indicator for the health status and analyze the convergence/divergence of overall mortality for (i) the whole population, (ii) females, (iii) males and (iv) the gender mortality gap. Convergence/divergence is studied by applying different concepts of cross-regional inequality (weighted standard deviation, coefficient of variation, Theil coefficient of inequality). Various econometric techniques (weighted OLS, quantile regression, Kendall's rank concordance) are used to test for absolute and conditional beta-convergence in mortality. Results: Regarding sigma-convergence, we find rather mixed results. While the weighted standard deviation indicates an increase in equality for all four variables, the picture appears less clear when correcting for the decreasing mean in the distribution. However, we find highly significant coefficients for absolute and conditional beta-convergence between the periods. While these results are confirmed by several robustness tests, we also find evidence for the existence of convergence clubs. Conclusions: The highly significant beta-convergence across communities might be caused by (i) the efforts to harmonize and centralize the health policy at the federal level in Austria since the 1970s, (ii) the diminishing returns of the input factors in the health production function, which might lead to convergence, as the general conditions (e.g. income, education etc.)
improve over time, and (iii) the mobility of people across regions, as people tend to move to regions/communities which exhibit more favorable living conditions. JEL classification: I10, I12, I18 PMID:21864364
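The absolute beta-convergence test described above reduces to a weighted regression of the average annual change in log mortality on the initial log level. The sketch below illustrates this on synthetic data; the rates, weights, and coefficients are invented for illustration and are not the Austrian community data:

```python
import numpy as np

def beta_convergence_slope(initial, final, years, weights=None):
    """Absolute beta-convergence: regress the mean annual change in log
    mortality on the initial log level via weighted least squares.
    A significantly negative slope indicates convergence (communities
    with high initial mortality improve faster)."""
    x = np.log(initial)
    y = (np.log(final) - np.log(initial)) / years   # mean annual growth
    w = np.ones_like(x) if weights is None else np.asarray(weights, float)
    X = np.column_stack([np.ones_like(x), x])       # intercept + slope
    coef = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return coef[1]

# synthetic communities: high-mortality ones improve faster -> slope < 0
rng = np.random.default_rng(0)
init = rng.uniform(800.0, 1500.0, size=200)         # deaths per 100,000
final = init * np.exp(-0.35) * (init / init.mean()) ** -0.3
slope = beta_convergence_slope(init, final, years=35)
print(slope)  # negative: absolute beta-convergence
```

Conditional beta-convergence would add further covariates (income, education) as extra columns of `X`; the test itself is unchanged.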
Quantification provides a conceptual basis for convergent evolution.
Speed, Michael P; Arbuckle, Kevin
2017-05-01
While much of evolutionary biology attempts to explain the processes of diversification, there is an important place for the study of phenotypic similarity across life forms. When similar phenotypes evolve independently in different lineages, this is referred to as convergent evolution. Although long recognised, evolutionary convergence is receiving a resurgence of interest. This is in part because new genomic data sets allow detailed and tractable analysis of the genetic underpinnings of convergent phenotypes, and in part because of renewed recognition that convergence may reflect limitations in the diversification of life. In this review we propose that although convergent evolution itself does not require a new evolutionary framework, there is nonetheless room to develop a more systematic approach that will enable evaluation of the importance of convergent phenotypes in limiting the diversity of life's forms. We therefore propose that quantification of the frequency and strength of convergence, rather than simply the identification of cases of convergence, should be considered central to its systematic comprehension. We provide a non-technical review bringing together a wide range of existing methods that could be used to measure evolutionary convergence. We then argue that quantification also requires clear specification of the level at which the phenotype is being considered, and that the most constrained examples of convergence show similarity both in function and in several layers of underlying form. Finally, we argue that the most important and impressive examples of convergence are those that hold, in form and function, across a wide diversity of selective contexts, as these persist in the likely presence of different selection pressures within the environment. © 2016 The Authors. Biological Reviews published by John Wiley & Sons Ltd on behalf of Cambridge Philosophical Society.
Latif, Rabia; Abbas, Haider; Latif, Seemab; Masood, Ashraf
2016-07-01
Security and privacy are the foremost concerns that should be given special attention when dealing with Wireless Body Area Networks (WBANs). Because WBAN sensors operate in an unattended environment and carry critical patient health information, the Distributed Denial of Service (DDoS) attack is one of the major attacks in the WBAN environment: it not only exhausts the available resources but also undermines the reliability of the information being transmitted. This research work is an extension of our previous work, in which a machine learning based attack detection algorithm was proposed to detect DDoS attacks in the WBAN environment. However, in order to avoid complexity, no consideration was given to a traceback mechanism. During traceback, the challenge lies in reconstructing the attack path in order to identify the attack source. Among existing traceback techniques, the Probabilistic Packet Marking (PPM) approach is the most commonly used in conventional IP-based networks. However, since the marking probability assignment has a significant effect on both the convergence time and the performance of a scheme, PPM is not directly applicable in the WBAN environment due to its high convergence time and the overhead it places on intermediate nodes. Therefore, in this paper we propose a new scheme called the Efficient Traceback Technique (ETT), which is based on the Dynamic Probability Packet Marking (DPPM) approach and uses the MAC header in place of the IP header. Instead of a fixed marking probability, the proposed scheme uses a variable marking probability based on the number of hops travelled by a packet to reach the target node. Finally, path reconstruction algorithms are proposed to trace back an attacker. Evaluation and simulation results indicate that the proposed solution outperforms fixed PPM in terms of convergence time and computational overhead on nodes.
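The hop-count-dependent marking idea can be illustrated with a small simulation. This is a hedged sketch of dynamic probabilistic packet marking in general (marking with probability 1/d at the d-th hop), not the paper's ETT scheme, and the node names are invented:

```python
import random

def mark_probability(hops_travelled):
    """DPPM-style variable marking probability: 1/d at the d-th hop, so
    every node on the path is equally represented in received marks."""
    return 1.0 / max(1, hops_travelled)

def forward(path, packet):
    """Each node on the path may overwrite the packet's single mark field."""
    for d, node in enumerate(path, start=1):
        if random.random() < mark_probability(d):
            packet["mark"] = node
    return packet

random.seed(1)
path = ["sensor-A", "relay-B", "relay-C", "sink-D"]
counts = {n: 0 for n in path}
for _ in range(20000):
    counts[forward(path, {"mark": None})["mark"]] += 1
print(counts)  # each node holds the surviving mark about 1/4 of the time
```

The uniform outcome follows because a mark written at hop d with probability 1/d survives all later overwrites with probability d/n, so every node's mark reaches the victim with probability 1/n; a fixed marking probability cannot achieve this balance.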
Nonlinear convergence active vibration absorber for single and multiple frequency vibration control
NASA Astrophysics Data System (ADS)
Wang, Xi; Yang, Bintang; Guo, Shufeng; Zhao, Wenqiang
2017-12-01
This paper presents a nonlinear convergence algorithm for an active dynamic undamped vibration absorber (ADUVA). The damping of the absorber is ignored in this algorithm to strengthen the vibration-suppressing effect and, at the same time, simplify the algorithm. The simulation and experimental results indicate that this nonlinear convergence ADUVA can significantly suppress vibration caused by both single- and multiple-frequency excitation. The proposed nonlinear algorithm is composed of equivalent dynamic modeling equations and a frequency estimator. Both the single- and multiple-frequency ADUVA are mathematically imitated by the same mechanical structure, with a mass body and a voice coil motor (VCM). The nonlinear convergence estimator simultaneously satisfies the requirements of a fast convergence rate and a small steady-state frequency error, which are incompatible for a linear convergence estimator. The convergence of the nonlinear algorithm is mathematically proved, and its non-divergent characteristic is theoretically guaranteed. The vibration suppression experiments demonstrate that the nonlinear ADUVA converges faster and achieves greater oscillation attenuation than the linear ADUVA.
Evolution and convergence of the patterns of international scientific collaboration.
Coccia, Mario; Wang, Lili
2016-02-23
International research collaboration plays an important role in the social construction and evolution of science. Studies of science increasingly analyze international collaboration across multiple organizations for its role in improving research quality, advancing the efficiency of scientific production, and fostering breakthroughs in a shorter time. However, long-run patterns of international research collaboration across scientific fields, and their structural changes over time, are hardly known. Here we show the convergence of international scientific collaboration across research fields over time. Our study uses a dataset from the National Science Foundation and computes the fraction of papers that have international institutional coauthorships for various fields of science. We compare our results with pioneering studies carried out in the 1970s and 1990s by applying a standardization method that transforms all fractions of internationally coauthored papers into a comparable framework. We find, over 1973-2012, that the evolution of collaboration patterns across scientific disciplines seems to generate a convergence between the applied and basic sciences. We also show that the general architecture of international scientific collaboration, based on the ranking of fractions of international coauthorships for different scientific fields per year, has tended to remain unchanged over time, at least until now. Overall, this study shows, to our knowledge for the first time, the evolution of the patterns of international scientific collaboration starting from initial results described in the literature of the 1970s and 1990s. We find a convergence of these long-run collaboration patterns between the applied and basic sciences. This convergence might be one of the contributing factors that support the evolution of modern scientific fields.
Computation of type curves for flow to partially penetrating wells in water-table aquifers
Moench, Allen F.
1993-01-01
Evaluation of Neuman's analytical solution for flow to a well in a homogeneous, anisotropic, water-table aquifer commonly requires large amounts of computation time and can produce inaccurate results for selected combinations of parameters. Large computation times occur because the integrand of a semi-infinite integral involves the summation of an infinite series. Each term of the series requires evaluation of the roots of equations, and the series itself is sometimes slowly convergent. Inaccuracies can result from lack of computer precision or from the use of improper methods of numerical integration. In this paper it is proposed to use a method of numerical inversion of the Laplace transform solution, provided by Neuman, to overcome these difficulties. The solution in Laplace space is simpler in form than the real-time solution; that is, the integrand of the semi-infinite integral does not involve an infinite series or the need to evaluate roots of equations. Because the integrand is evaluated rapidly, advanced methods of numerical integration can be used to improve accuracy with an overall reduction in computation time. The proposed method of computing type curves, for which a partially documented computer program (WTAQ1) was written, was found to reduce computation time by factors of 2 to 20 over the time needed to evaluate the closed-form, real-time solution.
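The paper does not spell out the inversion quadrature in this abstract; the Gaver-Stehfest algorithm is one widely used way to numerically invert a Laplace-space drawdown solution of this kind. The sketch below implements it and checks it against a known transform pair (treat the choice of algorithm and N=12 terms as assumptions for illustration):

```python
import math

def stehfest_weights(N):
    """Stehfest coefficients V_i for an even number of terms N."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k)) / (
                math.factorial(N // 2 - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        V.append((-1) ** (i + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) given the Laplace-space function F(s).
    Only real-valued evaluations of F are needed, at s = i*ln(2)/t."""
    ln2_t = math.log(2.0) / t
    V = stehfest_weights(N)
    return ln2_t * sum(V[i - 1] * F(i * ln2_t) for i in range(1, N + 1))

# check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
print(abs(approx - math.exp(-1.0)) < 1e-4)  # True
```

Because each time value needs only N evaluations of the (smooth) Laplace-space integrand, the cost argument in the abstract follows directly: the expensive infinite series and root-finding of the real-time solution are avoided entirely.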
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1988-01-01
An abstract approximation and convergence theory for the closed-loop solution of discrete-time linear-quadratic regulator problems for parabolic systems with unbounded input is developed. Under relatively mild stabilizability and detectability assumptions, functional analytic, operator techniques are used to demonstrate the norm convergence of Galerkin-based approximations to the optimal feedback control gains. The application of the general theory to a class of abstract boundary control systems is considered. Two examples, one involving the Neumann boundary control of a one-dimensional heat equation, and the other, the vibration control of a cantilevered viscoelastic beam via shear input at the free end, are discussed.
Convergence and rate analysis of neural networks for sparse approximation.
Balavoine, Aurèle; Romberg, Justin; Rozell, Christopher J
2012-09-01
We present an analysis of the Locally Competitive Algorithm (LCA), which is a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few nonzero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA lacks analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.
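The LCA dynamics can be sketched in a few lines: internal states follow leaky-integrator dynamics driven by the residual through lateral inhibition, and the outputs are a soft threshold of the states. The step size, threshold, and problem sizes below are illustrative choices, not the paper's:

```python
import numpy as np

def soft(u, lam):
    """Soft-threshold activation linking internal states to outputs."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, y, lam=0.1, tau=0.05, steps=400):
    """Euler-discretized LCA for min 0.5||y - Phi a||^2 + lam ||a||_1."""
    n = Phi.shape[1]
    u = np.zeros(n)                     # internal (membrane) states
    b = Phi.T @ y                       # feedforward drive
    G = Phi.T @ Phi - np.eye(n)         # lateral inhibition (no self-term)
    for _ in range(steps):
        a = soft(u, lam)                # thresholded outputs
        u += tau * (b - u - G @ a)      # du/dt = b - u - G a
    return soft(u, lam)

rng = np.random.default_rng(42)
Phi = rng.standard_normal((20, 50))
Phi /= np.linalg.norm(Phi, axis=0)      # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[3], a_true[17] = 1.5, -2.0
y = Phi @ a_true
a_hat = lca(Phi, y)
print(np.nonzero(a_hat)[0])             # support concentrates on atoms 3, 17
```

The exponential convergence result in the abstract concerns exactly these dynamics: once the active set stabilizes, the system is linear and contracts toward the fixed point.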
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.
1992-01-01
We investigate the convergence properties of Lambda-acceleration methods for non-LTE radiative transfer problems in planar and spherical geometry. Matrix elements of the 'exact' Lambda-operator are used to accelerate convergence to a solution in which both the radiative transfer and atomic rate equations are simultaneously satisfied. Convergence properties of two-level and multilevel atomic systems are investigated for methods using: (1) the complete Lambda-operator, and (2) the diagonal of the Lambda-operator. We find that the convergence properties of the method utilizing the complete Lambda-operator are significantly better than those of the diagonal Lambda-operator method, often reducing the number of iterations needed for convergence by a factor of between two and seven. However, the overall computational time required for large-scale calculations - that is, those with many atomic levels and spatial zones - is typically a factor of a few larger for the complete Lambda-operator method, suggesting that the approach is best applied to problems in which convergence is especially difficult.
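The diagonal-versus-complete operator trade-off has a simple linear-algebra analogue: a preconditioned Richardson iteration with the full matrix converges in a single effective step but each step is expensive, while a diagonal (Jacobi-style) preconditioner is cheap per step but needs many more iterations. A generic sketch of that trade-off, not a radiative transfer code:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned system
b = rng.standard_normal(n)

def iterate(M_solve, tol=1e-8, max_iter=500):
    """Preconditioned Richardson iteration x += M^{-1}(b - A x).
    Returns the iteration count at which the residual fell below tol."""
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return k
        x += M_solve(r)
    return max_iter

iters_diag = iterate(lambda r: r / np.diag(A))         # diagonal operator
iters_full = iterate(lambda r: np.linalg.solve(A, r))  # complete operator
print(iters_full, iters_diag)  # the complete operator needs far fewer sweeps
```

As in the abstract, which preconditioner wins overall depends on the per-iteration cost: inverting the complete operator scales badly with problem size even though it slashes the iteration count.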
Convergence Time and Phase Transition in a Non-monotonic Family of Probabilistic Cellular Automata
NASA Astrophysics Data System (ADS)
Ramos, A. D.; Leite, A.
2017-08-01
In dynamical systems, some of the most important questions are related to phase transitions and convergence time. We consider a one-dimensional probabilistic cellular automaton whose components assume two possible states, zero and one, and interact with their two nearest neighbors at each time step. Under the local interaction, if a component is in the same state as its two neighbors, it does not change its state. In the other cases, a component in state zero turns into a one with probability α, and a component in state one turns into a zero with probability 1-β. For certain values of α and β, we show that the process always converges weakly to δ_0, the measure concentrated on the configuration where all the components are zeros. Moreover, the mean time of this convergence is finite, and we describe an upper bound for it, which is a linear function of the initial distribution. We also demonstrate an application of our results to the percolation PCA. Finally, we use mean-field approximation and Monte Carlo simulations to show the coexistence of three distinct behaviours for some values of the parameters α and β.
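The local rule can be simulated directly. The sketch below uses illustrative parameter values (small α, large 1-β) in a regime where the all-zeros configuration should be reached quickly; the lattice size and parameters are invented, not taken from the paper:

```python
import random

def step(config, alpha, beta, rng):
    """One synchronous update of the two-neighbour PCA described above:
    a cell agreeing with both neighbours keeps its state; otherwise a 0
    becomes 1 with probability alpha and a 1 becomes 0 with probability
    1 - beta.  Periodic boundary conditions."""
    n = len(config)
    new = []
    for i in range(n):
        left, me, right = config[i - 1], config[i], config[(i + 1) % n]
        if left == me == right:
            new.append(me)
        elif me == 0:
            new.append(1 if rng.random() < alpha else 0)
        else:
            new.append(0 if rng.random() < 1 - beta else 1)
    return new

rng = random.Random(7)
config = [rng.randint(0, 1) for _ in range(200)]
t = 0
while any(config) and t < 10000:
    config = step(config, alpha=0.05, beta=0.2, rng=rng)
    t += 1
print(t, all(c == 0 for c in config))  # hits the absorbing all-zeros state
```

The all-zeros configuration is absorbing here: a zero surrounded by zeros never flips, which is why δ_0 is the natural candidate limit measure in this parameter regime.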
NASA Astrophysics Data System (ADS)
Wang, Xiaowei; Li, Huiping; Li, Zhichao
2018-04-01
The interfacial heat transfer coefficient (IHTC) is one of the most important thermophysical parameters and has significant effects on the calculation accuracy of physical fields in numerical simulation. In this study, the artificial fish swarm algorithm (AFSA) was used to evaluate the IHTC between a heated sample and the quenchant in a one-dimensional heat conduction problem. AFSA is a global optimization method. In order to speed up convergence, a hybrid method combining AFSA with the normal distribution method (ZAFSA) was presented. The IHTC values evaluated by ZAFSA were compared with those obtained by AFSA and by the advanced-retreat method combined with the golden section method. The results show that a reasonable IHTC is obtained by using ZAFSA and that the hybrid method converges well. The algorithm based on ZAFSA not only accelerates convergence, but also reduces numerical oscillation in the evaluation of the IHTC.
NASA Technical Reports Server (NTRS)
Iannicca, Dennis; Hylton, Alan; Ishac, Joseph
2012-01-01
Delay-Tolerant Networking (DTN) is an active area of research in the space communications community. DTN uses a standard layered approach with the Bundle Protocol operating on top of transport layer protocols known as convergence layers that actually transmit the data between nodes. Several different common transport layer protocols have been implemented as convergence layers in DTN implementations including User Datagram Protocol (UDP), Transmission Control Protocol (TCP), and Licklider Transmission Protocol (LTP). The purpose of this paper is to evaluate several stand-alone implementations of negative-acknowledgment based transport layer protocols to determine how they perform in a variety of different link conditions. The transport protocols chosen for this evaluation include Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP), Licklider Transmission Protocol (LTP), NACK-Oriented Reliable Multicast (NORM), and Saratoga. The test parameters that the protocols were subjected to are characteristic of common communications links ranging from terrestrial to cis-lunar and apply different levels of delay, line rate, and error.
Does healthcare financing converge? Evidence from eight OECD countries.
Chen, Wen-Yi
2013-12-01
This study investigated the convergence of healthcare financing across eight OECD countries during 1960-2009 for the first time. A panel stationarity test incorporating both shapes of multiple structural breaks (i.e., sharp drifts and smooth transition shifts) and cross-sectional dependence was used to provide reliable evidence of convergence in healthcare financing. Our results suggested that the public share of total healthcare financing in the eight OECD countries has exhibited signs of convergence towards that of the US. This convergence reflected not only a decline in the share of public healthcare financing in the eight OECD countries but also an upward trend in the share of public healthcare financing in the US over the period 1960-2009.
Davis, Barbara A; Kiesel, Cynthia K; McFarland, Julie; Collard, Adressa; Coston, Kyle; Keeton, Ada
2005-01-01
Having reliable and valid instruments is a necessity for nurses and others measuring concepts such as patient satisfaction. The purpose of this article is to describe the use of convergence to test the construct validity of the Davis Consumer Emergency Care Satisfaction Scale (CECSS). Results indicate convergence of the CECSS with the Risser Patient Satisfaction Scale and 2 single-item visual analogue scales, therefore supporting construct validity. Persons measuring patient satisfaction with nurse behaviors in the emergency department can confidently use the CECSS.
Validation of the Social Appearance Anxiety Scale: Factor, Convergent, and Divergent Validity
ERIC Educational Resources Information Center
Levinson, Cheri A.; Rodebaugh, Thomas L.
2011-01-01
The Social Appearance Anxiety Scale (SAAS) was created to assess fear of overall appearance evaluation. Initial psychometric work indicated that the measure had a single-factor structure and exhibited excellent internal consistency, test-retest reliability, and convergent validity. In the current study, the authors further examined the factor,…
Finite element concepts in computational aerodynamics
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
Finite element theory was employed to establish an implicit numerical solution algorithm for the time-averaged unsteady Navier-Stokes equations. Both the multidimensional and a time-split form of the algorithm were considered, the latter of particular interest for problem specification on a regular mesh. A Newton matrix iteration procedure is outlined for solving the resultant nonlinear algebraic equation systems. Multidimensional discretization procedures are discussed with emphasis on automated generation of specific nonuniform solution grids and the treatment of curved surfaces. The time-split algorithm was evaluated with regard to accuracy and convergence properties for hyperbolic equations on rectangular coordinates. An overall assessment of the viability of the finite element concept for computational aerodynamics is made.
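A Newton matrix iteration of the kind outlined reduces, for an algebraic system F(x) = 0, to repeatedly solving J Δx = -F and updating x. A toy two-equation sketch of that procedure (not the Navier-Stokes discretization itself):

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Newton iteration: solve J(x) dx = -F(x), update x += dx,
    stop when the step is smaller than tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# toy system: intersection of the circle x^2 + y^2 = 4 with the line x = y
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = newton(F, J, [1.0, 0.5])
print(root)  # converges to [sqrt(2), sqrt(2)]
```

In the implicit finite element setting, J is the global Jacobian of the discretized equations, so each Newton step costs one large sparse linear solve; the quadratic local convergence is what makes the implicit formulation competitive.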
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Dale A.
This model description is supplemental to the Lawrence Livermore National Laboratory (LLNL) report LLNL-TR-642494, Technoeconomic Evaluation of MEA versus Mixed Amines for CO2 Removal at Near-Commercial Scale at Duke Energy Gibson 3 Plant. We describe the assumptions and methodology used in the Laboratory's simulation of its understanding of Huaneng's novel amine solvent for CO2 capture with 35% mixed amine. The results of that simulation have been described in LLNL-TR-642494. The simulation was performed using ASPEN 7.0. The composition of Huaneng's novel amine solvent was estimated based on information gleaned from Huaneng patents. The chemistry of the process was described using nine equations, representing reactions within the absorber and stripper columns using the ELECTNRTL property method. As a rate-based ASPEN simulation model was not available to Lawrence Livermore at the time of writing, the height of a theoretical plate was estimated using open literature for similar processes. The composition of the flue gas was estimated based on information supplied by Duke Energy for Unit 3 of the Gibson plant. The simulation was scaled at one million short tons of CO2 absorbed per year. To aid the stability of the model, convergence of the main solvent recycle loop was implemented manually, as described in the Blocks section below. Automatic convergence of this loop led to instability during the model iterations; manual convergence enabled accurate representation and maintained model stability.
NASA Astrophysics Data System (ADS)
Yokoyama, Yoshiaki; Kim, Minseok; Arai, Hiroyuki
When space-time processing techniques with multiple antennas are used for mobile radio communication, real-time weight adaptation is necessary. Owing to the progress of integrated-circuit technology, dedicated processor implementation with an ASIC or FPGA can be employed to realize various wireless applications. This paper presents a resource and performance evaluation of a QRD-RLS systolic array processor based on a fixed-point CORDIC algorithm on an FPGA. To save hardware resources, we propose a shared architecture for a complex CORDIC processor. The required precision of internal calculation, the circuit area as a function of the number of antenna elements and the wordlength, and the processing speed are evaluated. The resource estimation shows a feasible processor configuration on a current FPGA on the market. Computer simulations assuming a fading channel show a fast convergence property with a finite number of training symbols. The proposed architecture has also been implemented, and its operation was verified by beamforming evaluation through a radio propagation experiment.
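At the heart of Givens-rotation QRD-RLS arrays is the CORDIC vectoring iteration: rotating a vector toward the x-axis using only shifts, adds, and a small table of arctangents. A floating-point sketch of that iteration follows; the iteration count is illustrative and is not the paper's FPGA wordlength choice:

```python
import math

def cordic_vectoring(x, y, iterations=16):
    """CORDIC vectoring mode: drive y toward zero with micro-rotations
    by +/- atan(2^-i), accumulating the total rotation angle.  Returns
    (magnitude, angle) after removing the known CORDIC gain."""
    angle = 0.0
    for i in range(iterations):
        d = -1 if y > 0 else 1            # rotation direction: shrink |y|
        x, y = x - d * (y * 2.0 ** -i), y + d * (x * 2.0 ** -i)
        angle -= d * math.atan(2.0 ** -i)
    gain = math.prod(math.sqrt(1.0 + 4.0 ** -i) for i in range(iterations))
    return x / gain, angle                # magnitude, phase of (x0, y0)

mag, phase = cordic_vectoring(3.0, 4.0)
print(round(mag, 3), round(phase, 3))     # 5.0 0.927 (= atan2(4, 3))
```

In a fixed-point hardware version the multiplications by 2^-i become arithmetic shifts, which is exactly why CORDIC suits FPGA systolic arrays: each Givens rotation in the QR update needs only this vectoring step plus matching rotation steps.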
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors are explicitly considered in a GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
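The "finite neighborhood" behaviour has a compact illustration in tabular value iteration: injecting a bounded error ε at every sweep leaves the iterates within ε/(1-γ) of the error-free values, by the contraction property of the Bellman operator. This toy MDP is an analogy for intuition, not the paper's adaptive dynamic programming scheme:

```python
import numpy as np

n_states, gamma, eps = 5, 0.9, 0.01
rng = np.random.default_rng(3)
# random deterministic MDP: transitions P[a, s] and rewards R[a, s]
P = rng.integers(0, n_states, size=(2, n_states))
R = rng.uniform(0.0, 1.0, size=(2, n_states))

def sweep(V, noise=0.0):
    """One Bellman-optimality sweep, optionally corrupted by a bounded
    per-state 'approximation error' drawn uniformly from [-noise, noise]."""
    Q = R + gamma * V[P]                 # Q[a, s]
    V_new = Q.max(axis=0)
    if noise:
        V_new += rng.uniform(-noise, noise, size=n_states)
    return V_new

V_exact = np.zeros(n_states)
V_noisy = np.zeros(n_states)
for _ in range(500):
    V_exact = sweep(V_exact)
for _ in range(500):
    V_noisy = sweep(V_noisy, noise=eps)

gap = np.abs(V_noisy - V_exact).max()
print(gap, eps / (1 - gamma))  # gap stays below eps / (1 - gamma)
```

The bound follows from e_k ≤ γ e_{k-1} + ε, so e_k < ε(1 + γ + γ² + ...) = ε/(1-γ): the errors cannot compound without limit, which is the tabular analogue of the convergence criterion in the abstract.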
NASA Astrophysics Data System (ADS)
Somoza, R.
1998-05-01
Recently published seafloor data around the Antarctica plate boundaries, as well as calibration of the Cenozoic Magnetic Polarity Time Scale, allow a reevaluation of the Nazca (Farallon)-South America relative convergence kinematics since late Middle Eocene time. The new reconstruction parameters confirm the basic characteristics determined in previous studies. However, two features are notable in the present data set: a strong increase in convergence rate in Late Oligocene time, and a slowdown during Late Miocene time. The former is coeval with the early development of important tectonic characteristics of the present Central Andes, such as compressional failure in wide areas of the region, and the establishment of Late Cenozoic magmatism. This supports the idea that a relationship exists between strong acceleration of convergence and mountain building in the Central Andean region.
Guihéneuf, N; Bour, O; Boisson, A; Le Borgne, T; Becker, M W; Nigon, B; Wajiduddin, M; Ahmed, S; Maréchal, J-C
2017-11-01
In fractured media, solute transport is controlled by advection in open and connected fractures and by matrix diffusion, which may be enhanced by chemical weathering of the fracture walls. These phenomena may lead to non-Fickian dispersion characterized by early tracer arrival time, late-time tailing on the breakthrough curves, and a potential scale effect on transport processes. Here we investigate the scale dependency of these processes by analyzing a series of convergent and push-pull tracer experiments with distances of investigation ranging from 4 m to 41 m in shallow fractured granite. The small- and intermediate-distance convergent experiments display non-Fickian tailing, characterized by a -2 power law slope. However, the largest-distance experiment does not display a clear power law behavior and indicates possibly two main pathways. The push-pull experiments show that breakthrough curve tailing decreases as the volume of investigation increases, with a power law slope ranging from -3 to -2.3 from the smallest to the largest volume. The multipath model developed by Becker and Shapiro (2003) is used here to evaluate the hypothesis of the independence of flow pathways. The multipath model is found to explain the convergent data when local dispersivity is increased and the number of pathways is reduced with distance, which suggests a transition from non-Fickian to Fickian transport at the fracture scale. However, this model predicts an increase of tailing with push-pull distance, while the experiments show the opposite trend. This inconsistency may suggest the activation of cross-channel mass transfer at larger volumes of investigation, which leads to non-reversible heterogeneous advection with scale. This transition from independent channels to connected channels as the volume of investigation increases suggests that both convergent and push-pull breakthrough curves can inform on the existence of characteristic length scales. Copyright © 2017 Elsevier B.V. All rights reserved.
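Estimating a late-time power-law slope of the kind reported (-3 to -2) amounts to a linear regression in log-log space over the tail of the breakthrough curve. A sketch on an idealized synthetic tail (the cutoff time and slope are invented, not the field data):

```python
import numpy as np

t = np.logspace(0, 3, 200)                 # time since injection
c = t ** -2.0                              # idealized late-time concentration
late = t > 10.0                            # restrict the fit to the tail
slope, intercept = np.polyfit(np.log10(t[late]), np.log10(c[late]), 1)
print(round(slope, 2))  # -2.0, the non-Fickian tailing exponent
```

On real breakthrough curves the choice of the late-time window matters: including the peak or the noise floor biases the fitted exponent, which is one reason the largest-distance experiment above resists a clean power-law fit.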
Microprocessor realizations of range rate filters
NASA Technical Reports Server (NTRS)
1979-01-01
The performance of five digital range rate filters is evaluated. A range rate filter receives an input of range data from a radar unit and produces an output of smoothed range data and its estimated derivative range rate. The filters are compared through simulation on an IBM 370. Two of the filter designs are implemented on a 6800 microprocessor-based system. Comparisons are made on the bases of noise variance reduction ratios and convergence times of the filters in response to simulated range signals.
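The report's five filter designs are not specified in this abstract, but a classic example of a range rate filter is the alpha-beta tracker, which smooths noisy range measurements and estimates their derivative. The gains, noise level, and scenario below are illustrative assumptions:

```python
import random

def alpha_beta_track(measurements, dt, alpha=0.5, beta=0.05):
    """Alpha-beta filter: predict range with the current rate estimate,
    then correct both range and range rate with fixed gains."""
    x, v = measurements[0], 0.0            # initial range, range rate
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt                # predict
        r = z - x_pred                     # innovation (residual)
        x = x_pred + alpha * r             # smoothed range
        v = v + (beta / dt) * r            # smoothed range rate
        estimates.append((x, v))
    return estimates

random.seed(2)
dt, true_rate = 0.1, -30.0                 # target closing at 30 m/s
truth = [5000.0 + true_rate * dt * k for k in range(300)]
meas = [r + random.gauss(0.0, 2.0) for r in truth]
est = alpha_beta_track(meas, dt)
final_range, final_rate = est[-1]
print(final_range, final_rate)             # near truth after convergence
```

The gains trade noise-variance reduction against convergence time, exactly the two comparison criteria used in the report: larger alpha and beta converge faster but pass more measurement noise into the range rate output.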
Başkent, Deniz; Eiler, Cheryl L; Edwards, Brent
2007-06-01
To present a comprehensive analysis of the feasibility of genetic algorithms (GA) for finding the best fit of hearing aids or cochlear implants for individual users in clinical or research settings, where the algorithm is solely driven by subjective human input. Due to varying pathology, the best settings of an auditory device differ for each user. It is also likely that listening preferences vary at the same time. The settings of a device customized for a particular user can only be evaluated by the user. When optimization algorithms are used for fitting purposes, this situation poses a difficulty for a systematic and quantitative evaluation of the suitability of the fitting parameters produced by the algorithm. In the present study, an artificial listening environment was generated by distorting speech using a noiseband vocoder. The settings produced by the GA for this listening problem could objectively be evaluated by measuring speech recognition and comparing the performance to the best vocoder condition where speech was least distorted. Nine normal-hearing subjects participated in the study. The parameters to be optimized were the number of vocoder channels, the shift between the input frequency range and the synthesis frequency range, and the compression-expansion of the input frequency range over the synthesis frequency range. The subjects listened to pairs of sentences processed with the vocoder, and entered a preference for the sentence with better intelligibility. The GA modified the solutions iteratively according to the subject preferences. The program converged when the user ranked the same set of parameters as the best in three consecutive steps. The results produced by the GA were analyzed for quality by measuring speech intelligibility, for test-retest reliability by running the GA three times with each subject, and for convergence properties. 
Speech recognition scores averaged across subjects were similar for the best vocoder solution and for the solutions produced by the GA. The average number of iterations was 8 and the average convergence time was 25.5 minutes. The settings produced by different GA runs for the same subject were slightly different; however, speech recognition scores measured with these settings were similar. Individual data from subjects showed that in each run, a small number of GA solutions produced poorer speech intelligibility than for the best setting. This was probably a result of the combination of the inherent randomness of the GA, the convergence criterion used in the present study, and possible errors that the users might have made during the paired comparisons. On the other hand, the effect of these errors was probably small compared to the other two factors, as a comparison between subjective preferences and objective measures showed that for many subjects the two were in good agreement. The results showed that the GA was able to produce good solutions by using listener preferences in a relatively short time. For practical applications, the program can be made more robust by running the GA twice or by not using an automatic stopping criterion, and it can be made faster by optimizing the number of the paired comparisons completed in each iteration.
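Optimization driven purely by pairwise preferences, as in the study, can be sketched with a toy genetic algorithm in which the only feedback is a "which is better?" comparator. The simulated listener and the parameter names (channels, shift, compression) are stand-ins for illustration, not the study's vocoder settings:

```python
import random

random.seed(5)
IDEAL = {"channels": 12, "shift": 3, "compression": 7}  # hidden preference
KEYS = list(IDEAL)

def dist(s):
    return sum((s[k] - IDEAL[k]) ** 2 for k in KEYS)

def prefer(a, b):
    """Simulated paired comparison: pick the setting nearer the ideal."""
    return a if dist(a) <= dist(b) else b

def best_of(population):
    b = population[0]
    for s in population[1:]:
        b = prefer(b, s)
    return b

def mutate(s):
    child = dict(s)
    k = random.choice(KEYS)
    child[k] = max(0, child[k] + random.choice([-1, 1]))
    return child

pop = [{k: random.randint(0, 20) for k in KEYS} for _ in range(8)]
init_best = best_of(pop)
for _ in range(300):
    elite = best_of(pop)                   # elitism: never lose the favourite
    winners = [elite] + [prefer(random.choice(pop), random.choice(pop))
                         for _ in range(3)]
    pop = winners + [mutate(random.choice(winners)) for _ in range(4)]
final_best = best_of(pop)
print(dist(init_best), dist(final_best))   # distance to the ideal shrinks
```

Keeping the current favourite in every generation (elitism) is one way to provide the robustness the authors suggest, since a single erroneous paired comparison then cannot discard the best settings found so far.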
A Symmetric Positive Definite Formulation for Monolithic Fluid Structure Interaction
2010-08-09
more likely to converge than simply iterating the partitioned approach to convergence in a simple Gauss-Seidel manner. Our approach allows the use of ... conditions in a second step. These approaches can also be iterated within a given time step for increased stability, noting that in the limit if one ... converges one obtains a monolithic (albeit expensive) approach. Other approaches construct strongly coupled systems and then solve them in one of several
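The trade-off described above, between iterating a partitioned scheme and paying for a monolithic solve, can be illustrated with a toy linear two-field coupling: Gauss-Seidel-style alternation between the two blocks may need many sweeps (or diverge when coupling is strong), whereas solving the coupled system monolithically converges at once. The 2x2 block system below is purely illustrative, not a fluid-structure discretization:

```python
import numpy as np

# Coupled system [[A, C], [C.T, B]] [x, y] = [f, g], with coupling C.
A = np.array([[4.0]]); B = np.array([[3.0]]); C = np.array([[2.0]])
f = np.array([1.0]); g = np.array([1.0])

def partitioned_sweeps(tol=1e-10, max_sweeps=1000):
    # Gauss-Seidel on the blocks: solve for x holding y, then y holding x.
    x = np.zeros(1); y = np.zeros(1)
    for sweep in range(1, max_sweeps + 1):
        x = np.linalg.solve(A, f - C @ y)
        y_new = np.linalg.solve(B, g - C.T @ x)
        if np.linalg.norm(y_new - y) < tol:
            return sweep
        y = y_new
    return max_sweeps

def monolithic():
    # Assemble and solve the full coupled matrix in one step.
    K = np.block([[A, C], [C.T, B]])
    return np.linalg.solve(K, np.concatenate([f, g]))

print(partitioned_sweeps(), monolithic())
```

In the limit of full convergence the iterated partitioned answer matches the monolithic one, which is the observation the excerpt makes; the monolithic solve trades the sweeps for a larger (more expensive) system.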
Thermophysical modelling for high-resolution digital terrain models
NASA Astrophysics Data System (ADS)
Pelivan, I.
2018-07-01
A method is presented for efficiently calculating surface temperatures for highly resolved celestial body shapes. A thorough investigation of the conditions needed to reach model convergence shows that the speed of surface temperature convergence depends on factors such as the quality of initial boundary conditions, thermal inertia, illumination conditions, and resolution of the numerical depth grid. The optimization process to shorten the simulation time while increasing or maintaining the accuracy of model results includes the introduction of facet-specific boundary conditions such as pre-computed temperature estimates and pre-evaluated simulation times. The individual facet treatment also allows for assigning other facet-specific properties such as local thermal inertia. The approach outlined in this paper is particularly useful for very detailed digital terrain models in combination with unfavourable illumination conditions, such as little to no sunlight for a period of time as experienced locally on comet 67P/Churyumov-Gerasimenko. Possible science applications include thermal analysis of highly resolved local (landing) sites experiencing seasonal, environmental, and lander shadowing. In combination with an appropriate roughness model, the method is very suitable for application to disc-integrated and disc-resolved data. Further applications are seen where the complexity of the task has led to severe shape or thermophysical model simplifications, such as in studying surface activity or thermal cracking.
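The convergence issue can be caricatured with a toy 1-D heat-conduction model for a single facet: iterate diurnal cycles on a depth grid until the cycle-averaged surface temperature repeats, and compare a good initial temperature guess against a poor one. All numbers (diffusivity, forcing, tolerance) are illustrative rather than 67P values, and the surface forcing is a crude relaxation toward a sinusoid rather than a radiative balance:

```python
import math

def simulate(T_init, n_layers=30, n_cycles_max=200, tol=0.01):
    kappa = 1e-6                 # thermal diffusivity, m^2/s (illustrative)
    P = 86400.0                  # rotation period, s
    skin = math.sqrt(kappa * P / math.pi)
    dz = skin / 4                # resolve the diurnal skin depth
    dt = 0.2 * dz * dz / kappa   # explicit stability limit with margin
    steps = int(P / dt)
    T = [T_init] * n_layers
    prev_mean = None
    for cycle in range(1, n_cycles_max + 1):
        surface_mean = 0.0
        for s in range(steps):
            t = s * dt
            # Crude surface forcing instead of a radiative balance:
            T_forcing = 200 + 50 * math.sin(2 * math.pi * t / P)
            new = T[:]
            new[0] = (T[0] + 0.1 * (T_forcing - T[0])
                      + kappa * dt / dz**2 * (T[1] - T[0]))
            for i in range(1, n_layers - 1):
                new[i] = T[i] + kappa * dt / dz**2 * (T[i+1] - 2*T[i] + T[i-1])
            new[-1] = new[-2]    # insulated lower boundary
            T = new
            surface_mean += T[0] / steps
        # Converged when the cycle-averaged surface temperature repeats:
        if prev_mean is not None and abs(surface_mean - prev_mean) < tol:
            return cycle
        prev_mean = surface_mean
    return n_cycles_max

# A guess near equilibrium converges in far fewer cycles than a cold start,
# which is the effect facet-specific pre-computed estimates exploit.
print(simulate(200.0), simulate(50.0))
```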
Impact of Advance Rate on Entrapment Risk of a Double-Shielded TBM in Squeezing Ground
NASA Astrophysics Data System (ADS)
Hasanpour, Rohola; Rostami, Jamal; Barla, Giovanni
2015-05-01
Shielded tunnel boring machines (TBMs) can get stuck in squeezing ground due to excessive tunnel convergence under high in situ stress. This typically coincides with extended machine stoppages, when the ground has sufficient time to undergo substantial displacements. Excessive convergence of the ground beyond the designated overboring means ground pressure against the shield and high shield frictional resistance that, in some cases, cannot be overcome by the TBM thrust system. This leads to machine entrapment in the ground, which causes significant delays and requires labor-intensive and risky operations of manual excavation to release the machine. To evaluate the impact of the time factor on the possibility of machine entrapment, a comprehensive 3D finite difference simulation of a double-shielded TBM in squeezing ground was performed. The modeling allowed for observation of the impact of the tunnel advance rate on the possibility of machine entrapment in squeezing ground. For this purpose, the model included rock mass properties related to creep in severe squeezing conditions. This paper offers an overview of the modeling results for a given set of rock mass and TBM parameters, as well as lining characteristics, including the magnitude of displacement and contact forces on shields and ground pressure on segmental lining versus time for different advance rates.
Thermophysical modeling for high-resolution digital terrain models
NASA Astrophysics Data System (ADS)
Pelivan, I.
2018-04-01
A method is presented for efficiently calculating surface temperatures for highly resolved celestial body shapes. A thorough investigation of the conditions needed to reach model convergence shows that the speed of surface temperature convergence depends on factors such as the quality of initial boundary conditions, thermal inertia, illumination conditions, and resolution of the numerical depth grid. The optimization process to shorten the simulation time while increasing or maintaining the accuracy of model results includes the introduction of facet-specific boundary conditions such as pre-computed temperature estimates and pre-evaluated simulation times. The individual facet treatment also allows for assigning other facet-specific properties such as local thermal inertia. The approach outlined in this paper is particularly useful for very detailed digital terrain models in combination with unfavorable illumination conditions, such as little to no sunlight for a period of time as experienced locally on comet 67P/Churyumov-Gerasimenko. Possible science applications include thermal analysis of highly resolved local (landing) sites experiencing seasonal, environmental and lander shadowing. In combination with an appropriate roughness model, the method is very suitable for application to disk-integrated and disk-resolved data. Further applications are seen where the complexity of the task has led to severe shape or thermophysical model simplifications, such as in studying surface activity or thermal cracking.
Validity of three clinical performance assessments of internal medicine clerks.
Hull, A L; Hodder, S; Berger, B; Ginsberg, D; Lindheim, N; Quan, J; Kleinhenz, M E
1995-06-01
To analyze the construct validity of three methods to assess the clinical performances of internal medicine clerks. A multitrait-multimethod (MTMM) study was conducted at the Case Western Reserve University School of Medicine to determine the convergent and divergent validity of a clinical evaluation form (CEF) completed by faculty and residents, an objective structured clinical examination (OSCE), and the medicine subject test of the National Board of Medical Examiners. Three traits were involved in the analysis: clinical skills, knowledge, and personal characteristics. A correlation matrix was computed for 410 third-year students who completed the clerkship between August 1988 and July 1991. There was a significant (p < .01) convergence of the four correlations that assessed the same traits by using different methods. However, the four convergent correlations were of moderate magnitude (ranging from .29 to .47). Divergent validity was assessed by comparing the magnitudes of the convergence correlations with the magnitudes of correlations among unrelated assessments (i.e., different traits by different methods). Seven of nine possible coefficients were smaller than the convergent coefficients, suggesting evidence of divergent validity. A significant CEF method effect was identified. There was convergent validity and some evidence of divergent validity with a significant method effect. The findings were similar for correlations corrected for attenuation. Four conclusions were reached: (1) the reliability of the OSCE must be improved, (2) the CEF ratings must be redesigned to further discriminate among the specific traits assessed, (3) additional methods to assess personal characteristics must be instituted, and (4) several assessment methods should be used to evaluate individual student performances.
Kermajani, Hamidreza; Gomez, Carles
2014-01-01
The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) has been recently developed by the Internet Engineering Task Force (IETF). Given its crucial role in enabling the Internet of Things, a significant amount of research effort has already been devoted to RPL. However, the RPL network convergence process has not yet been investigated in detail. In this paper we study the influence of the main RPL parameters and mechanisms on the network convergence process of this protocol in IEEE 802.15.4 multihop networks. We also propose and evaluate a mechanism that leverages an option available in RPL for accelerating the network convergence process. We carry out extensive simulations for a wide range of conditions, considering different network scenarios in terms of size and density. Results show that network convergence performance depends dramatically on the use and adequate configuration of key RPL parameters and mechanisms. The findings and contributions of this work provide a RPL configuration guideline for network convergence performance tuning, as well as a characterization of the related performance trade-offs. PMID:25004154
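RPL paces its DIO control messages with the Trickle timer (RFC 6206), whose interval doubling is central to how convergence time trades off against control overhead. A schematic sketch, not a full RFC 6206 implementation: the redundancy/suppression machinery is collapsed into a per-interval consistency flag, and the parameter values are illustrative:

```python
import random

def trickle(events, imin=0.1, imax_doublings=8, k=1, seed=1):
    # events: one boolean per interval; True = "network looked consistent".
    rng = random.Random(seed)
    imax = imin * 2 ** imax_doublings
    I, t, sent = imin, 0.0, []
    for consistent in events:
        fire = t + rng.uniform(I / 2, I)   # random point in second half-ish
        c = k if consistent else 0          # suppressed when peers already spoke
        if c < k:
            sent.append(fire)               # transmit a DIO
        t += I
        # Inconsistency resets to Imin; otherwise the interval doubles,
        # so control traffic decays once the DODAG has converged.
        I = imin if not consistent else min(2 * I, imax)
    return sent

# Inconsistent start (topology forming), then a long consistent phase:
msgs = trickle([False] * 3 + [True] * 12)
print(len(msgs), "transmissions")
```

The exponential back-off is why RPL is cheap after convergence, and why parameters such as the minimum interval and redundancy constant dominate how fast the DODAG forms in the first place.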
New Radio and Optical Expansion Rate Measurements of the Crab Nebula
NASA Astrophysics Data System (ADS)
Bietenholz, M. F.; Nugent, R. L.
2016-06-01
We present new JVLA radio observations of the Crab nebula, which we use, along with older observations taken over a ~30 yr period, to determine the expansion rate of the synchrotron nebula. We find a convergence date for the radio synchrotron nebula of AD 1255 +/- 27. We also re-evaluated the expansion rate of the optical line-emitting filaments, and we show that the traditional estimates of their convergence date are slightly biased. We find an unbiased convergence date of AD 1091 +/- 34, ~40 yr earlier than previous estimates. Our results show that both the synchrotron nebula and the optical line-emitting filaments have been accelerated since the explosion in AD 1054, but the former more strongly than the latter. This finding supports the picture that the filaments are the result of the Rayleigh-Taylor instability at the interface between the pulsar-wind nebula and the surrounding freely-expanding supernova ejecta, and rules out models where the pulsar wind bubble is interacting directly with the pre-supernova wind of the Crab's progenitor. Our new observations were taken ~2 months after the gamma-ray flare of 2012 July, and also allow us to put a sensitive limit on any radio emission associated with the flare of <0.0002 times the radio luminosity of the nebula.
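The convergence date is the epoch at which linear back-extrapolation of a feature's position reaches the expansion centre: for a feature at angular radius r with proper motion rdot observed at epoch t_obs, t_conv = t_obs - r/rdot. Acceleration since the explosion shows up as t_conv falling later than AD 1054. A sketch with purely illustrative numbers, not the paper's measurements:

```python
def convergence_date(t_obs, r_arcsec, mu_arcsec_per_yr):
    # Linear back-extrapolation of position r at epoch t_obs with
    # proper motion mu: the date at which r would have been zero.
    return t_obs - r_arcsec / mu_arcsec_per_yr

# A hypothetical filament 100" from the centre moving at 0.11"/yr in 2015:
t_conv = convergence_date(2015.0, 100.0, 0.11)
print(t_conv)  # falls after AD 1054, so the filament has been accelerated
```

The gap between t_conv and the historical explosion date is the observable that quantifies how strongly each component has been accelerated.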
Sketching Designs Using the Five Design-Sheet Methodology.
Roberts, Jonathan C; Headleand, Chris; Ritsos, Panagiotis D
2016-01-01
Sketching designs has been shown to be a useful way of planning and considering alternative solutions. The use of lo-fidelity prototyping, especially paper-based sketching, can save time and money and converge to better solutions more quickly. However, this design process is often viewed as too informal. Consequently, users do not know how to manage their thoughts and ideas (to first think divergently, and then finally converge on a suitable solution). We present the Five Design-Sheet (FdS) methodology. The methodology enables users to create information visualization interfaces through lo-fidelity methods. Users sketch and plan their ideas, helping them express different possibilities, and think through these ideas to consider their potential effectiveness as solutions to the task (sheet 1); they create three principal designs (sheets 2, 3 and 4); before converging on a final realization design that can then be implemented (sheet 5). In this article, we present (i) a review of the use of sketching as a planning method for visualization and the benefits of sketching, (ii) a detailed description of the Five Design-Sheet (FdS) methodology, and (iii) an evaluation of the FdS using the System Usability Scale, along with a case study of its use in industry and experience of its use in teaching.
Convergence Acceleration of Runge-Kutta Schemes for Solving the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Swanson, Roy C., Jr.; Turkel, Eli; Rossow, C.-C.
2007-01-01
The convergence of a Runge-Kutta (RK) scheme with multigrid is accelerated by preconditioning with a fully implicit operator. With the extended stability of the Runge-Kutta scheme, CFL numbers as high as 1000 can be used. The implicit preconditioner addresses the stiffness in the discrete equations associated with stretched meshes. This RK/implicit scheme is used as a smoother for multigrid. Fourier analysis is applied to determine damping properties. Numerical dissipation operators based on the Roe scheme, a matrix dissipation, and the CUSP scheme are considered in evaluating the RK/implicit scheme. In addition, the effect of the number of RK stages is examined. Both the numerical and computational efficiency of the scheme with the different dissipation operators are discussed. The RK/implicit scheme is used to solve the two-dimensional (2-D) and three-dimensional (3-D) compressible, Reynolds-averaged Navier-Stokes equations. Turbulent flows over an airfoil and wing at subsonic and transonic conditions are computed. The effects of the cell aspect ratio on convergence are investigated for Reynolds numbers between 5.7 x 10(exp 6) and 100 x 10(exp 6). It is demonstrated that the implicit preconditioner can reduce the computational time of a well-tuned standard RK scheme by a factor between four and ten.
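The benefit of implicit preconditioning on stiff systems can be seen in miniature on a 2x2 linear problem: plain relaxation is step-limited by the stiffest mode (the analogue of a CFL limit on a stretched mesh), while preconditioning with an approximate implicit operator damps all modes at once. This is a generic illustration, not the paper's RK/implicit smoother:

```python
import numpy as np

A = np.diag([1.0, 1000.0])      # stiffness ratio 1000 (stretched-mesh analogue)
b = np.array([1.0, 1.0])

def iterations(M_inv, tol=1e-8, max_it=100000):
    # Preconditioned Richardson iteration x <- x + M^-1 (b - A x).
    x = np.zeros(2)
    for it in range(1, max_it + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return it
        x = x + M_inv @ r
    return max_it

# Explicit: scalar step bounded by the stiff mode, so the slow mode crawls.
explicit = iterations(np.eye(2) * (1.9 / 1000.0))
# "Implicit": apply an approximate inverse of A; all modes damped quickly.
implicit = iterations(np.linalg.inv(A + 0.1 * np.eye(2)))
print(explicit, implicit)
```

The iteration count ratio here is far larger than the paper's factor of four to ten, because a 2x2 diagonal system is the best case for an implicit solve; the point is only the mechanism, not the magnitude.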
Modified dwell time optimization model and its applications in subaperture polishing.
Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen
2014-05-20
The optimization of dwell time is an important procedure in deterministic subaperture polishing. We present a modified optimization model of dwell time using an iterative, numerical method, assisted by extended surface forms and tool paths for suppressing the edge effect. Compared with discrete convolution and linear equation models, the proposed model has essential compatibility with arbitrary tool paths, multiple tool influence functions (TIFs) in one optimization, and asymmetric TIFs. The emulational fabrication of a Φ200 mm workpiece by the proposed model yields a smooth, continuous, and non-negative dwell time map with a root-mean-square (RMS) convergence rate of 99.6%, and the optimization requires much less time. With the proposed model, the influences of TIF size and path interval on convergence rate and polishing time are optimized, respectively, for typical low and middle spatial-frequency errors. Results show that (1) the TIF size is nonlinearly inversely proportional to convergence rate and polishing time; a TIF size of ~1/7 of the workpiece size is preferred; and (2) the polishing time is less sensitive to path interval, but increasing the interval markedly reduces the convergence rate; a path interval of ~1/8-1/10 of the TIF size is deemed appropriate. The proposed model is deployed on JR-1800 and MRF-180 machines. Figuring results for a Φ920 mm Zerodur paraboloid and a Φ100 mm Zerodur plane yield RMS errors of 0.016λ and 0.013λ (λ=632.8 nm), respectively, thereby validating the feasibility of the proposed dwell time model for subaperture polishing.
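The core of any such dwell-time model is that the predicted removal is the convolution of the TIF with the dwell-time map. A generic 1-D multiplicative-update sketch in this spirit, not the paper's exact algorithm: the shapes, sizes, and update rule are all illustrative, and the non-negativity of the dwell map is enforced by construction:

```python
import numpy as np

def optimize_dwell(target, tif, iters=200):
    # Start from a uniform dwell map, then scale it pointwise by the ratio
    # of desired to achieved removal; the multiplicative update keeps the
    # dwell time non-negative, as a real machine requires.
    dwell = np.full_like(target, target.mean() / tif.sum())
    for _ in range(iters):
        removal = np.convolve(dwell, tif, mode="same")
        ratio = target / np.maximum(removal, 1e-12)
        dwell = np.maximum(dwell * ratio, 0.0)
    return dwell

x = np.linspace(0.0, 1.0, 200)
target = 1.0 + 0.3 * np.sin(4 * np.pi * x)     # desired removal depth
tif = np.exp(-np.linspace(-3, 3, 31) ** 2)     # Gaussian-like TIF
tif /= tif.sum()

dwell = optimize_dwell(target, tif)
residual = target - np.convolve(dwell, tif, mode="same")
print("interior residual RMS:", np.std(residual[40:-40]))
```

The residual left at the boundaries by the truncated convolution is exactly the "edge effect" the paper suppresses with extended surface forms and tool paths.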
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1990-01-01
Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.
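The element-blocking idea can be sketched with array operations standing in for vector hardware: each block of elements is evaluated with whole-array arithmetic instead of an element-by-element loop, and blocks are the natural units to hand to separate processors. The example uses simple 1-D two-node bar elements with illustrative values, not the paper's one-point-quadrature shells:

```python
import numpy as np

def internal_forces_blocked(coords, conn, u, EA, block=64):
    # Assemble the global internal force vector block by block.
    f = np.zeros_like(u)
    for start in range(0, len(conn), block):
        c = conn[start:start + block]           # (b, 2) node indices
        L = coords[c[:, 1]] - coords[c[:, 0]]   # element lengths, vectorized
        strain = (u[c[:, 1]] - u[c[:, 0]]) / L
        force = EA * strain                     # axial force per element
        np.add.at(f, c[:, 0], -force)           # scatter-add to global vector
        np.add.at(f, c[:, 1], force)            # (handles repeated nodes)
    return f

coords = np.linspace(0.0, 1.0, 11)              # 10 bar elements in a row
conn = np.column_stack([np.arange(10), np.arange(1, 11)])
u = 0.01 * coords                               # uniform 1% stretch
f = internal_forces_blocked(coords, conn, u, EA=1.0)
print(f)
```

Under uniform stretch the interior nodes see cancelling element forces, so only the end nodes carry a net internal force, which is a quick correctness check on the scatter step.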
Acceleration of Convergence to Equilibrium in Markov Chains by Breaking Detailed Balance
NASA Astrophysics Data System (ADS)
Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes
2017-07-01
We analyse and interpret the effects of breaking detailed balance on the convergence to equilibrium of conservative interacting particle systems and their hydrodynamic scaling limits. For finite systems of interacting particles, we review existing results showing that irreversible processes converge faster to their steady state than reversible ones. We show how this behaviour appears in the hydrodynamic limit of such processes, as described by macroscopic fluctuation theory, and we provide a quantitative expression for the acceleration of convergence in this setting. We give a geometrical interpretation of this acceleration, in terms of currents that are antisymmetric under time-reversal and orthogonal to the free energy gradient, which act to drive the system away from states where (reversible) gradient-descent dynamics result in slow convergence to equilibrium.
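A quantitative toy version of this acceleration in the diffusion setting: for the overdamped dynamics dX = -(I + gJ) grad U dt + sqrt(2) dW with a quadratic potential U, the antisymmetric part J is a current orthogonal to the free-energy gradient, so it leaves the Gaussian equilibrium invariant but can enlarge the spectral gap, i.e. speed convergence. The matrices below are illustrative:

```python
import numpy as np

A = np.diag([1.0, 0.1])                  # Hessian of U: one slow direction
J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # antisymmetric circulation

def spectral_gap(gamma):
    # Drift matrix of dX = -B X dt + sqrt(2) dW; one can check that
    # B S + S B^T = 2I holds with S = A^{-1} for every gamma, so the
    # stationary Gaussian is unchanged while the eigenvalues move.
    B = (np.eye(2) + gamma * J) @ A
    return np.min(np.linalg.eigvals(B).real)

print(spectral_gap(0.0), spectral_gap(5.0))  # reversible vs driven
```

With gamma = 0 the gap is the smallest curvature (0.1); the circulation mixes the slow and fast directions, pushing the eigenvalues into the complex plane with a larger common real part, so the driven system relaxes faster in every direction.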
ERIC Educational Resources Information Center
Marsh, Herbert W.; Abduljabbar, Adel Salah; Abu-Hilal, Maher M.; Morin, Alexandre J. S.; Abdelfattah, Faisal; Leung, Kim Chau; Xu, Man K.; Nagengast, Benjamin; Parker, Philip
2013-01-01
For the international Trends in International Mathematics and Science Study (TIMSS2007) math and science motivation scales (self-concept, positive affect, and value), we evaluated the psychometric properties (factor structure, method effects, gender differences, and convergent and discriminant validity) in 4 Arab-speaking countries (Saudi Arabia,…
Influence of Multidimensionality on Convergence of Sampling in Protein Simulation
NASA Astrophysics Data System (ADS)
Metsugi, Shoichi
2005-06-01
We study the problem of convergence of sampling in protein simulation originating in the multidimensionality of protein’s conformational space. Since several important physical quantities are given by second moments of dynamical variables, we attempt to obtain the time of simulation necessary for their sufficient convergence. We perform a molecular dynamics simulation of a protein and the subsequent principal component (PC) analysis as a function of simulation time T. As T increases, PC vectors with smaller amplitude of variations are identified and their amplitudes are equilibrated before identifying and equilibrating vectors with larger amplitude of variations. This sequential identification and equilibration mechanism makes protein simulation a useful method although it has an intrinsic multidimensional nature.
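The sequential identification-and-equilibration effect can be mimicked with synthetic AR(1) "modes": give the large-amplitude coordinate the longest correlation time, as in proteins where slow collective motions carry the largest variance, and watch short trajectories pin down the small fast mode long before the slow large one. Everything here is synthetic; it is an analogy for the PC amplitudes, not an MD simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1(n, tau, sigma):
    # AR(1) series with correlation time tau and stationary std sigma,
    # started (unequilibrated) at zero, like a fresh MD trajectory.
    phi = np.exp(-1.0 / tau)
    x = np.zeros(n)
    noise = rng.normal(0.0, sigma * np.sqrt(1 - phi**2), n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + noise[i]
    return x

n = 20000
slow = ar1(n, tau=2000, sigma=3.0)   # large-amplitude, slowly relaxing mode
fast = ar1(n, tau=10, sigma=0.5)     # small-amplitude, fast mode

for T in (1000, 5000, 20000):        # variance estimates vs trajectory length
    print(T, round(slow[:T].var(), 2), round(fast[:T].var(), 2))
```

The fast mode's variance estimate is already accurate at the shortest window, while the slow mode's estimate keeps growing with T, mirroring the paper's observation that small-amplitude PC vectors are identified and equilibrated first.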
Kaslow, David C; Kalil, Jorge; Bloom, David; Breghi, Gianluca; Colucci, Anna Maria; De Gregorio, Ennio; Madhavan, Guru; Meier, Genevieve; Seabrook, Richard; Xu, Xiaoning
2017-01-20
On 17 and 18 July 2015, a meeting in Siena jointly sponsored by ADITEC and GlaxoSmithKline (GSK) was held to review the goals of the Global Health 2035 Grand Convergence, to discuss current vaccine evaluation methods, and to determine the feasibility of reaching consensus on an assessment framework for comprehensively and accurately capturing the full benefits of vaccines. Through lectures and workshops, participants reached a consensus that Multi-Criteria-Decision-Analysis is a method suited to systematically account for the many variables needed to evaluate the broad benefits of vaccination, which include not only health system savings, but also societal benefits, including benefits to the family and increased productivity. Participants also agreed on a set of "core values" to be used in future assessments of vaccines for development and introduction. These values include measures of vaccine efficacy and safety, incident cases prevented per year, the results of cost-benefit analyses, preventable mortality, and the severity of the target disease. Agreement on this set of core assessment parameters has the potential to increase alignment between manufacturers, public health agencies, non-governmental organizations (NGOs), and policy makers (see Global Health 2035 Mission Grand Convergence [1]). The following sections capture the deliberations of a workshop (Working Group 4) chartered to: (1) review the list of 24 parameters selected from SMART vaccines (see the companion papers by Timmis et al. and Madhavan et al., respectively) to determine which represent factors (see Table 1) that should be taken into account when evaluating the role of vaccines in maximizing the success of the Global Health 2035 Grand Convergence; (2) develop 3-5 "core values" that should be taken into account when evaluating vaccines at various stages of development; and (3) determine how vaccines can best contribute to the Global Health 2035 Grand Convergence effort.
Variable convergence liquid layer implosions on the National Ignition Facility
NASA Astrophysics Data System (ADS)
Zylstra, A. B.; Yi, S. A.; Haines, B. M.; Olson, R. E.; Leeper, R. J.; Braun, T.; Biener, J.; Kline, J. L.; Batha, S. H.; Berzak Hopkins, L.; Bhandarkar, S.; Bradley, P. A.; Crippen, J.; Farrell, M.; Fittinghoff, D.; Herrmann, H. W.; Huang, H.; Khan, S.; Kong, C.; Kozioziemski, B. J.; Kyrala, G. A.; Ma, T.; Meezan, N. B.; Merrill, F.; Nikroo, A.; Peterson, R. R.; Rice, N.; Sater, J. D.; Shah, R. C.; Stadermann, M.; Volegov, P.; Walters, C.; Wilson, D. C.
2018-05-01
Liquid layer implosions using the "wetted foam" technique, where the liquid fuel is wicked into a supporting foam, have been recently conducted on the National Ignition Facility for the first time [Olson et al., Phys. Rev. Lett. 117, 245001 (2016)]. We report on a series of wetted foam implosions where the convergence ratio was varied between 12 and 20. Reduced nuclear performance is observed as convergence ratio increases. 2-D radiation-hydrodynamics simulations accurately capture the performance at convergence ratios (CR) ~ 12, but we observe a significant discrepancy at CR ~ 20. This may be due to suppressed hot-spot formation or an anomalous energy loss mechanism.
Yokoyama, Ken Daigoro; Pollock, David D
2012-01-01
Functional modification of regulatory proteins can affect hundreds of genes throughout the genome, and is therefore thought to be almost universally deleterious. This belief, however, has recently been challenged. A potential example comes from transcription factor SP1, for which statistical evidence indicates that motif preferences were altered in eutherian mammals. Here, we set out to discover possible structural and theoretical explanations, evaluate the role of selection in SP1 evolution, and discover effects on coregulatory proteins. We show that SP1 motif preferences were convergently altered in birds as well as mammals, inducing coevolutionary changes in over 800 regulatory regions. Structural and phylogenic evidence implicates a single causative amino acid replacement at the same SP1 position along both lineages. Furthermore, paralogs SP3 and SP4, which coregulate SP1 target genes through competitive binding to the same sites, have accumulated convergent replacements at the homologous position multiple times during eutherian and bird evolution, presumably to preserve competitive binding. To determine plausibility, we developed and implemented a simple model of transcription factor and binding site coevolution. This model predicts that, in contrast to prevailing beliefs, even small selective benefits per locus can drive concurrent fixation of transcription factor and binding site mutants under a broad range of conditions. Novel binding sites tend to arise de novo, rather than by mutation from ancestral sites, a prediction substantiated by SP1-binding site alignments. Thus, multiple lines of evidence indicate that selection has driven convergent evolution of transcription factors along with their binding sites and coregulatory proteins.
2015-01-01
Many problems studied via molecular dynamics require accurate estimates of various thermodynamic properties, such as the free energies of different states of a system, which in turn requires well-converged sampling of the ensemble of possible structures. Enhanced sampling techniques are often applied to provide faster convergence than is possible with traditional molecular dynamics simulations. Hamiltonian replica exchange molecular dynamics (H-REMD) is a particularly attractive method, as it allows the incorporation of a variety of enhanced sampling techniques through modifications to the various Hamiltonians. In this work, we study the enhanced sampling of the RNA tetranucleotide r(GACC) provided by H-REMD combined with accelerated molecular dynamics (aMD), where a boosting potential is applied to torsions, and compare this to the enhanced sampling provided by H-REMD in which torsion potential barrier heights are scaled down to lower force constants. We show that H-REMD and multidimensional REMD (M-REMD) combined with aMD does indeed enhance sampling for r(GACC), and that the addition of the temperature dimension in the M-REMD simulations is necessary to efficiently sample rare conformations. Interestingly, we find that the rate of convergence can be improved in a single H-REMD dimension by simply increasing the number of replicas from 8 to 24 without increasing the maximum level of bias. The results also indicate that factors beyond replica spacing, such as round trip times and time spent at each replica, must be considered in order to achieve optimal sampling efficiency. PMID:24625009
Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics.
Martínez, Enrique; Cawkwell, Marc J; Voter, Arthur F; Niklasson, Anders M N
2015-04-21
Extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that the simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sample processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.
NASA Astrophysics Data System (ADS)
van der Boon, A.; van Hinsbergen, D. J. J.; Rezaeian, M.; Gürer, D.; Honarmand, M.; Pastor-Galán, D.; Krijgsman, W.; Langereis, C. G.
2018-01-01
Since the late Eocene, convergence and subsequent collision between Arabia and Eurasia was accommodated both in the overriding Eurasian plate forming the Greater Caucasus orogen and the Iranian plateau, and by subduction and accretion of the Neotethys and Arabian margin forming the East Anatolian plateau and the Zagros. To quantify how much Arabia-Eurasia convergence was accommodated in the Greater Caucasus region, we here provide new paleomagnetic results from 97 volcanic sites (∼500 samples) in the Talysh Mountains of NW Iran, that show ∼15° net clockwise rotation relative to Eurasia since the Eocene. We apply a first-order kinematic restoration of the northward convex orocline that formed to the south of the Greater Caucasus, integrating our new data with previously published constraints on rotations of the Eastern Pontides and Lesser Caucasus. This suggests that north of the Talysh ∼120 km of convergence must have been accommodated. North of the Eastern Pontides and Lesser Caucasus this is significantly more: 200-280 km. Our reconstruction independently confirms previous Caucasus convergence estimates. Moreover, we show for the first time a sharp contrast of convergence between the Lesser Caucasus and the Talysh. This implies that the ancient Paleozoic-Mesozoic transform plate boundary, preserved between the Iranian and East-Anatolian plateaus, was likely reactivated as a right-lateral transform fault since late Eocene time.
NASA Astrophysics Data System (ADS)
Niroula, Sundar; Halder, Subhadeep; Ghosh, Subimal
2018-06-01
Real-time hydrologic forecasting requires a near-accurate initial condition of soil moisture; however, continuous monitoring of soil moisture is not operational in many regions, such as the Ganga basin, which extends across Nepal, India, and Bangladesh. Here, we examine the impacts of perturbations/errors in the initial soil moisture conditions on simulated soil moisture and streamflow in the Ganga basin, and their propagation, during the summer monsoon season (June to September). This provides information regarding the minimum duration of model simulation required for attaining model stability. We use the Variable Infiltration Capacity model for hydrological simulations after validation. Multiple hydrologic simulations are performed, each of 21 days, initialized on every 5th day of the monsoon season for deficit, surplus, and normal monsoon years. Each of these simulations is performed with the initial soil moisture condition obtained from long-term runs, along with positive and negative perturbations. The time required for the convergence of initial errors is obtained for all the cases. We find a quick convergence for the year with high rainfall as well as for the wet spells within a season. We further find high spatial variation in the time required for convergence; regions with high precipitation, such as the Lower Ganga basin, attain convergence at a faster rate. Furthermore, deeper soil layers need more time for convergence. Our analysis is the first attempt at understanding the sensitivity of hydrological simulations of the Ganga basin to initial soil moisture conditions. The results obtained here may be useful in understanding the spin-up requirements for operational hydrologic forecasts.
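The spin-up question can be caricatured with a leaky-bucket storage model: two runs start from perturbed initial soil moisture, are driven by identical rainfall, and converge faster under wet forcing because the drainage term is nonlinear in storage. This mirrors the quick convergence found for high-rainfall years; all parameters are illustrative and have nothing to do with the VIC model's actual formulation:

```python
def days_to_converge(rain, s0_a, s0_b, k=0.005, tol=1e-3, max_days=365):
    # Two storages with perturbed initial conditions, same daily rainfall.
    # Drainage ~ k*s^2: wetter storages drain faster, so the error between
    # the runs decays faster in wet conditions.
    sa, sb = s0_a, s0_b
    for day in range(1, max_days + 1):
        sa += rain - k * sa * sa
        sb += rain - k * sb * sb
        if abs(sa - sb) < tol:
            return day          # spin-up complete: initial error forgotten
    return max_days

wet = days_to_converge(8.0, s0_a=20.0, s0_b=40.0)   # surplus-monsoon analogue
dry = days_to_converge(1.0, s0_a=20.0, s0_b=40.0)   # deficit-monsoon analogue
print(wet, dry)
```

In a linear reservoir the error decay rate would be independent of rainfall; it is the state-dependent drainage that makes wet years forget their initial conditions sooner, which is the qualitative mechanism behind forcing-dependent spin-up times.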
Ask, Helga; Rognmo, Kamilla; Torvik, Fartein Ask; Røysamb, Espen; Tambs, Kristian
2012-05-01
Spouses tend to have similar lifestyles. We explored the degree to which spouse similarity in alcohol use, smoking, and physical exercise is caused by non-random mating or convergence. We used data collected for the Nord-Trøndelag Health Study from 1984 to 1986 and prospective registry information about when and with whom people entered marriage/cohabitation between 1970 and 2000. Our sample included 19,599 married/cohabitating couples and 1,551 future couples that were to marry/cohabitate in the 14-16 years following data collection. All couples were grouped according to the duration between data collection and entering into marriage/cohabitation. Age-adjusted polychoric spouse correlations were used as the dependent variables in non-linear segmented regression analysis; the independent variable was time. The results indicate that spouse concordance in lifestyle is due to both non-random mating and convergence. Non-random mating appeared to be strongest for smoking. Convergence in alcohol use and smoking was evident during the period prior to marriage/cohabitation, whereas convergence in exercise was evident throughout life. Reduced spouse similarity in smoking with relationship duration may reflect secular trends.
Boot, Nathalie; Baas, Matthijs; Mühlfeld, Elisabeth; de Dreu, Carsten K W; van Gaal, Simon
2017-09-01
Critical to creative cognition and performance is both the generation of multiple alternative solutions in response to open-ended problems (divergent thinking) and a series of cognitive operations that converges on the correct or best possible answer (convergent thinking). Although the neural underpinnings of divergent and convergent thinking are still poorly understood, several electroencephalography (EEG) studies point to differences in alpha-band oscillations between these thinking modes. We reason that, because most previous studies employed typical block designs, these pioneering findings may mainly reflect the more sustained aspects of creative processes that extend over longer time periods, and that still much is unknown about the faster-acting neural mechanisms that dissociate divergent from convergent thinking during idea generation. To this end, we developed a new event-related paradigm, in which we measured participants' tendency to implicitly follow a rule set by examples, versus breaking that rule, during the generation of novel names for specific categories (e.g., pasta, planets). This approach allowed us to compare the oscillatory dynamics of rule convergent and rule divergent idea generation and at the same time enabled us to measure spontaneous switching between these thinking modes on a trial-to-trial basis. We found that, relative to more systematic, rule convergent thinking, rule divergent thinking was associated with widespread decreases in delta band activity. Therefore, this study contributes to advancing our understanding of the neural underpinnings of creativity by addressing some methodological challenges that neuroscientific creativity research faces. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jammazi, Chaker
2009-03-05
The paper gives Lyapunov type sufficient conditions for partial finite-time and asymptotic stability in which some state variables converge to zero while the rest converge to constant values that possibly depend on the initial conditions. The paper then presents partially asymptotically stabilizing controllers for many nonlinear control systems for which continuous asymptotically stabilizing (in the usual sense) controllers are known not to exist.
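The finite-time part of such Lyapunov-type conditions can be sketched as follows (standard notation, not necessarily the paper's; in the partial-stability setting, V bounds only the substate that converges to zero):

```latex
% If a positive-definite C^1 function V satisfies along trajectories
\dot{V}(x(t)) \le -c\, V(x(t))^{\alpha}, \qquad c > 0, \quad 0 < \alpha < 1,
% then integrating the comparison equation \dot{v} = -c\, v^{\alpha} yields
V(x(t)) = 0 \quad \text{for all } t \ge T(x_0), \qquad
T(x_0) \le \frac{V(x_0)^{\,1-\alpha}}{c\,(1-\alpha)},
% so the state components bounded by V reach zero in finite time; with
% \alpha = 1 the same estimate gives only exponential (asymptotic) decay.
```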
Finite-time containment control of perturbed multi-agent systems based on sliding-mode control
NASA Astrophysics Data System (ADS)
Yu, Di; Ji, Xiang Yang
2018-01-01
Aiming at a faster convergence rate, this paper investigates the finite-time containment control problem for second-order multi-agent systems with norm-bounded non-linear perturbation. When the topology among the followers is strongly connected, a nonsingular fast terminal sliding-mode error is defined, a corresponding discontinuous control protocol is designed, and the appropriate value range of the control parameter is obtained by applying finite-time stability analysis, so that the followers converge to and move along the desired trajectories within the convex hull formed by the leaders in finite time. Furthermore, on the basis of the sliding-mode error defined, corresponding distributed continuous control protocols are investigated with fast exponential and double exponential reaching laws, so as to make the followers move to small neighbourhoods of their desired locations and remain within the dynamic convex hull formed by the leaders in finite time, achieving practical finite-time containment control. Meanwhile, we develop the faster control scheme by comparing the convergence rates of these two reaching laws. Simulation examples are given to verify the correctness of the theoretical results.
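Why a terminal reaching law yields finite-time convergence while a purely exponential law does not can be sketched numerically on a scalar sliding variable s with s' = -k1*s - k2*|s|^a*sign(s) (a hedged illustration with parameter values of our choosing, not the paper's protocol):

```python
import math

def reach_time(k1, k2, a, s0=5.0, dt=1e-4, t_max=10.0, tol=1e-6):
    """Euler-simulate s' = -k1*s - k2*|s|^a*sign(s); return time to |s| <= tol."""
    s, t = s0, 0.0
    while abs(s) > tol and t < t_max:
        s += dt * (-k1 * s - k2 * abs(s) ** a * math.copysign(1.0, s))
        t += dt
    return t

t_terminal = reach_time(k1=1.0, k2=1.0, a=0.5)  # fast terminal law
t_linear = reach_time(k1=1.0, k2=0.0, a=0.5)    # pure exponential law
```

The terminal term dominates near s = 0, so the sliding variable reaches the origin in finite time, bounded by ln(1 + k1*s0^(1-a)/k2) / (k1*(1-a)); the exponential law only decays asymptotically and does not reach the tolerance within the simulated horizon.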
Optimizer convergence and local minima errors and their clinical importance
NASA Astrophysics Data System (ADS)
Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.
2003-09-01
Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
Patel, Ravi G.; Desjardins, Olivier; Kong, Bo; ...
2017-09-01
Here, we present a verification study of three simulation techniques for fluid–particle flows, including an Euler–Lagrange approach (EL) inspired by Jackson's seminal work on fluidized particles, a quadrature-based moment method based on the anisotropic Gaussian closure (AG), and the traditional two-fluid model (TFM). We perform simulations of two problems: particles in frozen homogeneous isotropic turbulence (HIT) and cluster-induced turbulence (CIT). For verification, we evaluate various techniques for extracting statistics from EL and study the convergence properties of the three methods under grid refinement. The convergence is found to depend on the simulation method and on the problem, with CIT simulations posing fewer difficulties than HIT. Specifically, EL converges under refinement for both HIT and CIT, but statistics exhibit dependence on the postprocessing parameters. For CIT, AG produces similar results to EL. For HIT, converging both TFM and AG poses challenges. Overall, extracting converged, parameter-independent Eulerian statistics remains a challenge for all methods.
Genome-wide signatures of convergent evolution in echolocating mammals
Parker, Joe; Tsagkogeorga, Georgia; Cotton, James A.; Liu, Yuan; Provero, Paolo; Stupka, Elia; Rossiter, Stephen J.
2013-01-01
Evolution is typically thought to proceed through divergence of genes, proteins, and ultimately phenotypes [1-3]. However, similar traits might also evolve convergently in unrelated taxa due to similar selection pressures [4,5]. Adaptive phenotypic convergence is widespread in nature, and recent results from a handful of genes have suggested that this phenomenon is powerful enough to also drive recurrent evolution at the sequence level [6-9]. Where homoplasious substitutions do occur these have long been considered the result of neutral processes. However, recent studies have demonstrated that adaptive convergent sequence evolution can be detected in vertebrates using statistical methods that model parallel evolution [9,10], although the extent to which sequence convergence between genera occurs across genomes is unknown. Here we analyse genomic sequence data in mammals that have independently evolved echolocation and show for the first time that convergence is not a rare process restricted to a handful of loci but is instead widespread, continuously distributed and commonly driven by natural selection acting on a small number of sites per locus. Systematic analyses of convergent sequence evolution in 805,053 amino acids within 2,326 orthologous coding gene sequences compared across 22 mammals (including four new bat genomes) revealed signatures consistent with convergence in nearly 200 loci. Strong and significant support for convergence among bats and the dolphin was seen in numerous genes linked to hearing or deafness, consistent with an involvement in echolocation. Surprisingly we also found convergence in many genes linked to vision: the convergent signal of many sensory genes was robustly correlated with the strength of natural selection. This first attempt to detect genome-wide convergent sequence evolution across divergent taxa reveals the phenomenon to be much more pervasive than previously recognised. PMID:24005325
ERIC Educational Resources Information Center
Nelson, Jason M.; Canivez, Gary L.
2012-01-01
Empirical examination of the Reynolds Intellectual Assessment Scales (RIAS; C. R. Reynolds & R. W. Kamphaus, 2003a) has produced mixed results regarding its internal structure and convergent validity. Various aspects of validity of RIAS scores with a sample (N = 521) of adolescents and adults seeking psychological evaluations at a university-based…
The solution of the point kinetics equations via converged accelerated Taylor series (CATS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.; Picca, P.; Previti, A.
This paper deals with finding accurate solutions of the point kinetics equations, including non-linear feedback, in a fast, efficient and straightforward way. A truncated Taylor series is coupled to continuous analytical continuation to provide the recurrence relations to solve the ordinary differential equations of point kinetics. Non-linear (Wynn-epsilon) and linear (Romberg) convergence accelerations are employed to provide highly accurate results for the evaluation of Taylor series expansions and extrapolated values of neutron and precursor densities at desired edits. The proposed Converged Accelerated Taylor Series, or CATS, algorithm automatically performs successive mesh refinements until the desired accuracy is obtained, making use of the intermediate results for converged initial values at each interval. Numerical performance is evaluated using case studies available from the literature. Nearly perfect agreement is found with the literature results generally considered most accurate. Benchmark quality results are reported for several cases of interest including step, ramp, zigzag and sinusoidal prescribed insertions and insertions with adiabatic Doppler feedback. A larger than usual (9) number of digits is included to encourage honest benchmarking. The benchmark is then applied to the enhanced piecewise constant algorithm (EPCA) currently being developed by the second author. (authors)
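The Wynn-epsilon acceleration mentioned above can be sketched in a few lines (a generic illustration of the algorithm on a slowly converging alternating series, not the CATS implementation itself):

```python
import math

def wynn_epsilon(s):
    """Wynn's epsilon algorithm: returns the highest-order even-column
    estimate of the limit of a sequence of partial sums s[0..n-1]."""
    n = len(s)
    prev2 = [0.0] * (n + 1)  # epsilon_{-1} column (all zeros)
    prev1 = list(s)          # epsilon_0 column (the partial sums)
    best = s[-1]
    for k in range(1, n):
        # epsilon_{k}^{(i)} = epsilon_{k-2}^{(i+1)} + 1/(eps_{k-1}^{(i+1)} - eps_{k-1}^{(i)})
        cur = [prev2[i + 1] + 1.0 / (prev1[i + 1] - prev1[i])
               for i in range(n - k)]
        if k % 2 == 0:       # even columns approximate the limit
            best = cur[-1]
        prev2, prev1 = prev1, cur
    return best

# ln(2) = 1 - 1/2 + 1/3 - ... converges slowly; epsilon acceleration
# recovers many digits from just 11 partial sums.
partial, total = [], 0.0
for k in range(1, 12):
    total += (-1) ** (k + 1) / k
    partial.append(total)
est = wynn_epsilon(partial)
```

The raw 11-term partial sum is off by roughly 0.04, while the accelerated estimate agrees with ln(2) to better than six digits.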
Solving the Fluid Pressure Poisson Equation Using Multigrid-Evaluation and Improvements.
Dick, Christian; Rogowsky, Marcus; Westermann, Rudiger
2016-11-01
In many numerical simulations of fluids governed by the incompressible Navier-Stokes equations, the pressure Poisson equation needs to be solved to enforce mass conservation. Multigrid solvers show excellent convergence in simple scenarios, yet they can converge slowly in domains where physically separated regions are combined at coarser scales. Moreover, existing multigrid solvers are tailored to specific discretizations of the pressure Poisson equation, and they cannot easily be adapted to other discretizations. In this paper we analyze the convergence properties of existing multigrid solvers for the pressure Poisson equation in different simulation domains, and we show how to further improve the multigrid convergence rate by using a graph-based extension to determine the coarse grid hierarchy. The proposed multigrid solver is generic in that it can be applied to different kinds of discretizations of the pressure Poisson equation, by using solely the specification of the simulation domain and pre-assembled computational stencils. We analyze the proposed solver in combination with finite difference and finite volume discretizations of the pressure Poisson equation. Our evaluations show that, despite the common assumption, multigrid schemes can exploit their potential even in the most complicated simulation scenarios, yet this behavior is obtained at the price of higher memory consumption.
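A minimal V-cycle for the 1D Poisson problem illustrates the multigrid components discussed above (weighted-Jacobi smoothing, full-weighting restriction, linear prolongation). This is a textbook sketch, not the paper's graph-based solver:

```python
import numpy as np

def smooth(u, f, h, iters=3, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f with Dirichlet boundaries."""
    for _ in range(iters):
        u[1:-1] = ((1 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]))
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction onto a grid with half the intervals."""
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec, n_fine):
    """Linear-interpolation prolongation back to the fine grid."""
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    if len(u) == 3:                  # coarsest grid: solve directly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    smooth(u, f, h)                  # pre-smoothing
    rc = restrict(residual(u, f, h))
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    u += prolong(ec, len(u))         # coarse-grid correction
    smooth(u, f, h)                  # post-smoothing
    return u

# Solve -u'' = pi^2 sin(pi x) on (0,1), whose exact solution is sin(pi x).
n = 64
x = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
```

Each V-cycle reduces the algebraic error by a roughly grid-independent factor, which is the "excellent convergence in simple scenarios" the abstract refers to.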
Burns, G Leonard; Walsh, James A; Servera, Mateu; Lorenzo-Seva, Urbano; Cardo, Esther; Rodríguez-Fornells, Antoni
2013-01-01
Exploratory structural equation modeling (SEM) was applied to a multiple indicator (26 individual symptom ratings) by multitrait (ADHD-IN, ADHD-HI and ODD factors) by multiple source (mothers, fathers and teachers) model to test the invariance, convergent and discriminant validity of the Child and Adolescent Disruptive Behavior Inventory with 872 Thai adolescents and the ADHD Rating Scale-IV and ODD scale of the Disruptive Behavior Inventory with 1,749 Spanish children. Most of the individual ADHD/ODD symptoms showed convergent and discriminant validity with the loadings and thresholds being invariant over mothers, fathers and teachers in both samples (the three latent factor means were higher for parents than teachers). The ADHD-IN, ADHD-HI and ODD latent factors demonstrated convergent and discriminant validity between mothers and fathers within the two samples. Convergent and discriminant validity between parents and teachers for the three factors was either absent (Thai sample) or only partial (Spanish sample). The application of exploratory SEM to a multiple indicator by multitrait by multisource model should prove useful for the evaluation of the construct validity of the forthcoming DSM-V ADHD/ODD rating scales.
Evaluating data-driven causal inference techniques in noisy physical and ecological systems
NASA Astrophysics Data System (ADS)
Tennant, C.; Larsen, L.
2016-12-01
Causal inference from observational time series challenges traditional approaches for understanding processes and offers exciting opportunities to gain new understanding of complex systems where nonlinearity, delayed forcing, and emergent behavior are common. We present a formal evaluation of the performance of convergent cross-mapping (CCM) and transfer entropy (TE) for data-driven causal inference under real-world conditions. CCM is based on nonlinear state-space reconstruction, and causality is determined by the convergence of prediction skill with an increasing number of observations of the system. TE is the uncertainty reduction based on transition probabilities of a pair of time-lagged variables. With TE, causal inference is based on asymmetry in information flow between the variables. Observational data and numerical simulations from a number of classical physical and ecological systems: atmospheric convection (the Lorenz system), species competition (patch-tournaments), and long-term climate change (Vostok ice core) were used to evaluate the ability of CCM and TE to infer causal-relationships as data series become increasingly corrupted by observational (instrument-driven) or process (model-or -stochastic-driven) noise. While both techniques show promise for causal inference, TE appears to be applicable to a wider range of systems, especially when the data series are of sufficient length to reliably estimate transition probabilities of system components. Both techniques also show a clear effect of observational noise on causal inference. For example, CCM exhibits a negative logarithmic decline in prediction skill as the noise level of the system increases. Changes in TE strongly depend on noise type and which variable the noise was added to. The ability of CCM and TE to detect driving influences suggest that their application to physical and ecological systems could be transformative for understanding driving mechanisms as Earth systems undergo change.
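Transfer entropy as described here reduces to comparing transition probabilities with and without the candidate driver. A plug-in estimator for discrete series might look like the following (a sketch with a binary test system of our own; not the study's code):

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE_{X->Y} in bits (history length 1):
    sum over states of p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_y0x0 = c / pairs_yx[(y0, x0)]
        p_y1_given_y0 = pairs_yy[(y1, y0)] / singles[y0]
        te += p_joint * math.log2(p_y1_given_y0x0 / p_y1_given_y0)
    return te

# Asymmetry test: y copies x with a one-step delay, so information
# flows x -> y (about 1 bit) but not y -> x.
random.seed(0)
x = [random.randint(0, 1) for _ in range(2000)]
y = [0] + x[:-1]
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
```

The asymmetry te_xy >> te_yx is the signature used for causal inference; as the abstract notes, reliable estimates require series long enough to populate the transition-probability tables.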
Projection scheme for a reflected stochastic heat equation with additive noise
NASA Astrophysics Data System (ADS)
Higa, Arturo Kohatsu; Pettersson, Roger
2005-02-01
We consider a projection scheme as a numerical solution of a reflected stochastic heat equation driven by a space-time white noise. Convergence is obtained via a discrete contraction principle and known convergence results for numerical solutions of parabolic variational inequalities.
Convergence of damped inertial dynamics governed by regularized maximally monotone operators
NASA Astrophysics Data System (ADS)
Attouch, Hedy; Cabot, Alexandre
2018-06-01
In a Hilbert space setting, we study the asymptotic behavior, as time t goes to infinity, of the trajectories of a second-order differential equation governed by the Yosida regularization of a maximally monotone operator with time-varying positive index λ (t). The dissipative and convergence properties are attached to the presence of a viscous damping term with positive coefficient γ (t). A suitable tuning of the parameters γ (t) and λ (t) makes it possible to prove the weak convergence of the trajectories towards zeros of the operator. When the operator is the subdifferential of a closed convex proper function, we estimate the rate of convergence of the values. These results are in line with the recent articles by Attouch-Cabot [3], and Attouch-Peypouquet [8]. In this last paper, the authors considered the case γ (t) = α/t, which is naturally linked to Nesterov's accelerated method. We unify, and often improve the results already present in the literature.
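For reference, the objects involved can be written out as follows (standard monotone-operator notation, which we assume matches the paper's conventions):

```latex
% Resolvent and Yosida approximation of a maximally monotone operator A:
J_{\lambda A} = (I + \lambda A)^{-1}, \qquad
A_{\lambda} = \frac{1}{\lambda}\,\bigl(I - J_{\lambda A}\bigr),
% and the damped inertial dynamics studied take the form
\ddot{x}(t) + \gamma(t)\,\dot{x}(t) + A_{\lambda(t)}\bigl(x(t)\bigr) = 0.
% A_\lambda is single-valued and (1/\lambda)-Lipschitz with the same zeros
% as A, which is why, under a suitable tuning of gamma(t) and lambda(t),
% the trajectories can converge weakly to zeros of A.
```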
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction based on a fast-convergence iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction algorithm that minimizes a weighted least-squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing imaging dose.
We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
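A toy 1D analogue conveys the role of the TV term in such a cost function (a hedged sketch using gradient descent on a smoothed TV penalty with illustrative parameters; the paper's GPU-accelerated CBCT solver is far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant signal
b = truth + 0.1 * rng.standard_normal(100)           # noisy observation

# Minimize ||u - b||^2 + lam * sum_i sqrt(diff(u)_i^2 + eps) by gradient descent.
lam, eps, step = 0.3, 1e-2, 0.02
u = b.copy()
for _ in range(5000):
    d = np.diff(u)
    w = d / np.sqrt(d * d + eps)   # derivative of the smoothed absolute value
    tv_grad = np.zeros_like(u)
    tv_grad[:-1] -= w              # each difference term touches two samples
    tv_grad[1:] += w
    u -= step * (2.0 * (u - b) + lam * tv_grad)
```

The TV term suppresses noise in the flat regions while largely preserving the jump, which is why it helps mitigate noise and projection-data incompleteness in tomographic reconstruction.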
Establishment of a rotor model basis
NASA Technical Reports Server (NTRS)
Mcfarland, R. E.
1982-01-01
Radial-dimension computations in the RSRA's blade-element model are modified for both the acquisition of extensive baseline data and for real-time simulation use. The baseline data, which are for the evaluation of model changes, use very small increments and are of high quality. The modifications to the real-time simulation model are for accuracy improvement, especially when a minimal number of blade segments is required for real-time synchronization. An accurate technique for handling tip loss in discrete blade models is developed. The mathematical consistency and convergence properties of summation algorithms for blade forces and moments are examined and generalized integration coefficients are applied to equal-annuli midpoint spacing. Rotor conditions identified as 'constrained' and 'balanced' are used and the propagation of error is analyzed.
Belkić, Dzevad
2006-12-21
This study deals with the most challenging numerical aspect of solving the quantification problem in magnetic resonance spectroscopy (MRS). The primary goal is to investigate whether it is feasible to carry out a rigorous computation within finite arithmetic to reconstruct exactly all the machine-accurate input spectral parameters of every resonance from a synthesized noiseless time signal. We also consider simulated time signals embedded in random Gaussian distributed noise of a level comparable to the weakest resonances in the corresponding spectrum. The present choice for this high-resolution task in MRS is the fast Padé transform (FPT). All the sought spectral parameters (complex frequencies and amplitudes) can unequivocally be reconstructed from a given input time signal by using the FPT. Moreover, the present computations demonstrate that the FPT can achieve spectral convergence, which represents an exponential convergence rate as a function of the signal length for a fixed bandwidth. Such an extraordinary feature equips the FPT with exemplary high-resolution capabilities that are, in fact, theoretically unlimited. This is illustrated in the present study by the exact reconstruction (within machine accuracy) of all the spectral parameters from an input time signal comprised of 25 harmonics, i.e. complex damped exponentials, including those for tightly overlapped and nearly degenerate resonances whose chemical shifts differ by an exceedingly small fraction of only 10^(-11) ppm. Moreover, without exhausting even a quarter of the full signal length, the FPT is shown to retrieve exactly all the input spectral parameters defined with 12 digits of accuracy. Specifically, we demonstrate that when the FPT is close to the convergence region, an unprecedented phase transition occurs, since literally a few additional signal points are sufficient to reach the full 12-digit accuracy with the exponentially fast rate of convergence.
This is the critical proof-of-principle for the high-resolution power of the FPT for machine accurate input data. Furthermore, it is proven that the FPT is also a highly reliable method for quantifying noise-corrupted time signals reminiscent of those encoded via MRS in clinical neuro-diagnostics.
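The acceleration idea behind Padé-based methods can be illustrated generically: a [L/M] Padé approximant built from a handful of Taylor coefficients is closer to the underlying function than the truncated series itself. The sketch below (the standard linear-system construction on exp(x); not the FPT code, and it assumes L >= M-1) shows this at x = 1:

```python
import math
import numpy as np

def pade(c, L, M):
    """Coefficients (a, b) of the [L/M] Pade approximant
    sum(a_i x^i) / sum(b_j x^j), with b_0 = 1, from Taylor coefficients c."""
    # Denominator: for k = 1..M, c_{L+k} + sum_j b_j c_{L+k-j} = 0.
    A = np.array([[c[L + k - j] for j in range(1, M + 1)]
                  for k in range(1, M + 1)])
    rhs = -np.array([c[L + k] for k in range(1, M + 1)])
    b = [1.0] + list(np.linalg.solve(A, rhs))
    # Numerator: a_i = sum_{j<=min(i,M)} b_j c_{i-j}.
    a = [sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
         for i in range(L + 1)]
    return a, b

c = [1.0 / math.factorial(i) for i in range(5)]  # Taylor series of exp
a, b = pade(c, 2, 2)
pade_at_1 = sum(a) / sum(b)   # [2/2] approximant evaluated at x = 1
taylor_at_1 = sum(c)          # 4th-order partial sum at x = 1
```

Both use the same five coefficients, yet the rational form is markedly closer to e; the FPT similarly represents the MRS spectrum as a rational function of the signal's z-transform, which is the source of its fast convergence.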
ITS evaluation and assessment issues : EC discussion paper
DOT National Transportation Integrated Search
1997-05-22
This paper has summarised the role and importance of systematic assessment and evaluation : when considering investment in Road Transport Telematics. The CONVERGE project recommends the use of five broad evaluation themes detailed above, to which we ...
A new BP Fourier algorithm and its application in English teaching evaluation
NASA Astrophysics Data System (ADS)
Pei, Xuehui; Pei, Guixin
2017-08-01
BP neural network algorithms have wide adaptability and accuracy when used in complicated system evaluation, but calculation defects such as slow convergence have limited their practical application. This paper tries to speed up the convergence of the BP neural network algorithm with Fourier basis functions and presents a new BP Fourier algorithm for complicated system evaluation. First, the shortcomings and working principle of the BP algorithm are analyzed for subsequent targeted improvement. Second, the presented BP Fourier algorithm adopts Fourier basis functions to simplify the calculation structure, designs a new transfer function between the input and output layers, and provides theoretical analysis to prove the efficiency of the presented algorithm. Finally, the presented algorithm is used in evaluating university English teaching; the application results show that the BP Fourier algorithm performs better in calculation efficiency and evaluation accuracy and can be used to evaluate complicated systems in practice.
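The computational appeal of a Fourier basis can be sketched independently of the paper's specific network (a hedged illustration of our own: with fixed Fourier features, the model is linear in its weights, so fitting reduces to a fast least-squares solve instead of slow iterative backpropagation):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)  # target to fit

# Fixed Fourier feature map: constant, sin(2*pi*k*x), cos(2*pi*k*x).
K = 8
features = [np.ones_like(x)]
for k in range(1, K + 1):
    features += [np.sin(2 * np.pi * k * x), np.cos(2 * np.pi * k * x)]
Phi = np.stack(features, axis=1)          # design matrix, 200 x (2K+1)

# One least-squares solve replaces many gradient-descent epochs.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
```

Because the target lies in the span of the basis, the fit is exact to machine precision in a single solve, illustrating the convergence advantage over iterative BP training.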
Computation of steady nozzle flow by a time-dependent method
NASA Technical Reports Server (NTRS)
Cline, M. C.
1974-01-01
The equations of motion governing steady, inviscid flow are of a mixed type, that is, hyperbolic in the supersonic region and elliptic in the subsonic region. These mathematical difficulties may be removed by using the so-called time-dependent method, where the governing equations become hyperbolic everywhere. The steady-state solution may be obtained as the asymptotic solution for large time. The object of this research was to develop a production type computer program capable of solving converging, converging-diverging, and plug two-dimensional nozzle flows in computational times of 1 min or less on a CDC 6600 computer.
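The time-dependent method can be sketched on a minimal 1D analogue (our illustration, not the nozzle code, and parabolic rather than the hyperbolic Euler system for simplicity): the steady elliptic boundary-value problem u'' = 0, u(0)=0, u(1)=1 is recovered as the large-time limit of the unsteady equation u_t = u_xx.

```python
import numpy as np

n = 51
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt = 0.4 * dx * dx            # explicit stability requires dt <= dx^2 / 2
u = np.zeros(n)
u[-1] = 1.0                   # boundary conditions u(0) = 0, u(1) = 1

for step in range(20000):
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + dt / dx ** 2 * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    converged = np.max(np.abs(u_new - u)) < 1e-12
    u = u_new
    if converged:             # asymptotic (steady) state reached
        break
```

The marched solution settles onto the linear steady profile u = x, the solution of the elliptic problem, without ever having to treat the mixed-type steady equations directly.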
Emotional convergence between people over time.
Anderson, Cameron; Keltner, Dacher; John, Oliver P
2003-05-01
The authors propose that people in relationships become emotionally similar over time--as this similarity would help coordinate the thoughts and behaviors of the relationship partners, increase their mutual understanding, and foster their social cohesion. Using laboratory procedures to induce and assess emotional response, the authors found that dating partners (Study 1) and college roommates (Studies 2 and 3) became more similar in their emotional responses over the course of a year. Further, relationship partners with less power made more of the change necessary for convergence to occur. Consistent with the proposed benefits of emotional similarity, relationships whose partners were more emotionally similar were more cohesive and less likely to dissolve. Discussion focuses on implications of emotional convergence and on potential mechanisms.
Improvement of the AeroClipper system for cyclones monitoring
NASA Astrophysics Data System (ADS)
Vargas, André; Philippe, Duvel Jean
2016-07-01
The AeroClipper, developed by the French space agency (Centre National d'Études Spatiales, CNES), is a quasi-Lagrangian device drifting with the surface wind at about 20-30 m above the ocean surface. It is a new and original device for real-time, continuous observation of air-sea surface parameters in remote open-ocean regions. The device samples the variability of surface parameters, in particular under the convective systems toward which it is attracted. The AeroClipper is therefore an ideal instrument for monitoring Tropical Cyclones (TCs), into which it is likely to converge, providing original observations to evaluate and improve our current understanding and diagnostics of TCs as well as their representation in numerical models. In 2008, the AeroClipper demonstrated its capability to be captured by an Indian Ocean cyclone: two units converged, without damage, into the eye of cyclone Dora during the 2008 VASCO campaign. This paper presents the improvements of this balloon system for the international project 'the Year of the Maritime Continent'.
Performance assessment of multi-GNSS real-time PPP over Iran
NASA Astrophysics Data System (ADS)
Abdi, Naser; Ardalan, Alireza A.; Karimi, Roohollah; Rezvani, Mohammad-Hadi
2017-06-01
With the advent of multi-GNSS constellations, and thanks to the real-time precise products provided by the IGS, multi-GNSS Real-Time PPP has been of special interest to the geodetic community. These products are streamed in the form of RTCM-SSR through an NTRIP broadcaster. In this contribution, we assess the convergence time and positioning accuracy of Real-Time PPP over Iran by means of GPS, GPS + GLONASS, GPS + BeiDou, and GPS + GLONASS + BeiDou configurations. To this end, RINEX observations from six GNSS stations of the Iranian Permanent GNSS Network (IPGN) over sixteen consecutive days were processed via the BKG NTRIP Client (BNC, v 2.12). In the processing steps, the IGS-MGEX broadcast ephemerides (BRDM, provided by TUM/DLR) and the pre-saved CLK93 broadcast corrections stream (provided by CNES) were used as the known satellite information. The numerical results were compared against the station coordinates obtained from double-difference solutions by Bernese GPS Software v 5.0. Accordingly, we found that the GPS + BeiDou combination can reduce the convergence time by 27%, 16% and 10% and improve the positioning accuracy by 22%, 18% and 2% in the north, east and up components, respectively, as compared with GPS-only PPP. Additionally, in comparison to the GPS + GLONASS results, the GPS + GLONASS + BeiDou combination shortens the convergence time by 9%, 8% and 9% and enhances the positioning accuracy by 8%, 5% and 6% in the north, east and up components, respectively. Overall, thanks to the availability of the current BeiDou constellation observations, the considerable decrease in convergence time on one hand, and the improvement in positioning accuracy on the other, verify the efficiency of multi-GNSS PPP for real-time applications over Iran.
A Self Adaptive Differential Evolution Algorithm for Global Optimization
NASA Astrophysics Data System (ADS)
Kumar, Pravesh; Pant, Millie
This paper presents a new Differential Evolution algorithm based on the hybridization of adaptive control parameters and trigonometric mutation. First we propose a self-adaptive DE named ADE, in which the control parameters F and Cr are not fixed at constant values but are adapted iteratively. The proposed algorithm is further modified by applying trigonometric mutation to it, and the corresponding algorithm is named ATDE. The performance of ATDE is evaluated on a set of 8 benchmark functions, and the results are compared with the classical DE algorithm in terms of average fitness function value, number of function evaluations, convergence time and success rate. The numerical results show the competence of the proposed algorithm.
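The idea of re-drawing the control parameters each iteration rather than fixing them can be sketched as follows. This is an illustrative DE/rand/1/bin skeleton with assumed parameter ranges, not the ADE/ATDE recipe from the paper (the trigonometric mutation step is omitted):

```python
import random

def de_optimize(f, bounds, pop_size=20, max_iter=200, seed=1):
    """Sketch of differential evolution (DE/rand/1/bin) where the control
    parameters F and Cr are re-drawn each iteration instead of being fixed
    (the sampling ranges below are illustrative assumptions)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(max_iter):
        F = rng.uniform(0.4, 0.9)    # scale factor, re-sampled each iteration
        Cr = rng.uniform(0.1, 0.9)   # crossover rate, re-sampled each iteration
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < Cr or d == jrand) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:            # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Minimize the 3-D sphere function; the optimum is 0 at the origin.
x, fx = de_optimize(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

The greedy selection step makes the best fitness monotonically non-increasing, so the random F/Cr draws affect speed, not correctness.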
Variable convergence liquid layer implosions on the National Ignition Facility
Zylstra, A. B.; Yi, S. A.; Haines, B. M.; ...
2018-03-19
Liquid layer implosions using the “wetted foam” technique, where the liquid fuel is wicked into a supporting foam, have recently been conducted on the National Ignition Facility for the first time [Olson et al., Phys. Rev. Lett. 117, 245001 (2016)]. In this paper, we report on a series of wetted foam implosions in which the convergence ratio was varied between 12 and 20. Reduced nuclear performance is observed as the convergence ratio increases. 2-D radiation-hydrodynamics simulations accurately capture the performance at convergence ratios (CR) ~ 12, but we observe a significant discrepancy at CR ~ 20. This may be due to suppressed hot-spot formation or an anomalous energy loss mechanism.
First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems
2014-03-01
accuracy, with rapid convergence over each physical time step, typically fewer than five Newton iterations. ... However, we employ the Gauss-Seidel (GS) relaxation, which is also an O(N) method for the discretization arising from the hyperbolic advection-diffusion system ... advection-diffusion scheme. The linear dependency of the iterations on ... Table 1: boundary layer problem (convergence criterion: residuals < 10^-8).
Thermostating extended Lagrangian Born-Oppenheimer molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez, Enrique; Cawkwell, Marc J.; Voter, Arthur F.
2015-04-21
Here, extended Lagrangian Born-Oppenheimer molecular dynamics is developed and analyzed for applications in canonical (NVT) simulations. Three different approaches are considered: the Nosé and Andersen thermostats and Langevin dynamics. We have tested the temperature distribution under different conditions of self-consistent field (SCF) convergence and time step and compared the results to analytical predictions. We find that simulations based on the extended Lagrangian Born-Oppenheimer framework provide accurate canonical distributions even under approximate SCF convergence, often requiring only a single diagonalization per time step, whereas regular Born-Oppenheimer formulations exhibit unphysical fluctuations unless a sufficiently high degree of convergence is reached at each time step. The thermostated extended Lagrangian framework thus offers an accurate approach to sampling processes in the canonical ensemble at a fraction of the computational cost of regular Born-Oppenheimer molecular dynamics simulations.
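As background to the thermostats compared above, canonical (NVT) sampling with Langevin dynamics can be sketched with a minimal BAOAB-splitting integrator for a 1-D harmonic oscillator. The potential, parameters and unit mass are illustrative assumptions; this is not the extended Lagrangian scheme itself:

```python
import math, random

def baoab_langevin(steps=200000, dt=0.05, gamma=1.0, kT=1.0, seed=2):
    """Minimal Langevin thermostat (BAOAB splitting) for a unit-mass 1-D
    harmonic oscillator, V(x) = x^2 / 2.  Returns the average kinetic
    temperature, which should fluctuate around kT for correct canonical
    sampling.  Toy illustration only."""
    rng = random.Random(seed)
    c1 = math.exp(-gamma * dt)                 # friction decay per O-step
    c2 = math.sqrt(kT * (1.0 - c1 * c1))       # matching noise amplitude
    x, v = 1.0, 0.0
    ke_sum = 0.0
    for _ in range(steps):
        v -= 0.5 * dt * x                      # B: half kick (force = -x)
        x += 0.5 * dt * v                      # A: half drift
        v = c1 * v + c2 * rng.gauss(0.0, 1.0)  # O: friction plus noise
        x += 0.5 * dt * v                      # A: half drift
        v -= 0.5 * dt * x                      # B: half kick
        ke_sum += 0.5 * v * v
    return 2.0 * ke_sum / steps                # kinetic temperature estimate

T_est = baoab_langevin()  # should be close to kT = 1
```

The exact exponential treatment of the O-step is what keeps the sampled temperature accurate even at fairly large time steps.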
Kasper, Joseph M; Williams-Young, David B; Vecharynski, Eugene; Yang, Chao; Li, Xiaosong
2018-04-10
The time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) equations allow one to probe electronic resonances of a system quickly and inexpensively. However, the iterative solution of the eigenvalue problem can be challenging or impossible to converge using standard methods such as the Davidson algorithm in spectrally dense regions in the interior of the spectrum, as are common in X-ray absorption spectroscopy (XAS). More robust solvers, such as the generalized preconditioned locally harmonic residual (GPLHR) method, can alleviate this problem, but at the expense of higher average computational cost. A hybrid method is proposed which adapts to the problem in order to maximize computational performance while providing the superior convergence of GPLHR. In addition, a modification to the GPLHR algorithm is proposed to adaptively choose the shift parameter to enforce convergence of states above a predefined energy threshold.
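For context, targeting an eigenvalue in the interior of the spectrum near a chosen shift, which is the difficulty the hybrid solver addresses, can be illustrated with textbook shift-and-invert inverse iteration. This is a generic sketch, not the Davidson or GPLHR algorithm:

```python
import numpy as np

def shift_invert_eig(A, sigma, iters=100, seed=0):
    """Shift-and-invert inverse iteration: converges to the eigenpair of
    symmetric A whose eigenvalue lies nearest the shift sigma -- the
    standard textbook way to target interior eigenvalues (illustrative
    sketch, not the GPLHR method from the record)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    M = A - sigma * np.eye(n)
    for _ in range(iters):
        v = np.linalg.solve(M, v)      # apply (A - sigma*I)^{-1}
        v /= np.linalg.norm(v)
    lam = v @ A @ v                    # Rayleigh quotient
    return lam, v

A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
lam, v = shift_invert_eig(A, sigma=3.2)   # nearest eigenvalue is 3.0
```

Each iteration amplifies the component nearest sigma by the ratio of shifted eigenvalue magnitudes, which is why interior targeting works even where unshifted power-type methods fail.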
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). 
Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
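The image-space EM sub-problem that the nested scheme iterates several times per tomographic update is essentially Richardson-Lucy deconvolution. A minimal 1-D sketch of that sub-problem (illustrative only; not the authors' reconstruction code or the HRRT data pipeline):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Image-space EM (Richardson-Lucy) deconvolution in 1-D: the kind of
    image-based resolution-model update that the record proposes running
    several times per tomographic EM iteration (toy sketch)."""
    est = np.full_like(blurred, blurred.mean())   # flat positive start
    psf_flip = psf[::-1]                          # adjoint of the blur
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

psf = np.array([0.25, 0.5, 0.25])                 # symmetric blur kernel
truth = np.zeros(32); truth[10] = 1.0; truth[20] = 2.0
blurred = np.convolve(truth, psf, mode="same")
recovered = richardson_lucy(blurred, psf)         # spikes re-sharpen
```

Because each update is cheap relative to a forward/back-projection, repeating this inner loop adds little to the overall reconstruction time, which is the economics the abstract describes.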
Horan, Lindsay A; Ticho, Benjamin H; Khammar, Alexander J; Allen, Megan S; Shah, Birva A
2015-01-01
The Convergence Insufficiency Symptom Survey (CISS) is a questionnaire used as an outcome measure in the treatment of convergence insufficiency. The current prospective randomized trial evaluates the diagnostic specificity of the CISS. Surveys were completed by 118 adolescent patients who presented for routine eye examinations. Scores were compared between patients who could be classified as having convergence insufficiency (CI) or normal binocular vision (NBV). In addition, a comparison was done between self- and practitioner-administered CISS scores within these groups. The mean CISS score did not differ significantly between NBV patients (14.1±11.3, range of 0 to 43) and CI patients (12.3±6.7, range of 3 to 28); P=0.32. Mean CISS scores were lower when practitioner-administered (11.4±7.9) than when self-administered (16.3±11.4); P=0.007. CISS scores tend to be higher when self- vs. practitioner-administered. This study suggests that the CISS questionnaire is not specific for convergence insufficiency. © 2015 Board of Regents of the University of Wisconsin System, American Orthoptic Journal, Volume 65, 2015, ISSN 0065-955X, E-ISSN 1553-4448.
Spiral bacterial foraging optimization method: Algorithm, evaluation and convergence analysis
NASA Astrophysics Data System (ADS)
Kasaiezadeh, Alireza; Khajepour, Amir; Waslander, Steven L.
2014-04-01
A biologically-inspired algorithm called Spiral Bacterial Foraging Optimization (SBFO) is investigated in this article. SBFO, previously proposed by the same authors, is a multi-agent, gradient-based algorithm that minimizes both the main objective function (local cost) and the distance between each agent and a temporary central point (global cost). A random jump is included normal to the connecting line of each agent to the central point, which produces a vortex around the temporary central point. This random jump is also suitable to cope with premature convergence, which is a feature of swarm-based optimization methods. The most important advantages of this algorithm are as follows: First, this algorithm involves a stochastic type of search with a deterministic convergence. Second, as gradient-based methods are employed, faster convergence is demonstrated over GA, DE, BFO, etc. Third, the algorithm can be implemented in a parallel fashion in order to decentralize large-scale computation. Fourth, the algorithm has a limited number of tunable parameters, and finally SBFO has a strong certainty of convergence which is rare in existing global optimization algorithms. A detailed convergence analysis of SBFO for continuously differentiable objective functions has also been investigated in this article.
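The mechanics described above, a gradient step on the local cost, attraction toward a temporary central point, and a random jump normal to the connecting line, can be sketched loosely as follows. All step sizes, the decay schedule, and the choice of the best agent as the central point are illustrative assumptions, not the authors' SBFO implementation:

```python
import math, random

def sbfo_sketch(f, grad, agents, steps=400, lr=0.05, pull=0.1, jump=0.2, seed=3):
    """Loose 2-D sketch of the spiral-foraging idea: each agent combines a
    gradient step on the local cost, attraction toward a temporary central
    point (here simply the best agent), and a random jump normal to its
    line to that point, producing a vortex-like search whose jumps shrink
    over time.  Parameters are illustrative, not from the article."""
    rng = random.Random(seed)
    for k in range(steps):
        cx, cy = min(agents, key=f)            # temporary central point
        decay = 1.0 / (1.0 + 0.02 * k)         # shrink random jumps over time
        for a in agents:
            gx, gy = grad(a)
            dx, dy = cx - a[0], cy - a[1]      # attraction toward the center
            norm = math.hypot(dx, dy) or 1.0
            nx, ny = -dy / norm, dx / norm     # unit normal to that line
            r = rng.gauss(0.0, jump * decay)   # vortex-inducing jump
            a[0] += -lr * gx + pull * dx + r * nx
            a[1] += -lr * gy + pull * dy + r * ny
    return min(agents, key=f)

# Toy quadratic cost with its minimum at (1, -2).
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
grad = lambda p: (2.0 * (p[0] - 1.0), 2.0 * (p[1] + 2.0))
start = random.Random(7)
agents = [[start.uniform(-5, 5), start.uniform(-5, 5)] for _ in range(10)]
best = sbfo_sketch(f, grad, agents)
```

The decaying jump magnitude is one simple way to reconcile stochastic exploration early on with deterministic convergence later, which is the combination the abstract highlights.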
On High-Order Radiation Boundary Conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1995-01-01
In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Pade approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.
Yusuf, O B; Bamgboye, E A; Afolabi, R F; Shodimu, M A
2014-09-01
Logistic regression models are widely used in health research for descriptive and predictive purposes. Unfortunately, researchers are sometimes not aware that the underlying principles of the technique have failed when the maximum likelihood algorithm does not converge. Young researchers, particularly postgraduate students, may not know why the separation problem, whether quasi- or complete, occurs, how to identify it and how to fix it. This study was designed to critically evaluate convergence issues in articles employing logistic regression analysis published in the African Journal of Medicine and Medical Sciences between 2004 and 2013. Problems of quasi- or complete separation were described and illustrated with the National Demographic and Health Survey dataset. A critical evaluation of articles that employed logistic regression was conducted. A total of 581 articles were reviewed, of which 40 (6.9%) used binary logistic regression. Twenty-four (60.0%) stated the use of a logistic regression model in the methodology, while none of the articles assessed model fit. Only 3 (12.5%) properly described the procedures. Of the 40 that used the logistic regression model, the problem of convergence occurred in 6 (15.0%) of the articles. Logistic regression tended to be poorly reported in studies published between 2004 and 2013. Our findings showed that the procedure may not be well understood by researchers, since very few described the process in their reports, and many may be totally unaware of the problem of convergence or how to deal with it.
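The separation problem described above is easy to demonstrate: on completely separated data the log-likelihood has no finite maximum, so the estimated slope grows without bound instead of converging. A toy illustration with synthetic data (not the authors' survey data):

```python
import math

def logit_fit(xs, ys, steps, lr=0.5):
    """Fit a one-variable, no-intercept logistic model by gradient ascent
    and return the slope after `steps` updates.  On completely separated
    data the likelihood has no maximum, so the slope keeps growing; on
    overlapping data it stabilizes (toy illustration of separation)."""
    b = 0.0
    for _ in range(steps):
        g = sum((y - 1.0 / (1.0 + math.exp(-b * x))) * x
                for x, y in zip(xs, ys))       # score (log-lik gradient)
        b += lr * g / len(xs)
    return b

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
sep = [0, 0, 0, 1, 1, 1]          # complete separation: y = 1 iff x > 0
mix = [0, 1, 0, 1, 0, 1]          # overlapping outcomes: a maximum exists

b_sep_1k, b_sep_10k = logit_fit(xs, sep, 1000), logit_fit(xs, sep, 10000)
b_mix_1k, b_mix_10k = logit_fit(xs, mix, 1000), logit_fit(xs, mix, 10000)
# b_sep keeps drifting upward with more iterations; b_mix is stable.
```

The same divergence is what makes packaged IRLS routines emit "did not converge" warnings or absurdly large coefficients and standard errors, which is the symptom the study found researchers overlooking.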
[Efficacy of surgery on congenital nystagmus with convergence damping].
Wang, Yuan; Wu, Qian; Bai, Dayong; Cao, Wenhong; Cui, Yanhui; Fan, Yunwei; Hu, Shoulong; Yu, Gang
2015-11-01
To evaluate the efficacy of surgery in the treatment of congenital nystagmus with convergence damping. Retrospective and comparative case series. Eight patients diagnosed with congenital nystagmus with convergence damping at Beijing Children's Hospital between September 2010 and September 2012 were enrolled in this study. The ages were 9.5 (12, 6) years, and follow-up was 9 (24, 6) months. All patients received prism-induced convergence and the same surgery of bimedial rectus recession and bilateral rectus tenotomy. The best corrected visual acuity, the range of fusion and the nystagmus waveforms were analyzed before and after surgery. The range of fusion was -3.75±1.83° to +19.38±3.16° before surgery and -3.88±1.55° to +19.00±3.02° after surgery; there was no significant difference (t=0.24, P=0.82). The binocular visual acuity increased from 0.21±0.15 without convergence to 0.28±0.18 using convergence; there was a significant difference (t=-4.43, P=0.00). The visual acuity was 0.32±0.20 after surgery, significantly different from that before surgery without convergence (t=-5.29, P=0.00), but not significantly different from that before surgery using convergence (t=-2.12, P=0.07). Patients had significant improvements in the frequency (t=3.28, 3.02, P<0.05) and intensity of the nystagmus waveforms when using convergence and postoperatively (t=3.27, 3.48; P<0.05), but there was no significant improvement in the amplitude of the waveforms (t=1.31, 1.57, 0.31, P>0.05). Surgery for congenital nystagmus with convergence damping can meet expectations for ocular motor and visual outcomes. The range of fusion should be wide enough, and the effect of convergence on the frequency is greater than that on the amplitude.
Shiota, T; Jones, M; Yamada, I; Heinrich, R S; Ishii, M; Sinclair, B; Holcomb, S; Yoganathan, A P; Sahn, D J
1996-02-01
The aim of the present study was to evaluate dynamic changes in aortic regurgitant (AR) orifice area with the use of calibrated electromagnetic (EM) flowmeters and to validate a color Doppler flow convergence (FC) method for evaluating effective AR orifice area and regurgitant volume. In 6 sheep, 8 to 20 weeks after surgically induced AR, 22 hemodynamically different states were studied. Instantaneous regurgitant flow rates were obtained by aortic and pulmonary EM flowmeters balanced against each other. Instantaneous AR orifice areas were determined by dividing these actual AR flow rates by the corresponding continuous wave velocities (over 25 to 40 points during each diastole) matched for each steady state. Echo studies were performed to obtain maximal aliasing distances of the FC in a low range (0.20 to 0.32 m/s) and a high range (0.70 to 0.89 m/s) of aliasing velocities; the corresponding maximal AR flow rates were calculated using the hemispheric flow convergence assumption for the FC isovelocity surface. AR orifice areas were derived by dividing the maximal flow rates by the maximal continuous wave Doppler velocities. AR orifice sizes obtained with the use of EM flowmeters showed little change during diastole. Maximal and time-averaged AR orifice areas during diastole obtained by EM flowmeters ranged from 0.06 to 0.44 cm2 (mean, 0.24 +/- 0.11 cm2) and from 0.05 to 0.43 cm2 (mean, 0.21 +/- 0.06 cm2), respectively. Maximal AR orifice areas by FC using low aliasing velocities overestimated reference EM orifice areas; however, at high AV, FC predicted the reference areas more reliably (0.25 +/- 0.16 cm2, r = .82, difference = 0.04 +/- 0.07 cm2). The product of the maximal orifice area obtained by the FC method using high AV and the velocity time integral of the regurgitant orifice velocity showed good agreement with regurgitant volumes per beat (r = .81, difference = 0.9 +/- 7.9 mL/beat). 
This study, using strictly quantified AR volume, demonstrated little change in AR orifice size during diastole. When high aliasing velocities are chosen, the FC method can be useful for determining effective AR orifice size and regurgitant volume.
Sreenivasan, Vidhyapriya; Bobier, William R
2014-07-01
Convergence insufficiency (CI) is a developmental visual anomaly defined clinically by a reduced near point of convergence and a reduced capacity to view through base-out prisms (fusional convergence), coupled with asthenopic symptoms, typically blur and diplopia. Experimental studies show reduced vergence parameters and tonic adaptation. Based upon current models of accommodation and vergence, we hypothesize that the reduced vergence adaptation in CI leads to excessive amounts of convergence accommodation (CA). Eleven CI participants (mean age=17.4±2.3 years) were recruited with a reduced capacity to view through increasing magnitudes of base-out (BO) prisms (mean fusional convergence at 40 cm=12±0.9Δ). Testing followed our previous experimental design for (n=11) binocularly normal adults. Binocular fixation of a difference-of-Gaussians (DoG) target (0.2 cpd) elicited CA responses during vergence adaptation to a 12Δ BO prism. Vergence and CA responses were obtained at 3 min intervals over a 15 min period, and time courses were quantified using exponential decay functions. Results were compared to previously published data on eleven binocular normals. Eight participants completed the study. CIs showed a significantly reduced magnitude of vergence adaptation (CI: 2.9Δ vs. normals: 6.6Δ; p=0.01) and CA reduction (CI=0.21 D, normals=0.55 D; p=0.03). However, the decay time constants for adaptation and CA responses were not significantly different. CA changes were not confounded by changes in tonic accommodation (change in TA=0.01±0.2 D; p=0.8). The reduced magnitude of vergence adaptation found in CI patients, resulting in higher levels of CA, may potentially explain their clinical findings of reduced positive fusional vergence (PFV) and the common symptom of blur. Copyright © 2014 Elsevier B.V. All rights reserved.
Sun, Rui; Dama, James F; Tan, Jeffrey S; Rose, John P; Voth, Gregory A
2016-10-11
Metadynamics is an important enhanced sampling technique in molecular dynamics simulation to efficiently explore potential energy surfaces. The recently developed transition-tempered metadynamics (TTMetaD) has been proven to converge asymptotically without sacrificing exploration of the collective variable space in the early stages of simulations, unlike other convergent metadynamics (MetaD) methods. We have applied TTMetaD to study the permeation of drug-like molecules through a lipid bilayer to further investigate the usefulness of this method as applied to problems of relevance to medicinal chemistry. First, ethanol permeation through a lipid bilayer was studied to compare TTMetaD with nontempered metadynamics and well-tempered metadynamics. The bias energies computed from various metadynamics simulations were compared to the potential of mean force calculated from umbrella sampling. Though all of the MetaD simulations agree with one another asymptotically, TTMetaD is able to predict the most accurate and reliable estimate of the potential of mean force for permeation in the early stages of the simulations and is robust to the choice of required additional parameters. We also show that using multiple randomly initialized replicas allows convergence analysis and also provides an efficient means to converge the simulations in shorter wall times and, more unexpectedly, in shorter CPU times; splitting the CPU time between multiple replicas appears to lead to less overall error. After validating the method, we studied the permeation of a more complicated drug-like molecule, trimethoprim. Three sets of TTMetaD simulations with different choices of collective variables were carried out, and all converged within feasible simulation time. The minimum free energy paths showed that TTMetaD was able to predict almost identical permeation mechanisms in each case despite significantly different definitions of collective variables.
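The hill-deposition mechanics underlying all of the metadynamics variants discussed above can be sketched in one dimension. This shows plain well-tempered MetaD on a toy double well driven by overdamped Langevin dynamics, not TTMetaD or the membrane-permeation setup, and every parameter is an illustrative assumption:

```python
import math, random

def wt_metad_1d(steps=20000, dt=1e-3, kT=1.0, w0=0.08, sigma=0.25,
                dT=4.0, stride=100, seed=4):
    """Generic well-tempered metadynamics sketch in 1-D: overdamped
    Langevin dynamics on the double well V(x) = 2(x^2 - 1)^2, with
    Gaussian hills whose height decays as bias accumulates.  Plain
    well-tempered MetaD for illustration only, not the transition-
    tempered (TTMetaD) variant from the record."""
    rng = random.Random(seed)
    centers, heights = [], []
    x, prev, crossings = -1.0, -1, 0
    for i in range(steps):
        b, fb = 0.0, 0.0
        for c, h in zip(centers, heights):
            g = h * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
            b += g                              # accumulated bias V_b(x)
            fb += g * (x - c) / sigma ** 2      # bias force, -dV_b/dx
        if i % stride == 0:                     # deposit a tempered hill
            heights.append(w0 * math.exp(-b / dT))
            centers.append(x)
        f = -8.0 * x * (x * x - 1.0) + fb       # -dV/dx plus bias force
        x += dt * f + math.sqrt(2.0 * kT * dt) * rng.gauss(0.0, 1.0)
        side = 1 if x > 0.0 else -1
        if side != prev:                        # count barrier crossings
            crossings, prev = crossings + 1, side
    return crossings, heights

crossings, heights = wt_metad_1d()  # hills shrink as the bias fills the well
```

The tempering factor exp(-V_b/dT) is what makes the deposited bias converge instead of growing forever; the transition-tempered variant in the record instead keys the tempering to transitions between basins.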
Trends and determinants of weight gains among OECD countries: an ecological study.
Nghiem, S; Vu, X-B; Barnett, A
2018-06-01
Obesity has become a global issue, with abundant evidence indicating that the prevalence of obesity in many nations has increased over time. The literature also reports a strong association between obesity and economic development, but whether obesity growth rates converge over time has not been examined. We propose a conceptual framework and conduct an ecological analysis of the relationship between economic development and weight gain. We also test the hypothesis that weight gain converges among countries over time and examine determinants of weight gains. This is a longitudinal study of 34 Organisation for Economic Co-operation and Development (OECD) countries in the years 1980-2008 using publicly available data. We apply a dynamic economic growth model to test the hypothesis that the rate of weight gain across countries may converge over time. We also investigate the determinants of weight gains using a longitudinal regression tree analysis. We do not find evidence that the growth rates of body weight converged across all countries. However, there were groups of countries in which the growth rates of body weight converge, with five groups for males and seven for females. The predicted growth rates of body weight peak when gross domestic product (GDP) per capita reaches US$47,000 for males and US$37,000 for females in OECD countries. National levels of consumption of sugar, fat and alcohol were the most important contributors to national weight gains. National weight gains follow an inverse U-shaped curve with economic development. Excessive calorie intake is the main contributor to weight gains. Copyright © 2018 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
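A common way to operationalize the convergence hypothesis is a "beta-convergence" regression of growth rates on initial levels: a negative slope indicates that units starting lower grow faster and so catch up. A toy sketch with synthetic numbers (not the paper's dynamic panel model or the OECD data):

```python
def beta_convergence(initial, growth):
    """Toy beta-convergence check: regress growth rates on initial levels
    by ordinary least squares and return the slope.  A negative slope
    suggests convergence (illustrative sketch, not the paper's dynamic
    panel estimator)."""
    n = len(initial)
    mx = sum(initial) / n
    my = sum(growth) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(initial, growth))
    var = sum((x - mx) ** 2 for x in initial)
    return cov / var          # OLS slope of growth on initial level

# Synthetic example: units with lower initial levels gain faster.
initial = [60.0, 65.0, 70.0, 75.0, 80.0, 85.0]
growth = [2.0, 1.7, 1.5, 1.1, 0.9, 0.6]
slope = beta_convergence(initial, growth)   # negative => convergence
```

The paper's finding of "club convergence" (five male and seven female groups) corresponds to this slope being negative within groups of countries but not across the full sample.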
Performance Ratings: Designs for Evaluating Their Validity and Accuracy.
1986-07-01
ratees with substantial validity and with little bias due to the method for rating. Convergent validity and discriminant validity account for approximately ... The expanded research design suggests that purpose for the ratings has little influence on the multitrait-multimethod properties of the ratings ... Convergent and discriminant validity again account for substantial differences in the ratings of performance. Little method bias is present; both methods of
Four-level conservative finite-difference schemes for Boussinesq paradigm equation
NASA Astrophysics Data System (ADS)
Kolkovska, N.
2013-10-01
In this paper a two-parameter family of four-level conservative finite-difference schemes is constructed for the multidimensional Boussinesq paradigm equation. The schemes are explicit in the sense that no inner iterations are needed to evaluate the numerical solution. The preservation of the discrete energy with this method is proved. The schemes have been numerically tested on a one-soliton propagation model and a two-soliton interaction model. The numerical experiments demonstrate that the proposed family of schemes has second order of convergence in the space and time steps in the discrete maximal norm.
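A claimed second order of convergence is the kind of property one verifies with a grid-refinement test: halve the step, compare errors, and estimate the observed order p = log(e_h / e_{h/2}) / log 2. A generic sketch on a simple derivative approximation (not the Boussinesq solver itself):

```python
import math

def observed_order(approx, exact, h):
    """Estimate the observed order of accuracy from errors at step sizes
    h and h/2: p = log(e_h / e_{h/2}) / log 2 -- the standard refinement
    test for verifying a scheme's claimed convergence order."""
    e1 = abs(approx(h) - exact)
    e2 = abs(approx(h / 2) - exact)
    return math.log(e1 / e2) / math.log(2)

# Second-order central difference for f'(x) at x = 1 with f = sin.
d_central = lambda h: (math.sin(1 + h) - math.sin(1 - h)) / (2 * h)
p = observed_order(d_central, math.cos(1), h=0.1)   # p should be near 2
```

For a full scheme, the same ratio test is applied to solution errors in the chosen norm (here it would be the discrete maximal norm) at successively refined space and time steps.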
NASA Technical Reports Server (NTRS)
Spanos, P. D.; Cao, T. T.; Hamilton, D. A.; Nelson, D. A. R.
1989-01-01
An efficient method for the load analysis of Shuttle-payload systems with linear or nonlinear attachment interfaces is presented which allows the kinematics of the interface degrees of freedom at a given time to be evaluated without calculating the combined system modal representation of the Space Shuttle and its payload. For the case of a nonlinear dynamic model, an iterative procedure is employed to converge the nonlinear terms of the equations of motion to reliable values. Results are presented for a Shuttle abort landing event.
Dynamics, morphogenesis and convergence of evolutionary quantum Prisoner's Dilemma games on networks
Yong, Xi
2016-01-01
The authors proposed a quantum Prisoner's Dilemma (PD) game as a natural extension of the classic PD game to resolve the dilemma. Here, we establish a new Nash equilibrium principle of the game, propose the notion of convergence and discover the convergence and phase-transition phenomena of the evolutionary games on networks. We investigate the many-body extension of the game or evolutionary games in networks. For homogeneous networks, we show that entanglement guarantees a quick convergence of super cooperation, that there is a phase transition from the convergence of defection to the convergence of super cooperation, and that the threshold for the phase transitions is principally determined by the Nash equilibrium principle of the game, with an accompanying perturbation by the variations of structures of networks. For heterogeneous networks, we show that the equilibrium frequencies of super-cooperators are divergent, that entanglement guarantees emergence of super-cooperation and that there is a phase transition of the emergence with the threshold determined by the Nash equilibrium principle, accompanied by a perturbation by the variations of structures of networks. Our results explore systematically, for the first time, the dynamics, morphogenesis and convergence of evolutionary games in interacting and competing systems. PMID:27118882
Possibilities for global governance of converging technologies
NASA Astrophysics Data System (ADS)
Roco, Mihail C.
2008-01-01
The convergence of nanotechnology, modern biology, the digital revolution and cognitive sciences will bring about tremendous improvements in transformative tools, generate new products and services, enable opportunities to meet and enhance human potential and social achievements, and in time reshape societal relationships. This paper focuses on the progress made in governance of such converging, emerging technologies and suggests possibilities for a global approach. Specifically, this paper suggests creating a multidisciplinary forum or a consultative coordinating group with members from various countries to address globally governance of converging, emerging technologies. The proposed framework for governance of converging technologies calls for four key functions: supporting the transformative impact of the new technologies; advancing responsible development that includes health, safety and ethical concerns; encouraging national and global partnerships; and establishing commitments to long-term planning and investments centered on human development. Principles of good governance guiding these functions include participation of all those who are forging or affected by the new technologies, transparency of governance strategies, responsibility of each participating stakeholder, and effective strategic planning. Introduction and management of converging technologies must be done with respect for immediate concerns, such as privacy, access to medical advancements, and potential human health effects. At the same time, introduction and management should also be done with respect for longer-term concerns, such as preserving human integrity, dignity and welfare. 
The suggested governance functions apply to four levels of governance: (a) adapting existing regulations and organizations; (b) establishing new programs, regulations and organizations specifically to handle converging technologies; (c) building capacity for addressing these issues into national policies and institutions; and (d) making international agreements and partnerships. Several possibilities for improving the governance of converging technologies in the global self-regulating ecosystem are recommended: using open-source and incentive-based models, establishing corresponding science and engineering platforms, empowering the stakeholders and promoting partnerships among them, implementing long-term planning that includes international perspectives, and instituting voluntary and science-based measures for risk management.
Multi-sectorial convergence in greenhouse gas emissions.
Oliveira, Guilherme de; Bourscheidt, Deise Maria
2017-07-01
This paper uses the World Input-Output Database (WIOD) to test the hypothesis of per capita convergence in greenhouse gas (GHG) emissions for a multi-sectorial panel of countries. The empirical strategy applies conventional random- and fixed-effects estimators and Arellano and Bond's (1991) GMM to the main pollutants related to the greenhouse effect. For reasonable empirical specifications, the model revealed robust evidence of per capita convergence in CH4 emissions in the agriculture, food, and services sectors. The evidence of convergence in CO2 emissions was moderate in the following sectors: agriculture, food, non-durable goods manufacturing, and services. In all cases, the time to convergence was less than 15 years. Regarding emissions from energy use, the largest source of global warming, there was only moderate evidence in the extractive industry sector; all other pollutants presented little or no evidence.
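The logic of a per capita convergence test can be sketched with a simple beta-convergence regression: growth is regressed on the lagged level, and a negative slope implies convergence, with a convergence half-life implied by the slope. The sketch below uses synthetic data and a pooled OLS slope in place of the WIOD panel and the GMM estimator; all names and parameter values are illustrative assumptions, not the paper's specification.

```python
# Minimal beta-convergence sketch on a synthetic emissions panel (assumed
# data-generating process; not the paper's WIOD/GMM estimation).
import math
import random

random.seed(0)

# Synthetic panel: countries start at different log levels and drift toward
# a common steady-state log level of 1.0 with speed 0.1 per year.
n_countries, n_years, speed, steady = 30, 20, 0.10, 1.0
panel = []
for _ in range(n_countries):
    level = random.uniform(0.0, 3.0)  # initial log emissions per capita
    series = [level]
    for _ in range(1, n_years):
        level += speed * (steady - level) + random.gauss(0.0, 0.02)
        series.append(level)
    panel.append(series)

# Pool (lagged level, growth) pairs and fit the OLS slope beta.
x = [s[t - 1] for s in panel for t in range(1, n_years)]
y = [s[t] - s[t - 1] for s in panel for t in range(1, n_years)]
mx, my = sum(x) / len(x), sum(y) / len(y)
beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
        / sum((a - mx) ** 2 for a in x))

# beta < 0 signals convergence; the half-life follows from the AR(1) root.
half_life = math.log(2.0) / -math.log(1.0 + beta)
print(beta < 0, round(half_life, 1))
```

With the assumed speed of 0.1 per year the implied half-life is well under the 15-year horizon reported above.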
NASA Astrophysics Data System (ADS)
Thenozhi, Suresh; Tang, Yu
2018-01-01
Frequency response functions (FRFs) are often used in vibration controller design for mechanical systems. Unlike for linear systems, the FRF derivation for nonlinear systems is not trivial due to their complex behavior. To address this issue, the convergence property of nonlinear systems can be studied using convergence analysis. For a class of time-invariant nonlinear systems termed convergent systems, the nonlinear FRF can be obtained. The present paper proposes a nonlinear FRF based adaptive vibration controller design for a mechanical system with cubic damping nonlinearity and for a satellite system. Here the controller gains are tuned such that a desired closed-loop frequency response is achieved for a band of harmonic excitations. Unlike the system with cubic damping, the satellite system is not convergent; therefore an additional controller is utilized to achieve the convergence property. Finally, numerical examples are provided to illustrate the effectiveness of the proposed controller.
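For a convergent system, one point of the nonlinear FRF can be read off numerically by driving the system with a harmonic input and measuring the steady-state response amplitude. The sketch below does this for a single mass with cubic damping; the parameter values and the fixed-step integrator are illustrative assumptions, not the paper's controller design.

```python
# Sketch: estimate one point of a nonlinear frequency response by integrating
# m x'' + c x' + d x'^3 + k x = F cos(omega t) past its transient (assumed
# parameters, chosen for illustration only).
import math

def steady_amplitude(omega, m=1.0, c=0.4, d=0.2, k=1.0, force=1.0):
    """Steady-state amplitude of x(t) under harmonic forcing."""
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    dt = 2e-3
    n_steps = int(40.0 * 2.0 * math.pi / omega / dt)  # ~40 forcing periods
    settle = int(0.8 * n_steps)                       # discard first 80%
    for i in range(n_steps):
        a = (force * math.cos(omega * t) - c * v - d * v ** 3 - k * x) / m
        v += a * dt                  # semi-implicit Euler step
        x += v * dt
        t += dt
        if i > settle:
            peak = max(peak, abs(x))
    return peak

a_res = steady_amplitude(1.0)   # near the linear resonance sqrt(k/m) = 1
a_low = steady_amplitude(0.2)   # well below resonance: gain near static 1/k
print(round(a_res, 2), round(a_low, 2))
```

Sweeping `omega` over a band of excitations yields the amplitude response curve that the controller gains would be tuned against.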
Characteristics of Convergence Learning Experience Using an Educational Documentary Film
ERIC Educational Resources Information Center
Shin, Jongho; Cho, Eunbyul
2015-01-01
The purpose of the study was to investigate the characteristics of convergence learning experience when learners study integrated learning contents from various academic subjects. Specifically, cognitive and emotional experiences and their changes over time were investigated. Eight undergraduate and graduate students participated in the study.…
ERIC Educational Resources Information Center
Fryer, Luke K.; Vermunt, Jan D.
2018-01-01
Background: Contemporary models of student learning within higher education are often inclusive of processing and regulation strategies. Considerable research has examined their use over time and their (person-centred) convergence. The longitudinal stability/variability of learning strategy use, however, is poorly understood, but essential to…
Reaction Time and Self-Report Psychopathological Assessment: Convergent and Discriminant Validity.
ERIC Educational Resources Information Center
Holden, Ronald R.; Fekken, G. Cynthia
The processing of incoming psychological information along the network, or schemata, of self-knowledge was studied to determine the convergent and discriminant validity of the patterns of schemata-specific response latencies. Fifty-three female and 52 male university students completed the Basic Personality Inventory (BPI). BPI scales assess…
Yang, Yana; Hua, Changchun; Guan, Xinping
2016-03-01
Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of teleoperation systems cannot be guaranteed in most cases. However, some practical teleoperation tasks demand high performance; tele-surgery, for example, requires high-speed, high-precision control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, high convergence speed, small overshoot, and an arbitrarily small predefined residual synchronization error can be achieved simultaneously. Nevertheless, as with many classical control schemes, error-constrained control achieves only asymptotic/exponential convergence, i.e., the synchronization errors converge to zero only as time goes to infinity; finite-time convergence is clearly more desirable. To obtain finite-time synchronization, this paper develops a terminal sliding mode (TSM)-based finite-time control method for a teleoperation system with constrained position error. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with transformed synchronization errors is proposed. Second, an adaptive neural network is applied to deal with system uncertainties and external disturbances. Third, the BLF is applied to prove stability and non-violation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.
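The finite-time versus asymptotic distinction drawn above can be illustrated with the scalar terminal dynamics that motivate terminal sliding modes (this is the generic idea only, not the paper's NFTSM controller): the error dynamics e' = -k |e|^a sign(e) with 0 < a < 1 reach zero in the finite time T = |e0|^(1-a) / (k (1-a)), whereas e' = -k e only decays exponentially.

```python
# Sketch: numerically confirm the finite settling time of terminal dynamics
# (illustrative gains; not the paper's controller).
import math

def settle_time(e0=1.0, k=2.0, a=0.5, dt=1e-4, tol=1e-6):
    """Integrate e' = -k * |e|^a * sign(e); report when |e| first drops below tol."""
    e, t = e0, 0.0
    while abs(e) > tol and t < 10.0:
        e -= k * abs(e) ** a * math.copysign(1.0, e) * dt
        if e < 0.0:
            e = 0.0   # clamp overshoot; the continuous dynamics stop at zero
        t += dt
    return t

t_num = settle_time()
predicted = 1.0 ** (1 - 0.5) / (2.0 * (1 - 0.5))  # T = |e0|^(1-a) / (k(1-a)) = 1.0
print(round(t_num, 2), predicted)
```

The numerical settling time matches the closed-form prediction, in contrast to exponential convergence, which never reaches zero exactly.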
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nithiananthan, S.; Brock, K. K.; Daly, M. J.
2009-10-15
Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8±0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6±1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6±0.9) mm compared to rigid registration TRE=(3.6±1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1×1×2 mm³). 
The multiscale implementation based on optimal convergence criteria completed registration in 52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Conclusions: Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies.
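The structure of a between-iteration convergence criterion driving a multiscale pyramid can be sketched as follows. This is an assumed form for illustration, not the authors' exact criterion or registration step: the level advances when the mean change of the deformation field between iterations falls below a tolerance, which bounds the work spent per level.

```python
# Sketch of a multiscale loop with a deformation-field-change convergence
# criterion (toy 1-D "registration step" stands in for a real Demons update).
def register_multiscale(step_fn, levels=3, tol=1e-3, max_iter=200):
    """step_fn(level, field) returns an updated deformation field (a list)."""
    field = [0.0] * 8
    iters_per_level = []
    for level in range(levels):
        for it in range(max_iter):
            new = step_fn(level, field)
            delta = sum(abs(a - b) for a, b in zip(new, field)) / len(field)
            field = new
            if delta < tol:                  # converged: advance pyramid level
                iters_per_level.append(it + 1)
                break
        else:
            iters_per_level.append(max_iter)
    return field, iters_per_level

# Toy step: pull each displacement halfway toward a target of 1.0 per iteration.
target_step = lambda level, f: [x + 0.5 * (1.0 - x) for x in f]
field, iters = register_multiscale(target_step)
print(iters)
```

Most iterations are spent at the first level; once the field is close, subsequent levels terminate almost immediately, which is the behavior that keeps total computation time bounded.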
Nithiananthan, S; Brock, K K; Daly, M J; Chan, H; Irish, J C; Siewerdsen, J H
2009-10-01
The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8±0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6±1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6±0.9) mm compared to rigid registration TRE=(3.6±1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1×1×2 mm³). 
The multiscale implementation based on optimal convergence criteria completed registration in 52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies.
Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.
2009-01-01
Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source “symmetric” Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8±0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6±1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6±0.9) mm compared to rigid registration TRE=(3.6±1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1×1×2 mm3). 
The multiscale implementation based on optimal convergence criteria completed registration in 52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Conclusions: Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies. PMID:19928106
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brennan T
2015-01-01
Turbine discharges at low-head short converging intakes are difficult to measure accurately. The proximity of the measurement section to the intake entrance admits large uncertainties related to asymmetry of the velocity profile, swirl, and turbulence. Existing turbine performance codes [10, 24] do not address this special case, and published literature is largely silent on rigorous evaluation of uncertainties associated with this measurement context. The American Society of Mechanical Engineers (ASME) Committee investigated the use of acoustic transit time (ATT), acoustic scintillation (AS), and current meter (CM) methods in a short converging intake at the Kootenay Canal Generating Station in 2009. Based on their findings, a standardized uncertainty analysis (UA) framework for the velocity-area method (specifically for CM measurements) is presented in this paper, given that CM is still the most fundamental and common type of measurement system. Typical sources of systematic and random errors associated with CM measurements are investigated, and the major sources of uncertainty associated with turbulence and velocity fluctuations, the numerical velocity integration technique (bi-cubic spline), and the number and placement of current meters are considered in the evaluation. Since velocity measurements in a short converging intake are associated with complex nonlinear and time-varying uncertainties (e.g., Reynolds stress in fluid dynamics), simply applying the law of propagation of uncertainty is known to overestimate the measurement variance, while the Monte Carlo method does not. Therefore, a pseudo-Monte Carlo simulation method (random flow generation technique [8]), initially developed for establishing upstream or initial conditions in Large-Eddy Simulation (LES) and Direct Numerical Simulation (DNS), is used to statistically determine uncertainties associated with turbulence and velocity fluctuations. 
This technique is then combined with a bi-cubic spline interpolation method which converts point velocities into a continuous velocity distribution over the measurement domain. Subsequently, the number and placement of current meters are simulated to investigate the accuracy of the estimated flow rates using the numerical velocity-area integration method outlined in ISO 3354 [12]. The authors consider the statistics of the generated flow rates, processed through bi-cubic interpolation and sensor simulations, to be combined uncertainties that already account for the effects of all three uncertainty sources. A preliminary analysis based on current meter data obtained through an upgrade acceptance test of a single unit located in a mainstem plant is presented.
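The Monte Carlo idea described above can be sketched in simplified form: perturb the point velocities with a turbulence-like fluctuation, integrate each realization over the section, and take the spread of the integrated flow rates as a combined uncertainty. The sketch below substitutes a midpoint-rule sum for the bi-cubic spline and white noise for the random flow generation technique; the grid size, section dimensions, and turbulence intensity are assumptions for illustration.

```python
# Simplified Monte Carlo sketch of velocity-area flow-rate uncertainty
# (midpoint-rule integration and white noise; illustrative values only).
import random
import statistics

random.seed(1)

def flow_rate(grid, cell_area):
    """Velocity-area method: sum of point velocities times the cell area."""
    return sum(v for row in grid for v in row) * cell_area

# 4 x 5 current-meter grid over a 2 m x 2.5 m section; mean velocity 3 m/s.
grid = [[3.0] * 5 for _ in range(4)]
cell_area = (2.0 / 4) * (2.5 / 5)

# Monte Carlo: 10% turbulence intensity at each meter, 2000 realizations.
samples = []
for _ in range(2000):
    noisy = [[v + random.gauss(0.0, 0.1 * v) for v in row] for row in grid]
    samples.append(flow_rate(noisy, cell_area))

q_mean = statistics.mean(samples)   # near the true 15 m^3/s
q_sd = statistics.stdev(samples)    # combined uncertainty from turbulence
print(round(q_mean, 2), round(q_sd, 3))
```

Repeating the experiment with different meter counts and placements is the kind of sensor simulation the paper describes for choosing an instrument layout.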
Convergence Speed of a Dynamical System for Sparse Recovery
NASA Astrophysics Data System (ADS)
Balavoine, Aurele; Rozell, Christopher J.; Romberg, Justin
2013-09-01
This paper studies the convergence rate of a continuous-time dynamical system for L1-minimization, known as the Locally Competitive Algorithm (LCA). Solving L1-minimization problems efficiently and rapidly is of great interest to the signal processing community, as these programs have been shown to recover sparse solutions to underdetermined systems of linear equations and come with strong performance guarantees. The LCA under study differs from the typical L1 solver in that it operates in continuous time: instead of being specified by discrete iterations, it evolves according to a system of nonlinear ordinary differential equations. The LCA is constructed from simple components, giving it the potential to be implemented as a large-scale analog circuit. The goal of this paper is to give guarantees on the convergence time of the LCA system. To do so, we analyze how the LCA evolves as it is recovering a sparse signal from underdetermined measurements. We show that under appropriate conditions on the measurement matrix and the problem parameters, the path the LCA follows can be described as a sequence of linear differential equations, each with a small number of active variables. This allows us to relate the convergence time of the system to the restricted isometry constant of the matrix. Interesting parallels to sparse-recovery digital solvers emerge from this study. Our analysis covers both the noisy and noiseless settings and is supported by simulation results.
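The LCA dynamics can be sketched on a tiny problem. The standard form of the system is u' = (1/tau)(Phi^T y - u - (Phi^T Phi - I) a) with output a = soft-threshold(u, lambda); the state settles on the lasso solution. The matrix, signal, and parameters below are illustrative assumptions (not from the paper's analysis), chosen so only one coefficient is active at the fixed point.

```python
# Sketch of LCA dynamics on a 2x3 underdetermined problem with one active
# coefficient (illustrative sizes and parameters; forward-Euler integration
# stands in for an analog circuit or ODE solver).
def soft(u, lam):
    """Soft-thresholding activation: zero inside [-lam, lam], shrunk outside."""
    return (abs(u) - lam) * (1 if u > 0 else -1) if abs(u) > lam else 0.0

# Phi has unit-norm columns; the true sparse signal uses only column 3.
phi = [[1.0, 0.0, 0.6],
       [0.0, 1.0, 0.8]]
y = [0.6, 0.8]                      # = Phi @ [0, 0, 1]
lam, tau, dt, n = 0.1, 1.0, 0.01, 3

drive = [sum(phi[r][i] * y[r] for r in range(2)) for i in range(n)]   # Phi^T y
gram = [[sum(phi[r][i] * phi[r][j] for r in range(2)) for j in range(n)]
        for i in range(n)]                                            # Phi^T Phi

u = [0.0] * n
for _ in range(3000):               # integrate to t = 30 time constants
    a = [soft(ui, lam) for ui in u]
    inhibit = [sum(gram[i][j] * a[j] for j in range(n)) - a[i] for i in range(n)]
    u = [ui + dt / tau * (drive[i] - ui - inhibit[i]) for i, ui in enumerate(u)]

a = [soft(ui, lam) for ui in u]
print([round(x, 2) for x in a])
```

The state converges to the lasso solution (0, 0, 1 - lambda): the inactive nodes are held below threshold by lateral inhibition from the active one, which mirrors the "small number of active variables" structure the analysis exploits.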
Chan, Lung Sang; Gao, Jian-Feng
2017-01-01
The Cathaysia Block is located in the southeastern part of South China, within the west Pacific subduction zone. It is thought to have undergone a compression-extension transition of the continental crust during the Mesozoic-Cenozoic, as the Pacific Plate subducted beneath the Eurasian Plate, resulting in extensive magmatism, extensional basins, and reactivation of fault systems. Although mechanisms such as trench roll-back have been proposed for the compression-extension transition, the timing and progress of the transition under a convergent setting remain ambiguous due to the lack of suitable geological records and overprinting by later tectonic events. In this study, a numerical thermo-dynamical program was employed to evaluate how variable slab angles, lithospheric thermal gradients, and convergence velocities would give rise to changes of crustal stress in a convergent subduction zone. Model results show that a higher slab dip angle, lower convergence velocity, and higher lithospheric thermal gradient facilitate the subduction process. The modeling results reveal that the continental crustal stress is dominated by horizontal compression during the early stage of subduction, which can revert to horizontal extension in the back-arc region, combined with roll-back of the subducting slab and development of mantle upwelling. The parameters facilitating the subduction process also favor the compression-extension transition in the upper plate of the subduction zone. These results corroborate the geology of the Cathaysia Block: the initiation of the extensional regime in the Cathaysia Block was probably triggered by roll-back of the slowly subducting slab. PMID:28182640
A Newton-Krylov solver for fast spin-up of online ocean tracers
NASA Astrophysics Data System (ADS)
Lindsay, Keith
2017-01-01
We present a Newton-Krylov based solver to efficiently spin up tracers in an online ocean model. We demonstrate that the solver converges, that tracer simulations initialized with the solution from the solver have small drift, and that the solver takes orders of magnitude less computational time than the brute force spin-up approach. To demonstrate the application of the solver, we use it to efficiently spin up the tracer ideal age with respect to the circulation from different time intervals in a long physics run. We then evaluate how the spun-up ideal age tracer depends on the duration of the physics run, i.e., on how equilibrated the circulation is.
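The speedup over brute-force spin-up can be illustrated on a toy linear tracer model (an assumed stand-in for the ocean model, not the authors' solver): brute-force spin-up iterates the annual propagator x → A x + b until drift is small, while Newton's method on F(x) = A x + b - x solves (I - A) x = b directly; for a linear model one solve suffices.

```python
# Toy spin-up sketch: a near-unit eigenvalue makes direct iteration need
# thousands of "years", while the Newton fixed-point solve is immediate
# (illustrative 2-tracer diagonal model).

# Annual propagator: slow mode (0.999) and fast mode (0.5), plus sources b.
A = [[0.999, 0.0],
     [0.0,   0.5]]
b = [0.001, 1.0]

def step(x):
    return [A[0][0] * x[0] + A[0][1] * x[1] + b[0],
            A[1][0] * x[0] + A[1][1] * x[1] + b[1]]

# Brute force: run years until the annual drift drops below tolerance.
x, years = [0.0, 0.0], 0
while True:
    nxt = step(x)
    drift = max(abs(a - c) for a, c in zip(nxt, x))
    x, years = nxt, years + 1
    if drift < 1e-6:
        break

# Newton on F(x) = step(x) - x; (I - A) is diagonal here, so the solve is trivial.
x_newton = [b[0] / (1 - A[0][0]), b[1] / (1 - A[1][1])]

print(years, [round(v, 3) for v in x_newton])
```

In the real setting the Jacobian is never formed; Krylov iterations need only products of (I - A) with vectors, which one obtains by running the model for a year, and that is what makes the approach orders of magnitude cheaper than direct spin-up.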
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
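A minimal single-objective GA in the spirit of such a study can be sketched on a one-dimensional two-hill search space. The operators below (tournament selection, blend crossover, Gaussian mutation) and all parameter values are assumptions for illustration, not the authors' implementation.

```python
# Minimal GA sketch on a two-hill fitness landscape (assumed operators and
# parameters; the global optimum sits at x = 0.7, a local hill at x = 0.2).
import math
import random

random.seed(3)

def fitness(x):
    # local hill at 0.2 (height 0.8), global hill at 0.7 (height 1.0)
    return (0.8 * math.exp(-((x - 0.2) / 0.05) ** 2)
            + math.exp(-((x - 0.7) / 0.05) ** 2))

pop = [random.random() for _ in range(40)]
for _ in range(60):
    nxt = []
    for _ in range(len(pop)):
        p1 = max(random.sample(pop, 2), key=fitness)   # tournament selection
        p2 = max(random.sample(pop, 2), key=fitness)
        w = random.random()                            # blend crossover
        child = w * p1 + (1 - w) * p2
        child += random.gauss(0.0, 0.02)               # Gaussian mutation
        nxt.append(min(max(child, 0.0), 1.0))
    pop = nxt

best = max(pop, key=fitness)
print(round(best, 2), round(fitness(best), 2))
```

Adding more hills or more decision variables raises the number of function evaluations needed, which is the convergence-cost behavior the study quantifies.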
Akar, Serpil; Gokyigit, Birsen; Sayin, Nihat; Demirok, Ahmet; Yilmaz, Omer Faruk
2013-01-01
To evaluate the results of Faden operations on the medial rectus (MR) muscles, with or without recession, for the treatment of partially accommodative esotropia associated with a high accommodative convergence to accommodation (AC:A) ratio, and to determine whether the effects of posterior fixation decrease over time. In this retrospective study, 108 of 473 patients who underwent surgery for partially accommodative esotropia with a high AC:A ratio received Faden operations on both MR muscles, and 365 received symmetric MR muscle recessions combined with a Faden operation. For the Faden operation alone, a satisfactory outcome of 76.9% at 1 month post-operation decreased to 71.3% by the final follow-up visit (mean 4.8 years). A moderate positive correlation was observed between the increase in postoperative near deviation and postoperative time. For Faden operations combined with MR recession, a satisfactory outcome of 78.9% at 1 month post-operation decreased only slightly, to 78.4%, by the final follow-up visit. A Faden operation on the MR muscles, with or without recession, is an effective surgical option for treating partially accommodative esotropia associated with a high AC:A ratio. For Faden operations without recession, the effects of posterior fixation decline over time.
Cichy, Radoslaw Martin; Pantazis, Dimitrios
2017-09-01
Multivariate pattern analysis of magnetoencephalography (MEG) and electroencephalography (EEG) data can reveal the rapid neural dynamics underlying cognition. However, MEG and EEG have systematic differences in sampling neural activity. This poses the question to which degree such measurement differences consistently bias the results of multivariate analysis applied to MEG and EEG activation patterns. To investigate, we conducted a concurrent MEG/EEG study while participants viewed images of everyday objects. We applied multivariate classification analyses to MEG and EEG data, and compared the resulting time courses to each other, and to fMRI data for an independent evaluation in space. We found that both MEG and EEG revealed the millisecond spatio-temporal dynamics of visual processing with largely equivalent results. Beyond yielding convergent results, we found that MEG and EEG also captured partly unique aspects of visual representations. Those unique components emerged earlier in time for MEG than for EEG. Identifying the sources of those unique components with fMRI, we found the locus for both MEG and EEG in high-level visual cortex, and in addition for MEG in low-level visual cortex. Together, our results show that multivariate analyses of MEG and EEG data offer a convergent and complementary view on neural processing, and motivate the wider adoption of these methods in both MEG and EEG research.
Three-Dimensional Navier-Stokes Calculations Using the Modified Space-Time CESE Method
NASA Technical Reports Server (NTRS)
Chang, Chau-lyan
2007-01-01
The space-time conservation element solution element (CESE) method is modified to address the robustness issues of high-aspect-ratio, viscous, near-wall meshes. In this new approach, the dependent variable gradients are evaluated using element edges and the corresponding neighboring solution elements while keeping the original flux integration procedure intact. As such, the excellent flux conservation property is retained, and the new edge-based gradient evaluation significantly improves the robustness for high-aspect-ratio meshes frequently encountered in three-dimensional Navier-Stokes calculations. The order of accuracy of the proposed method is demonstrated for oblique acoustic wave propagation, shock-wave interaction, and hypersonic flows over a blunt body. The confirmed second-order convergence along with the enhanced robustness in handling hypersonic blunt body flow calculations makes the proposed approach a very competitive CFD framework for 3D Navier-Stokes simulations.
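Second-order convergence of the kind confirmed above is typically measured as an observed order from errors on successively refined grids, p = log2(e_h / e_{h/2}). The sketch below demonstrates the procedure on a midpoint-rule quadrature (an illustrative second-order method, not the CESE scheme itself):

```python
# Sketch: observed order of accuracy from successive grid refinements,
# illustrated with the (second-order) midpoint rule on a known integral.
import math

def midpoint_integral(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

exact = 1 - math.cos(1.0)                     # integral of sin on [0, 1]
errors = [abs(midpoint_integral(math.sin, 0.0, 1.0, n) - exact)
          for n in (16, 32, 64)]

# Halving h should quarter the error: p = log2(e_h / e_{h/2}) ~ 2.
orders = [math.log2(errors[i] / errors[i + 1]) for i in range(2)]
print([round(p, 2) for p in orders])
```

For a PDE solver the same computation is applied to solution errors against an exact or finely resolved reference solution.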
Hall, Deborah A; Mehta, Rajnikant L; Fackrell, Kathryn
2018-03-08
The authors respond to a letter to the editor (Sabour, 2018) concerning the interpretation of validity in the context of evaluating treatment-related change in tinnitus loudness over time. The authors refer to several landmark methodological publications and an international standard concerning the validity of patient-reported outcome measurement instruments. The tinnitus loudness rating performed better against our reported acceptability criteria for (face and convergent) validity than did the tinnitus loudness matching test. It is important to distinguish between tests that evaluate the validity of measuring treatment-related change over time and tests that quantify the accuracy of diagnosing tinnitus as a case and non-case.
Methodological convergence of program evaluation designs.
Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa
2014-01-01
Nowadays, the confronting dichotomous view between experimental/quasi-experimental and non-experimental/ethnographic studies still exists but, despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been developed based on experimental and quasi-experimental studies. This hinders evaluators and planners' practice of empirical program evaluation, a sphere in which the distinction between types of study is changing continually and is less clear. Based on the classical validity framework of experimental/quasi-experimental studies, we carry out a review of the literature in order to analyze the convergence of design elements in methodological quality in primary studies in systematic reviews and ethnographic research. We specify the relevant design elements that should be taken into account in order to improve validity and generalization in program evaluation practice in different methodologies from a practical methodological and complementary view. We recommend ways to improve design elements so as to enhance validity and generalization in program evaluation practice.
Reconstruction of multiple-pinhole micro-SPECT data using origin ensembles.
Lyon, Morgan C; Sitek, Arkadiusz; Metzler, Scott D; Moore, Stephen C
2016-10-01
The authors are currently developing a dual-resolution multiple-pinhole micro-SPECT imaging system based on three large NaI(Tl) gamma cameras. Two multiple-pinhole tungsten collimator tubes will be used sequentially for whole-body "scout" imaging of a mouse, followed by high-resolution (hi-res) imaging of an organ of interest, such as the heart or brain. Ideally, the whole-body image will be reconstructed in real time such that data need only be acquired until the area of interest can be visualized well enough to determine positioning for the hi-res scan. The authors investigated the utility of the origin ensemble (OE) algorithm for online and offline reconstructions of the scout data. This algorithm operates directly in image space, and can provide estimates of image uncertainty along with reconstructed images. Techniques for accelerating the OE reconstruction were also introduced and evaluated. System matrices were calculated for our 39-pinhole scout collimator design. SPECT projections were simulated for a range of count levels using the MOBY digital mouse phantom. Simulated data were used for a comparison of OE and maximum-likelihood expectation maximization (MLEM) reconstructions. The OE algorithm convergence was evaluated by calculating the total-image entropy and by measuring the counts in a volume-of-interest (VOI) containing the heart. Total-image entropy was also calculated for simulated MOBY data reconstructed using OE with various levels of parallelization. For VOI measurements in the heart, liver, bladder, and soft tissue, MLEM and OE reconstructed images agreed within 6%. Image entropy converged after ∼2000 iterations of OE, while the counts in the heart converged earlier, at ∼200 iterations of OE. An accelerated version of OE completed 1000 iterations in <9 min for a 6.8M count data set, with some loss of image entropy performance, whereas the same dataset required ∼79 min to complete 1000 iterations of conventional OE. 
A combination of the two methods showed decreased reconstruction time and no loss of performance when compared to conventional OE alone. OE-reconstructed images were found to be quantitatively and qualitatively similar to MLEM, yet OE also provided estimates of image uncertainty. Some acceleration of the reconstruction can be gained through the use of parallel computing. The OE algorithm is useful for reconstructing multiple-pinhole SPECT data and can be easily modified for real-time reconstruction.
A highly parallel multigrid-like method for the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Tuminaro, Ray S.
1989-01-01
We consider a highly parallel multigrid-like method for the solution of the two dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine grid subproblems are processed concurrently with coarse grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse grid problem (similar to standard multigrid) while the oscillatory component is used for a fine grid subproblem. The primary advantage in the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straightforward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations, and presenting numerical results. Finally, a performance evaluation is made based on execution time models and convergence information obtained from numerical experiments.
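The residual-splitting step can be sketched in one dimension. A simple three-point smoother stands in here for whatever filter the method actually uses (an assumption for illustration): the smooth part is handed to the coarse grid, the oscillatory remainder to the fine-grid subproblem, and by construction the two parts sum back to the original residual.

```python
# Sketch: split a 1-D residual into smooth + oscillatory parts with a
# [1, 2, 1]/4 filter (illustrative; not the paper's specific filter).
import math

n = 32
# low-frequency mode plus the highest (Nyquist) mode the grid can represent
residual = [math.sin(2 * math.pi * i / n) + 0.5 * (-1) ** i for i in range(n)]

def smooth(r):
    """Three-point average with periodic wrap; it barely damps smooth modes
    and annihilates the Nyquist mode entirely."""
    m = len(r)
    return [(r[i - 1] + 2 * r[i] + r[(i + 1) % m]) / 4 for i in range(m)]

smooth_part = smooth(residual)                                    # coarse grid
oscillatory_part = [r - s for r, s in zip(residual, smooth_part)] # fine grid

# The split is exact by construction: the parts sum back to the residual.
err = max(abs(r - (s + o)) for r, s, o in
          zip(residual, smooth_part, oscillatory_part))
print(err)
```

Because the two subproblems use disjoint frequency content, they can be relaxed concurrently, which is the source of the extra parallelism claimed above.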
Accelerated convergence for synchronous approximate agreement
NASA Technical Reports Server (NTRS)
Kearns, J. P.; Park, S. K.; Sjogren, J. A.
1988-01-01
The protocol for synchronous approximate agreement presented by Dolev et al. exhibits the undesirable property that a faulty processor, by disseminating a value arbitrarily far removed from the values held by good processors, may delay the termination of the protocol by an arbitrary amount of time. Such behavior is clearly undesirable in a fault-tolerant dynamic system subject to hard real-time constraints. A mechanism is presented by which editing data suspected of coming from Byzantine-failed processors can lead to quicker, predictable convergence to an agreement value. Under specific assumptions about the nature of values transmitted by failed processors relative to those transmitted by good processors, a Monte Carlo simulation is presented whose qualitative results illustrate the trade-off between accelerated convergence and the accuracy of the value agreed upon.
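The editing idea can be illustrated with a minimal sketch of one averaging round (hypothetical code, not the paper's protocol; the trim count t and the sample values are assumptions):

```python
def approx_agreement_round(values, t):
    """One round of synchronous approximate agreement: discard the t largest
    and t smallest received values (editing out suspected Byzantine extremes),
    then average the survivors. Assumes more than 2t values were received."""
    s = sorted(values)
    core = s[t:len(s) - t] if t > 0 else s
    return sum(core) / len(core)

# Four good processors near 10.0 plus one Byzantine outlier far away (t = 1):
received = [9.8, 10.1, 10.0, 9.9, 1e6]
v = approx_agreement_round(received, 1)   # the outlier is edited out
```

Without the trim step, the single faulty value would drag the average to roughly 2e5 and stall convergence; with it, the new value stays within the range of the good processors' values.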
NASA Astrophysics Data System (ADS)
Kim, Chang-Goo; Ostriker, Eve C.
2017-09-01
We introduce TIGRESS, a novel framework for multi-physics numerical simulations of the star-forming interstellar medium (ISM) implemented in the Athena MHD code. The algorithms of TIGRESS are designed to spatially and temporally resolve key physical features, including: (1) the gravitational collapse and ongoing accretion of gas that leads to star formation in clusters; (2) the explosions of supernovae (SNe), both near their progenitor birth sites and from runaway OB stars, with time delays relative to star formation determined by population synthesis; (3) explicit evolution of SN remnants prior to the onset of cooling, which leads to the creation of the hot ISM; (4) photoelectric heating of the warm and cold phases of the ISM that tracks the time-dependent ambient FUV field from the young cluster population; (5) large-scale galactic differential rotation, which leads to epicyclic motion and shears out overdense structures, limiting large-scale gravitational collapse; (6) accurate evolution of magnetic fields, which can be important for vertical support of the ISM disk as well as angular momentum transport. We present tests of the newly implemented physics modules, and demonstrate application of TIGRESS in a fiducial model representing the solar neighborhood environment. We use a resolution study to demonstrate convergence and evaluate the minimum resolution Δx required to correctly recover several ISM properties, including the star formation rate, wind mass-loss rate, disk scale height, turbulent and Alfvénic velocity dispersions, and volume fractions of warm and hot phases. For the solar neighborhood model, all these ISM properties are converged at Δx ≤ 8 pc.
NASA Astrophysics Data System (ADS)
Oliveira, José J.
2017-10-01
In this paper, we investigate the global convergence of solutions of non-autonomous Hopfield neural network models with discrete time-varying delays, infinite distributed delays, and possible unbounded coefficient functions. Instead of using Lyapunov functionals, we explore intrinsic features between the non-autonomous systems and their asymptotic systems to ensure the boundedness and global convergence of the solutions of the studied models. Our results are new and complement known results in the literature. The theoretical analysis is illustrated with some examples and numerical simulations.
Robust fixed-time synchronization of delayed Cohen-Grossberg neural networks.
Wan, Ying; Cao, Jinde; Wen, Guanghui; Yu, Wenwu
2016-01-01
The fixed-time master-slave synchronization of Cohen-Grossberg neural networks with parameter uncertainties and time-varying delays is investigated. Compared with finite-time synchronization, where the convergence time depends on the initial synchronization errors, the settling time of fixed-time synchronization can be adjusted to desired values regardless of initial conditions. A novel synchronization control strategy for the slave neural network is proposed. By utilizing Filippov's theory of discontinuous systems and Lyapunov stability theory, sufficient conditions are provided for selecting the control parameters to ensure synchronization within the required convergence time in the presence of parameter uncertainties. Corresponding criteria for tuning the control inputs are also derived for finite-time synchronization. Finally, two numerical examples are given to illustrate the validity of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
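The fixed-time property, namely a settling-time bound independent of initial conditions, can be illustrated on a scalar system (a minimal sketch, not the paper's controller; gains and exponents are illustrative):

```python
import math

def settle_time(x0, a=1.0, b=1.0, p=0.5, q=1.5, dt=1e-4, tol=1e-3, t_max=10.0):
    """Simulate dx/dt = -a|x|^p sgn(x) - b|x|^q sgn(x) (p < 1 < q) and return
    the first time |x| drops below tol. The fixed-time bound
    T <= 1/(a(1-p)) + 1/(b(q-1)) holds for ANY initial condition x0."""
    x, t = float(x0), 0.0
    while abs(x) >= tol and t < t_max:
        x += dt * (-a * abs(x)**p - b * abs(x)**q) * math.copysign(1.0, x)
        t += dt
    return t

bound = 1.0 / (1.0 * (1 - 0.5)) + 1.0 / (1.0 * (1.5 - 1))   # = 4.0 s
t_small, t_large = settle_time(0.1), settle_time(1e4)       # wildly different x0
```

The superlinear term (exponent q) dominates far from the origin and the sublinear term (exponent p) dominates near it, which is why both trajectories settle before the same bound even though their initial conditions differ by five orders of magnitude.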
A numerical scheme to solve unstable boundary value problems
NASA Technical Reports Server (NTRS)
Kalnay Derivas, E.
1975-01-01
A new iterative scheme for solving boundary value problems is presented. It consists of introducing an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backward in time. The method converges under much more general conditions than schemes based on forward time integrations (false transient schemes). In particular, it can attain a steady-state solution of an elliptic system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. Its simplicity of use makes it attractive for solving large systems of nonlinear equations.
Time-derivative preconditioning for viscous flows
NASA Technical Reports Server (NTRS)
Choi, Yunho; Merkle, Charles L.
1991-01-01
A time-derivative preconditioning algorithm was developed that is effective over a wide range of flow conditions, from inviscid to very diffusive flows and from low-speed to supersonic flows. This algorithm uses a viscous set of primary dependent variables to introduce well-conditioned eigenvalues and to avoid a nonphysical time reversal for viscous flow. The resulting algorithm also provides a mechanism for controlling the inviscid and viscous time-step parameters to be of order one for very diffusive flows, thereby ensuring rapid convergence for very viscous as well as inviscid flows. Convergence capabilities are demonstrated through computation of a wide variety of problems.
Nonverbal Accommodation in Healthcare Communication
D’Agostino, Thomas A.; Bylund, Carma L.
2016-01-01
This exploratory study examined patterns of nonverbal accommodation within healthcare interactions and investigated the impact of communication skills training and gender concordance on nonverbal accommodation behavior. The Nonverbal Accommodation Analysis System (NAAS) was used to code the nonverbal behavior of physicians and patients within 45 oncology consultations. Cases were then placed in one of seven categories based on patterns of accommodation observed across the interaction. Results indicated that across all NAAS behavior categories, physician-patient interactions were most frequently categorized as Joint Convergence, followed closely by Asymmetrical-Patient Convergence. Among paraverbal behaviors, talk time, interruption, and pausing were most frequently characterized by Joint Convergence. Among nonverbal behaviors, eye contact, laughing, and gesturing were most frequently categorized as Asymmetrical-Physician Convergence. Differences in accommodation behavior between interactions before and after communication skills training were predominantly non-significant. Only gesturing proved significant, with post-training interactions more likely to be categorized as Joint Convergence or Asymmetrical-Physician Convergence. No differences in accommodation were noted between gender-concordant and non-concordant interactions. The importance of accommodation behavior in healthcare communication is considered from a patient-centered care perspective. PMID:24138223
Kriebel, Ricardo; Khabbazian, Mohammad; Sytsma, Kenneth J
2017-01-01
The study of pollen morphology has historically allowed evolutionary biologists to assess phylogenetic relationships among Angiosperms, as well as to better understand the fossil record. During this process, pollen has mainly been studied by discretizing some of its main characteristics such as size, shape, and exine ornamentation. One large plant clade in which pollen has been used this way for phylogenetic inference and character mapping is the order Myrtales, composed of the small families Alzateaceae, Crypteroniaceae, and Penaeaceae (collectively the "CAP clade"), as well as the large families Combretaceae, Lythraceae, Melastomataceae, Myrtaceae, Onagraceae and Vochysiaceae. In this study, we present a novel way to study pollen evolution by using quantitative size and shape variables. We use morphometric and morphospace methods to evaluate pollen change in the order Myrtales using a time-calibrated, supermatrix phylogeny. We then test for conservatism, divergence, and morphological convergence of pollen, and for correlation between the latitudinal gradient and pollen size and shape. To obtain an estimate of shape, Myrtales pollen images were extracted from the literature and their outlines analyzed using elliptic Fourier methods. Shape and size variables were then analyzed in a phylogenetic framework under an Ornstein-Uhlenbeck (OU) process to test for shifts in size and shape during the evolutionary history of Myrtales. Few shifts in Myrtales pollen morphology were found, indicating morphological conservatism. Heterocolpate, small pollen is ancestral, with the largest pollen in Onagraceae. Convergent shifts in shape, but not size, occurred in Myrtaceae and Onagraceae and are correlated with shifts in latitude and biogeography. A quantitative approach was applied for the first time to examine pollen evolution across a large time scale. Using phylogenetically based morphometrics and an OU process, hypotheses of pollen size and shape were tested across Myrtales. Convergent pollen shifts and position in the latitudinal gradient support the selective role of harmomegathy, the mechanism by which pollen grains accommodate their volume in response to water loss.
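The Ornstein-Uhlenbeck model underlying the shift tests can be sketched with a short Euler-Maruyama simulation (illustrative parameters only, not estimates from the pollen data):

```python
import numpy as np

def simulate_ou(theta=1.0, mu=2.0, sigma=0.5, x0=0.0,
                dt=0.01, n_steps=1000, n_paths=2000, seed=42):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck process
    dX = theta*(mu - X) dt + sigma dW, the trait-evolution model used to
    test for shifts in size/shape optima (parameters here are hypothetical)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x = x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

final = simulate_ou()
# Trait values are pulled toward the optimum mu, with stationary standard
# deviation sigma / sqrt(2*theta); a "shift" corresponds to a change in mu.
```

The pull toward mu is what distinguishes the OU model from plain Brownian motion and makes inferred changes in mu interpretable as shifts in a selective optimum.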
Tail shortening by discrete hydrodynamics
NASA Astrophysics Data System (ADS)
Kiefer, J.; Visscher, P. B.
1982-02-01
A discrete formulation of hydrodynamics was recently introduced, whose most important feature is that it is exactly renormalizable. Previous numerical work has found that it provides a more efficient and rapidly convergent method for calculating transport coefficients than the usual Green-Kubo method. The latter's convergence difficulties are due to the well-known "long-time tail" of the time correlation function which must be integrated over time. The purpose of the present paper is to present additional evidence that these difficulties are really absent in the discrete equation of motion approach. The "memory" terms in the equation of motion are calculated accurately, and shown to decay much more rapidly with time than the equilibrium time correlations do.
Exponential convergence through linear finite element discretization of stratified subdomains
NASA Astrophysics Data System (ADS)
Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali
2016-10-01
Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.
Poe, Steven
2005-01-01
The reconstruction of phylogeny requires homologous similarities across species. Characters that have been shown to evolve quickly or convergently in some species are often considered to be poor phylogenetic markers. Here I evaluate the phylogenetic utility of a set of morphological characters that are correlated with ecology and have been shown to evolve convergently in Anolis lizards in the Greater Antilles. Results of randomization tests suggest that these "ecomorph" characters are adequate phylogenetic markers, both for Anolis in general and for the Greater Antillean species for which ecomorphological convergence was originally documented. Explanations for this result include the presence of ecomorphologically similar species within evolutionary radiations within islands, some monophyly of ecomorphs across islands, and the existence of several species that defy ecomorphological characterization but share phylogenetic similarity in some ecomorph characters.
Performance of the Swedish version of the Revised Piper Fatigue Scale.
Jakobsson, Sofie; Taft, Charles; Östlund, Ulrika; Ahlberg, Karin
2013-12-01
The Revised Piper Fatigue Scale (RPFS) is one of the most widely used instruments internationally to assess cancer-related fatigue. The aim of the present study was to evaluate selected psychometric properties of a Swedish version of the RPFS (SPFS). An earlier translation of the SPFS was further evaluated and developed. The new version was mailed to 300 patients undergoing curative radiotherapy. Internal validity was assessed using principal axis factor analysis with oblimin rotation and multitrait analysis. External validity was examined in relation to the Multidimensional Fatigue Inventory-20 (MFI-20) and in known-groups analyses. In total, 196 patients (response rate = 65%) returned evaluable questionnaires. Principal axis factoring yielded three factors (74% of the variance) rather than four as in the original RPFS. Multitrait analyses confirmed the adequacy of scaling assumptions. Known-groups analyses failed to support the discriminative validity. Concurrent validity was satisfactory. The new Swedish version of the RPFS showed good acceptability, reliability, and convergent and discriminant item-scale validity. Our results converge with other international versions of the RPFS in failing to support the four-dimension conceptual model of the instrument. Hence, the RPFS's suitability for use in international comparisons may be limited, which may also have implications for the cross-cultural validity of the newly released 12-item version of the RPFS. Further research on the Swedish version should address reasons for high missing rates for certain items in the affective meaning subscale, further evaluation of the discriminative validity, and assessment of its sensitivity in detecting changes over time. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Arruabarrena, M. Ignacia; de Paul, Joaquin
1992-01-01
"Convergent validity" of preliminary Spanish version of Child Abuse Potential (CAP) Inventory was studied. CAP uses ecological-systemic model of child maltreatment to evaluate individual, family, and social factors facilitating physical child abuse. Depression and marital adjustment were measured in three groups of mothers. Results found…
Application of fully stressed design procedures to redundant and non-isotropic structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Tsach, U.
1980-01-01
An evaluation is presented of fully stressed design procedures for sizing highly redundant structures, including structures made of composite materials. The evaluation is carried out by sizing three structures: a simple box beam of either composite or metal construction; a low-aspect-ratio titanium wing; and a titanium arrow wing for a conceptual supersonic cruise aircraft. All three structures are sized by ordinary fully stressed design (FSD) and thermal fully stressed design (TFSD) for combined mechanical and thermal loads. Where possible, designs are checked by applying rigorous mathematical programming techniques to the structures. It is found that FSD and TFSD produce optimum designs for the metal box beam but highly non-optimum designs for the composite box beam. Results from the delta wing and arrow wing indicate that FSD and TFSD exhibit slow convergence for highly redundant metal structures. Further, TFSD exhibits slow oscillatory convergence behavior for the arrow wing at very high temperatures. In all cases where FSD and TFSD perform poorly, either by obtaining non-optimum designs or by converging slowly, the assumptions on which the algorithms are based are grossly violated. The use of scaling, however, is found to be very effective in obtaining fast convergence and efficiently produces safe designs even in those cases where FSD and TFSD alone are ineffective.
Henriksen, Niel M.; Roe, Daniel R.; Cheatham, Thomas E.
2013-01-01
Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 microseconds of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations. PMID:23477537
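The replica-exchange step that both the traditional and reservoir variants build on is the Metropolis swap criterion between neighboring temperatures (a generic sketch, not the authors' reservoir implementation; energies and temperatures below are hypothetical):

```python
import math
import random

def swap_accepted(E_i, E_j, T_i, T_j, rng=random.random):
    """Metropolis criterion for exchanging neighboring replicas at
    temperatures T_i < T_j with potential energies E_i and E_j (k_B = 1).
    Accept with probability min(1, exp((1/T_i - 1/T_j) * (E_i - E_j)))."""
    delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
    return delta >= 0 or rng() < math.exp(delta)

# A high-temperature replica that found a lower-energy conformation is
# always swapped down into the low-temperature ensemble:
ok = swap_accepted(E_i=-100.0, E_j=-120.0, T_i=300.0, T_j=400.0)
```

In the reservoir variant discussed in the abstract, one end of the temperature ladder exchanges with a pre-generated reservoir of structures instead of a continuously simulated replica, which is where the reported cost savings come from.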
Non-adaptive and adaptive hybrid approaches for enhancing water quality management
NASA Astrophysics Data System (ADS)
Kalwij, Ineke M.; Peralta, Richard C.
2008-09-01
Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, that combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal for optimization commencement can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates.
The parameters are useful starting values because using trial-and-error to identify an ideal combination of control parameter values for a new optimization problem can be time consuming. For comparison, AGA, AGCT, and GC are applied to optimize pumping rates for assumed well locations of a complex large-scale contaminant transport and remediation optimization problem at Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA. GC averaged 18.79% better convergence than AGCT, and 31.9% better than AGA, within the same computation time (12.5 days). AGCT averaged 13.1% better convergence than AGA. The GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models within a limited time period and for real-world optimization problems. Although demonstrated for a groundwater quality problem, it is also applicable to other arenas, such as managing salt water intrusion and surface water contaminant loading.
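The core TS mechanism grafted onto the GA, rejecting offspring that fall too close to recently evaluated strategies, can be sketched as follows (a toy reconstruction; the Euclidean distance metric, tabu-list size, and threshold are illustrative stand-ins for the paper's TS control parameters):

```python
import random

def is_tabu(candidate, tabu_list, threshold):
    """An offspring is tabu if it lies closer than `threshold` (Euclidean
    distance in pumping-rate space) to any recently evaluated strategy."""
    return any(sum((a - b) ** 2 for a, b in zip(candidate, s)) ** 0.5 < threshold
               for s in tabu_list)

def mutate(parent, step, rng):
    """GA-style mutation: perturb every decision variable of the parent."""
    return [g + rng.uniform(-step, step) for g in parent]

# Toy loop: propose offspring, skip tabu ones, and remember accepted
# strategies on a fixed-size tabu list.
rng = random.Random(0)
tabu_list, tabu_size, threshold = [], 5, 0.1
parent, accepted = [1.0, 2.0], []
while len(accepted) < 10:
    child = mutate(parent, 1.0, rng)
    if is_tabu(child, tabu_list, threshold):
        continue                        # force the search into unvisited regions
    accepted.append(child)
    tabu_list.append(child)
    tabu_list = tabu_list[-tabu_size:]  # fixed size in AGCT; adapted at run time in GC
```

The GC variant described above would additionally adjust `tabu_size` and `threshold` during the run based on convergence feedback, rather than keeping them fixed.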
ERIC Educational Resources Information Center
Roelofs, Ardi
2010-01-01
Disagreement exists about whether color-word Stroop facilitation is caused by converging information (e.g., Cohen et al., 1990; Roelofs, 2003) or inadvertent reading (MacLeod & MacDonald, 2000). Four experiments tested between these hypotheses by examining Stroop effects on response time (RT) both within and between languages. Words cannot be…
Teacher Unions, School Districts, Universities, Governments: Time to Tango and Promote Convergence?
ERIC Educational Resources Information Center
Naylor, Charlie
2007-01-01
This paper considers "convergence" as deliberate acts of will to achieve common goals within the context of the education service in general and school sector industrial relations in particular. Such language is unusual in the field of industrial relations, where assumptions are often based on notions of conflictual relationships.…
NASA Astrophysics Data System (ADS)
Le Breton, E.; Handy, M.; Ustaszewski, K. M.
2015-12-01
The Adriatic microplate (Adria) is a key player in the geodynamics of the Western Mediterranean area because it separates two major plates, Africa and Europe, that have been converging since Late Cretaceous time. Today, Adria comprises only continental lithosphere and is surrounded by zones of distributed deformation along convergent boundaries (Alps, Apennines, Calabrian Arc, Dinarides-Hellenides) and back-arc basins (Liguro-Provencal, Tyrrhenian). For a long time, Adria was thought to be a promontory of Africa and thus to have moved coherently with Africa. However, recent re-evaluation of geological and geophysical data from the Alps yields an independent motion path for Adria that features a significant change in the direction and rate of its motion relative to both Africa and Europe since Late Cretaceous time. To evaluate this, we first compare existing plate reconstructions of the Western Mediterranean to develop a best-fit model for the motion of Africa, Iberia and the Corsica-Sardinia block relative to Europe. We then use two motion models for Adria in which Adria moved either coherently with or independently of Africa since Late Cretaceous time. The model for independent Adria motion is further constrained by new estimates of extension and shortening in the Western Mediterranean and Northern Apennines based on field observations and recently published Moho depth maps, seismic profiles along the Gulf of Lion-Sardinian passive margins and the Northern Apennines. Initial results suggest that Miocene extension and opening of the Liguro-Provencal basin exceeds Miocene-to-Recent shortening related to roll-back subduction in the Northern Apennines; we attribute this to counter-clockwise rotation of the Adriatic plate with respect to Europe.
Combined with the previously published estimates of shortening in the Alps, this counter-clockwise motion is predicted to have produced significantly less post-Paleogene, orogen-normal shortening in the Dinarides than previously thought. This modified motion path for Adria raises the question of what forces drive its motion; so far, the most likely explanation invokes a combination of trench suction and slab pull along the northern borders of Adria in Late Cretaceous-Paleogene time, transitional to push from Africa since Early Miocene time.
O'Brien, Haley D; Faith, J Tyler; Jenkins, Kirsten E; Peppe, Daniel J; Plummer, Thomas W; Jacobs, Zenobia L; Li, Bo; Joannes-Boyau, Renaud; Price, Gilbert; Feng, Yue-Xing; Tryon, Christian A
2016-02-22
The fossil record provides tangible, historical evidence for the mode and operation of evolution across deep time. Striking patterns of convergence are some of the strongest examples of these operations, whereby, over time, similar environmental and/or behavioral pressures precipitate similarity in form and function between disparately related taxa. Here we present fossil evidence for an unexpected convergence between gregarious plant-eating mammals and dinosaurs. Recent excavations of Late Pleistocene deposits on Rusinga Island, Kenya, have uncovered a catastrophic assemblage of the wildebeest-like bovid Rusingoryx atopocranion. Previously known from fragmentary material, these new specimens reveal large, hollow, osseous nasal crests: a craniofacial novelty for mammals that is remarkably comparable to the nasal crests of lambeosaurine hadrosaur dinosaurs. Using adult and juvenile material from this assemblage, as well as computed tomographic imaging, we investigate this convergence from morphological, developmental, functional, and paleoenvironmental perspectives. Our detailed analyses reveal broad parallels between R. atopocranion and basal Lambeosaurinae, suggesting that osseous nasal crests may require a highly specific combination of ontogeny, evolution, and environmental pressures in order to develop. Copyright © 2016 Elsevier Ltd. All rights reserved.
Algorithm for computing descriptive statistics for very large data sets and the exa-scale era
NASA Astrophysics Data System (ADS)
Beekman, Izaak
2017-11-01
An algorithm for Single-point, Parallel, Online, Converging Statistics (SPOCS) is presented. It is suited for in situ analysis that traditionally would be relegated to post-processing, and can be used to monitor the statistical convergence and estimate the error/residual in the quantity, which is also useful for uncertainty quantification. Today, data may be generated at an overwhelming rate by numerical simulations and proliferating sensing apparatuses in experiments and engineering applications. Monitoring descriptive statistics in real time lets costly computations and experiments be gracefully aborted if an error has occurred, and monitoring the level of statistical convergence allows them to be run for the shortest amount of time required to obtain good results. This algorithm extends work by Pébay (Sandia Report SAND2008-6212). Pébay's algorithms are recast into a converging delta formulation, with provably favorable properties. The mean, variance, covariances and arbitrary higher-order statistical moments are computed in one pass. The algorithm is tested using Sillero, Jiménez, & Moser's (2013, 2014) publicly available UPM high Reynolds number turbulent boundary layer data set, demonstrating numerical robustness, efficiency and other favorable properties.
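The single-pass, mergeable moment updates that SPOCS builds on (following Pébay's formulas for the mean and second moment) can be sketched as (an illustrative reconstruction, not the SPOCS code; the converging-delta formulation and higher moments are omitted):

```python
class OnlineStats:
    """Single-pass (online) mean/variance in the style of Pebay's update
    formulas: O(1) memory per stream, mergeable across parallel streams."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0  # m2 = sum of squared deviations

    def push(self, x):
        """Incorporate one new sample (Welford update)."""
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def merge(self, other):
        """Combine statistics accumulated by another (parallel) stream."""
        n = self.n + other.n
        d = other.mean - self.mean
        self.mean += d * other.n / n
        self.m2 += other.m2 + d * d * self.n * other.n / n
        self.n = n
        return self

    @property
    def variance(self):
        """Population variance of everything pushed/merged so far."""
        return self.m2 / self.n if self.n else float("nan")

# Two streams processed independently, then merged:
a, b = OnlineStats(), OnlineStats()
for x in [1.0, 2.0, 3.0]:
    a.push(x)
for x in [4.0, 5.0]:
    b.push(x)
a.merge(b)   # same result as one pass over [1, 2, 3, 4, 5]
```

Because the merge is exact, each parallel rank can accumulate its own moments and a reduction tree combines them, which is what makes the one-pass, in situ monitoring described in the abstract practical.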
Reliable spacecraft rendezvous without velocity measurement
NASA Astrophysics Data System (ADS)
He, Shaoming; Lin, Defu
2018-03-01
This paper investigates the problem of finite-time velocity-free autonomous rendezvous for spacecraft in the presence of external disturbances during the terminal phase. First, to address the lack of relative velocity measurements, a robust observer is proposed to estimate the unknown relative velocity information in finite time. It is shown that the effect of external disturbances on the estimation precision can be suppressed to a relatively low level. With the reconstructed velocity information, a finite-time output feedback control law is then formulated to stabilize the rendezvous system. Theoretical analysis and rigorous proof show that the relative position and its rate converge to a small compact region in finite time. Numerical simulations are performed to evaluate the performance of the proposed approach in the presence of external disturbances and actuator faults.
High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza R.; Nishikawa, Hiroaki
2014-01-01
In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergence over each physical time step, with typically less than five Newton iterations, was shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit residual equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: 1) reformulation of the source terms with their divergence forms, and 2) correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and the second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. The numerical results are then presented for both steady and time-dependent linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and capable of producing the solutions and the gradients to the design order of accuracy, with rapid convergence over each physical time step, typically less than ten Newton iterations.
Inexact adaptive Newton methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertiger, W.I.; Kelsey, F.J.
1985-02-01
The Inexact Adaptive Newton method (IAN) is a modification of the Adaptive Implicit Method (AIM) with improved Newton convergence. Both methods simplify the Jacobian at each time step by zeroing coefficients in regions where saturations are changing slowly. The methods differ in how the diagonal block terms are treated. On test problems with up to 3,000 cells, IAN consistently saves approximately 30% of the CPU time when compared to the fully implicit method. AIM shows similar savings on some problems, but takes as much CPU time as fully implicit on other test problems due to poor Newton convergence.
2009-01-01
Background The Neotropical ovenbird-woodcreeper family (Furnariidae) is an avian group characterized by exceptionally diverse ecomorphological adaptations. For instance, members of the family are known to construct nests of a remarkable variety. This offers a unique opportunity to examine whether changes in nest design, accompanied by expansions into new habitats, facilitate diversification. We present a multi-gene phylogeny and age estimates for the ovenbird-woodcreeper family and use these results to estimate the degree of convergent evolution in both phenotype and habitat utilisation. Furthermore, we discuss whether variation in species richness among ovenbird clades could be explained by differences in clade-specific diversification rates, and whether these rates differ among lineages with different nesting habits. In addition, the systematic positions of some enigmatic ovenbird taxa and the postulated monophyly of some species-rich genera are evaluated. Results The phylogenetic results reveal new examples of convergent evolution and show that ovenbirds have independently colonized open habitats at least six times. The calculated age estimates suggest that the ovenbird-woodcreeper family started to diverge at ca. 33 Mya, and that the timing of habitat shifts into open environments may be correlated with the aridification of South America during the last 15 My. The results also show that observed large differences in species richness among clades can be explained by a substantial variation in net diversification rates. The synallaxines, which generally are adapted to dry habitats and build exposed vegetative nests, had the highest diversification rate of all major furnariid clades. Conclusion Several key features may have played an important role for the radiation and evolution of convergent phenotypes in the ovenbird-woodcreeper family.
Our results suggest that changes in nest building strategy and adaptation to novel habitats may have played an important role in a diversification that included multiple radiations into more open and bushy environments. The synallaxines were found to have had a particularly high diversification rate, which may be explained by their ability to build exposed vegetative nests and thus to expand into a variety of novel habitats that emerged during a period of cooling and aridification in South America. PMID:19930590
Irestedt, Martin; Fjeldså, Jon; Dalén, Love; Ericson, Per G P
2009-11-21
NASA Astrophysics Data System (ADS)
Li, Jiao; Hu, Guijun; Gong, Caili; Li, Li
2018-02-01
In this paper, we propose a hybrid time-frequency domain sign-sign joint decision multimodulus algorithm (Hybrid-SJDMMA) for mode demultiplexing in a 6 × 6 mode division multiplexing (MDM) system with high-order QAM modulation. The equalization performance of Hybrid-SJDMMA was evaluated and compared with the frequency domain multimodulus algorithm (FD-MMA) and the hybrid time-frequency domain sign-sign multimodulus algorithm (Hybrid-SMMA). Simulation results revealed that Hybrid-SJDMMA exhibits a significantly lower computational complexity than FD-MMA, while its convergence speed is similar to that of FD-MMA. Additionally, the bit-error-rate performance of Hybrid-SJDMMA was clearly better than that of FD-MMA and Hybrid-SMMA for 16 QAM and 64 QAM.
Optimal sixteenth order convergent method based on quasi-Hermite interpolation for computing roots.
Zafar, Fiza; Hussain, Nawab; Fatimah, Zirwah; Kharal, Athar
2014-01-01
We present a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed using quasi-Hermite interpolation and has order of convergence sixteen. As this method requires four function evaluations and one derivative evaluation at each step, it is optimal in the sense of the Kung and Traub conjecture. Comparisons are given with other newly developed sixteenth-order methods. Interval Newton's method is also used to find sufficiently accurate initial approximations. Figures show the enclosure of finitely many zeros of nonlinear equations in an interval. Basins of attraction show the effectiveness of the method.
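The sixteenth-order scheme itself is not reproduced here, but the notion of computational order of convergence used to verify such multipoint methods can be sketched with plain Newton iteration (order two). The test equation and starting point below are illustrative assumptions.

```python
import math

# Hedged sketch: estimate the computational order of convergence of an
# iterative root finder, here plain Newton's method (order two) on the
# illustrative equation f(x) = x**3 - 2 = 0.

def newton(f, df, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

f = lambda x: x ** 3 - 2.0
df = lambda x: 3.0 * x ** 2
xs = newton(f, df, 1.5, 5)
root = 2.0 ** (1.0 / 3.0)

# Computational order of convergence from successive errors e_n = |x_n - r|:
#   p ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})
errs = [abs(x - root) for x in xs]
p = math.log(errs[4] / errs[3]) / math.log(errs[3] / errs[2])
# p comes out close to 2, Newton's theoretical order
```

Applying the same diagnostic to a sixteenth-order method would show p approaching sixteen, provided the arithmetic carries enough precision to resolve the final errors.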
Structural and reliability analysis of quality of relationship index in cancer patients.
Cousson-Gélie, Florence; de Chalvron, Stéphanie; Zozaya, Carole; Lafaye, Anaïs
2013-01-01
Among psychosocial factors affecting emotional adjustment and quality of life, social support is one of the most important and widely studied in cancer patients, but little is known about the perception of support in specific significant relationships in patients with cancer. This study examined the psychometric properties of the Quality of Relationship Inventory (QRI) by evaluating its factor structure and its convergent and discriminant validity in a sample of cancer patients. A total of 388 patients completed the QRI. Convergent validity was evaluated by testing the correlations between the QRI subscales and measures of general social support, anxiety and depression symptoms. Discriminant validity was examined through group comparisons. The QRI's longitudinal invariance across time was also tested. Principal axis factor analysis with promax rotation identified three factors accounting for 42.99% of variance: perceived social support, depth, and interpersonal conflict. Estimates of reliability with McDonald's ω coefficient were satisfactory for all the QRI subscales (ω ranging from 0.75 to 0.85). Satisfaction with general social support was negatively correlated with the interpersonal conflict subscale and positively with the depth subscale. The interpersonal conflict and social support scales were correlated with depression and anxiety scores. We also found relative stability of the QRI subscales (measured 3 months after the first evaluation) and differences between partner status and gender groups. The Quality of Relationship Inventory is a valid tool for assessing the quality of social support in a particular relationship for cancer patients.
Time-Domain Evaluation of Fractional Order Controllers’ Direct Discretization Methods
NASA Astrophysics Data System (ADS)
Ma, Chengbin; Hori, Yoichi
Fractional Order Control (FOC), in which the controlled systems and/or controllers are described by fractional order differential equations, has been applied to various control problems. Though FOC's theoretical superiority is not difficult to understand, its realization remains problematic. Since fractional order systems have an infinite dimension, a proper approximation by a finite difference equation is needed to realize a designed fractional order controller. In this paper, the existing direct discretization methods are evaluated by their convergence and by time-domain comparison with a baseline case. The proposed sampling time scaling property is used to calculate the baseline case with full memory length. This novel discretization method is based on the classical trapezoidal rule but with scaled sampling time. Comparative studies show that its good performance and simple algorithm make the Short Memory Principle method the most practical choice. FOC research is still at an early stage, but its applications in modeling and its robustness against nonlinearities are promising. In parallel with the development of FOC theories, applying FOC to various control problems is also crucially important and a top-priority issue.
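As a hedged sketch of the direct-discretization ideas being compared, the following fragment implements a Grünwald-Letnikov approximation of a fractional derivative together with a Short Memory Principle truncation. The test function f(t) = t, the step size, and the memory length are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a Grunwald-Letnikov direct discretization of a
# fractional derivative with a Short Memory Principle truncation.
# Test function, step size, and memory length are illustrative assumptions.

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j), computed recursively."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def frac_derivative(samples, alpha, h, memory=None):
    """Approximate D^alpha f at the last sample; 'memory' caps the history used."""
    n = len(samples) - 1
    if memory is not None:
        n = min(n, memory)  # Short Memory Principle: keep only recent samples
    w = gl_weights(alpha, n)
    return sum(w[j] * samples[-1 - j] for j in range(n + 1)) / h ** alpha

# Half-derivative of f(t) = t at t = 1 is 2*sqrt(1/pi) = 1.1284...
h = 1e-3
samples = [k * h for k in range(1001)]  # f(t) = t sampled on [0, 1]
full = frac_derivative(samples, 0.5, h)               # full memory length
short = frac_derivative(samples, 0.5, h, memory=500)  # truncated memory
```

Truncating the history bounds the per-step cost at the price of a small, quantifiable error, which is one reason a Short Memory Principle method can fare well in comparisons like the one above.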
A new ART code for tomographic interferometry
NASA Technical Reports Server (NTRS)
Tan, H.; Modarress, D.
1987-01-01
A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least squares solution for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
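The paper's iterative-refinement least-squares variant is not reproduced here, but the classical ART (Kaczmarz) row-action iteration that this family of codes builds on can be sketched as follows; the small linear system is an illustrative assumption.

```python
import numpy as np

# Hedged sketch of the classical ART (Kaczmarz) row-action iteration for
# A x = b. The paper's iterative-refinement least-squares variant is not
# reproduced, and the small system below is an illustrative assumption.

def art(A, b, sweeps=200, relax=1.0):
    """Each sweep projects the iterate onto the hyperplane of every row in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x = x + relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([5.0, 5.0])  # consistent system with exact solution x = [1, 2]
x = art(A, b)
```

For consistent data the iteration converges to the exact solution; for noisy (inconsistent) interferometric data a relaxation factor below one damps the cycling that otherwise limits accuracy.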
Psychometric evaluation of the Swedish version of Rosenberg's self-esteem scale.
Eklund, Mona; Bäckström, Martin; Hansson, Lars
2018-04-01
The widely used Rosenberg's self-esteem scale (RSES) has not been evaluated for psychometric properties in Sweden. This study aimed at analyzing its factor structure, internal consistency, criterion, convergent and discriminant validity, sensitivity to change, and whether a four-graded Likert-type response scale increased its reliability and validity compared to a yes/no response scale. People with mental illness participating in intervention studies to (1) promote everyday life balance (N = 223) or (2) remedy self-stigma (N = 103) were included. Both samples completed the RSES and questionnaires addressing quality of life and sociodemographic data. Sample 1 also completed instruments chosen to assess convergent and discriminant validity: self-mastery (convergent validity), level of functioning and occupational engagement (discriminant validity). Confirmatory factor analysis (CFA), structural equation modeling, and conventional inferential statistics were used. Based on both samples, the Swedish RSES formed one factor and exhibited high internal consistency (>0.90). The two response scales were equivalent. Criterion validity in relation to quality of life was demonstrated. RSES could distinguish between women and men (women scoring lower) and between diagnostic groups (people with depression scoring lower). Correlations >0.5 with variables chosen to reflect convergent validity and around 0.2 with variables used to address discriminant validity further highlighted the construct validity of RSES. The instrument also showed sensitivity to change. The Swedish RSES exhibited a one-component factor structure and showed good psychometric properties in terms of good internal consistency, criterion, convergent and discriminant validity, and sensitivity to change. The yes/no and the four-graded Likert-type response scales worked equivalently.
Operation quality assessment model for video conference system
NASA Astrophysics Data System (ADS)
Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian
2018-01-01
Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers the advantages of fast convergence and high prediction accuracy in contrast with a regularized BP neural network alone, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.
Globalization and Contemporary Fertility Convergence.
Hendi, Arun S
2017-09-01
The rise of the global network of nation-states has precipitated social transformations throughout the world. This article examines the role of political and economic globalization in driving fertility convergence across countries between 1965 and 2009. While past research has typically conceptualized fertility change as a country-level process, this study instead employs a theoretical and methodological framework that examines differences in fertility between pairs of countries over time. Convergence in fertility between pairs of countries is hypothesized to result from increased cross-country connectedness and cross-national transmission of fertility-related schemas. I investigate the impact of various cross-country ties, including ties through bilateral trade, intergovernmental organizations, and regional trade blocs, on fertility convergence. I find that globalization acts as a form of social interaction to produce fertility convergence. There is significant heterogeneity in the effects of different cross-country ties. In particular, trade with rich model countries, joint participation in the UN and UNESCO, and joining a free trade agreement all contribute to fertility convergence between countries. Whereas the prevailing focus in fertility research has been on factors producing fertility declines, this analysis highlights specific mechanisms (trade and connectedness through organizations) leading to greater similarity in fertility across countries. Globalization is a process that propels the spread of culturally laden goods and schemas impinging on fertility, which in turn produces fertility convergence.
Mishra, Asht M.; Pal, Ajay; Gupta, Disha
2017-01-01
Key points: Pairing motor cortex stimulation and spinal cord epidural stimulation produced large augmentation in motor cortex evoked potentials if they were timed to converge in the spinal cord. The modulation of cortical evoked potentials by spinal cord stimulation was largest when the spinal electrodes were placed over the dorsal root entry zone. Repeated pairing of motor cortex and spinal cord stimulation caused lasting increases in evoked potentials from both sites, but only if the time between the stimuli was optimal. Both immediate and lasting effects of paired stimulation are likely mediated by convergence of descending motor circuits and large diameter afferents onto common interneurons in the cervical spinal cord. Abstract: Convergent activity in neural circuits can generate changes at their intersection. The rules of paired electrical stimulation are best understood for protocols that stimulate input circuits and their targets. We took a different approach by targeting the interaction of descending motor pathways and large diameter afferents in the spinal cord. We hypothesized that pairing stimulation of motor cortex and cervical spinal cord would strengthen motor responses through their convergence. We placed epidural electrodes over motor cortex and the dorsal cervical spinal cord in rats; motor evoked potentials (MEPs) were measured from biceps. MEPs evoked from motor cortex were robustly augmented with spinal epidural stimulation delivered at an intensity below the threshold for provoking an MEP. Augmentation was critically dependent on the timing and position of spinal stimulation. When the spinal stimulation was timed to coincide with the descending volley from motor cortex stimulation, MEPs were more than doubled. We then tested the effect of repeated pairing of motor cortex and spinal stimulation. Repetitive pairing caused strong augmentation of cortical MEPs and spinal excitability that lasted up to an hour after just 5 min of pairing. 
Additional physiology experiments support the hypothesis that paired stimulation is mediated by convergence of descending motor circuits and large diameter afferents in the spinal cord. The large effect size of this protocol and the conservation of the circuits being manipulated between rats and humans makes it worth pursuing for recovery of sensorimotor function after injury to the central nervous system. PMID:28752624
Mishra, Asht M; Pal, Ajay; Gupta, Disha; Carmel, Jason B
2017-11-15
A Variational Assimilation Method for Satellite and Conventional Data: a Revised Basic Model 2B
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.; Scott, Robert W.; Chen, J.
1991-01-01
A variational objective analysis technique that modifies observations of temperature, height, and wind on the cyclone scale to satisfy the five 'primitive' model forecast equations is presented. This analysis method overcomes the problems that hindered previous versions, such as over-determination, time consistency, solution method, and constraint decoupling. A preliminary evaluation of the method shows that it converges rapidly, the divergent part of the wind is strongly coupled in the solution, fields of height and temperature are well preserved, and derivative quantities such as vorticity and divergence are improved. Problem areas are systematic increases in the horizontal velocity components and large magnitudes of their local tendencies. The preliminary evaluation notes these problems, but the detailed evaluations required to determine their origin await future research.
Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.
Wei, Qinglai; Liu, Derong; Lin, Hanquan
2016-03-01
In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
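The core value iteration recursion, V_{k+1}(x) = min_u [U(x,u) + V_k(F(x,u))], can be illustrated on a toy discrete-state problem. The paper's setting (continuous nonlinear systems, neural-network approximators, termination criteria) is not reproduced; the states, actions, and costs below are illustrative assumptions.

```python
# Hedged toy illustration of the value iteration recursion
#   V_{k+1}(x) = min_u [ U(x, u) + V_k(F(x, u)) ]
# on a small discrete-state problem. States, actions, and the cost
# function are illustrative assumptions, not the paper's system.

def value_iteration(states, actions, step, cost, v0, sweeps=100):
    v = {s: v0(s) for s in states}
    for _ in range(sweeps):
        v = {s: min(cost(s, a) + v[step(s, a)] for a in actions)
             for s in states}
    return v

# 1-D "walk to the origin" problem on states {0, ..., 5}: each move costs
# 1 until the zero-cost goal state 0 is reached.
states = range(6)
actions = (-1, 0, 1)
step = lambda s, a: min(max(s + a, 0), 5)
cost = lambda s, a: 0.0 if s == 0 else 1.0
v = value_iteration(states, actions, step, cost, v0=lambda s: 0.0)
# v[s] converges to s, the number of unit-cost steps to the origin
```

Starting from the zero function, the iterates here are monotonically nondecreasing toward the optimum, one of the three initialization-dependent behaviors (nonincreasing, nondecreasing, nonmonotonic) analyzed in the paper.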
Liu, Qingshan; Wang, Jun
2011-04-01
This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
A genuine nonlinear approach for controller design of a boiler-turbine system.
Yang, Shizhong; Qian, Chunjiang; Du, Haibo
2012-05-01
This paper proposes a genuine nonlinear approach for controller design of a drum-type boiler-turbine system. Based on a second order nonlinear model, a finite-time convergent controller is first designed to drive the states to their setpoints in a finite time. In the case when the state variables are unmeasurable, the system will be regulated using a constant controller or an output feedback controller. An adaptive controller is also designed to stabilize the system since the model parameters may vary under different operating points. The novelty of the proposed controller design approach lies in fully utilizing the system nonlinearities instead of linearizing or canceling them. In addition, the newly developed techniques for finite-time convergent controller are used to guarantee fast convergence of the system. Simulations are conducted under different cases and the results are presented to illustrate the performance of the proposed controllers. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
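A hedged illustration of the finite-time convergence property such controllers rely on: the scalar fractional-power feedback xdot = -k*sign(x)*|x|^(1/2) drives the state to zero in the finite time 2*sqrt(|x0|)/k, whereas linear feedback only decays asymptotically. The gain, step size, and tolerance below are illustrative assumptions, not the paper's design.

```python
import math

# Hedged illustration of finite-time convergence: under the feedback
# xdot = -k * sign(x) * |x|**0.5 the state reaches zero in the finite
# time 2*sqrt(|x0|)/k. Gain and simulation parameters are assumptions.

def settle(x0, k=1.0, dt=1e-4, t_end=6.0, tol=1e-6):
    """Forward-Euler simulation; returns the state and the time |x| first hits tol."""
    x, t = x0, 0.0
    while t < t_end and abs(x) > tol:
        x -= dt * k * math.copysign(abs(x) ** 0.5, x)
        t += dt
    return x, t

x_final, t_hit = settle(4.0)
# t_hit lands close to the predicted settling time 2*sqrt(4)/1 = 4 s
```

The fractional power makes the decay rate grow relative to the state as the state shrinks, which is the mechanism behind the fast convergence guarantees cited above.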
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton, M.J.; Bourke, W.; Browning, G.L.
The convergence of spectral model numerical solutions of the global shallow-water equations is examined as a function of the time step and the spectral truncation. The contributions to the errors due to the spatial and temporal discretizations are separately identified and compared. Numerical convergence experiments are performed with the inviscid equations from smooth (Rossby-Haurwitz wave) and observed (R45 atmospheric analysis) initial conditions, and also with the diffusive shallow-water equations. Results are compared with the forced inviscid shallow-water equations case studied by Browning et al. Reduction of the time discretization error by the removal of fast waves from the solution using initialization is shown. The effects of forcing and diffusion on the convergence are discussed. Time truncation errors are found to dominate when a feature is large scale and well resolved; spatial truncation errors dominate for small-scale features and also for large scales after the small scales have affected them. Possible implications of these results for global atmospheric modeling are discussed. 31 refs., 14 figs., 4 tabs.
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.
2012-01-01
Background The Achilles tendon Total Rupture Score was developed by a research group in 2007 in response to the need for a patient reported outcome measure for this patient population. Beyond this original development paper, no further validation studies have been published. Consequently the purpose of this study was to evaluate internal consistency, convergent validity and responsiveness of this newly developed patient reported outcome measure within patients who have sustained an isolated acute Achilles tendon rupture. Methods Sixty-four eligible patients with an acute rupture of their Achilles tendon completed the Achilles tendon Total Rupture Score alongside two further patient reported outcome measures (Disability Rating Index and EQ 5D). These were completed at baseline, six weeks, three months, six months and nine months post injury. The Achilles tendon Total Rupture Score was evaluated for internal consistency (using Cronbach's alpha), convergent validity (through correlation analysis) and responsiveness (by analysing floor and ceiling effects and calculating its relative efficiency in comparison to the Disability Rating Index and EQ 5D scores). Results The Achilles tendon Total Rupture Score demonstrated high internal consistency (Cronbach's alpha > 0.8) and correlated significantly (p < 0.001) with the Disability Rating Index at five time points (pre-injury, six weeks, three, six and nine months) with correlation coefficients between -0.5 and -0.9. However, the confidence intervals were wide. Furthermore, the ability of the new score to detect clinically important changes over time (responsiveness) was shown to be greater than that of the Disability Rating Index and EQ 5D. Conclusions A universally accepted outcome measure is imperative to allow comparisons to be made across practice. This is the first study to evaluate aspects of validity of this newly developed outcome measure, outside of the developing centre. 
The ATRS demonstrated high internal consistency and responsiveness, with limited convergent validity. This research provides further support for the use of this outcome measure; however, further research is required to advocate its universal use in patients with acute Achilles tendon ruptures. Such areas include inter-rater reliability and research to determine the minimally clinically important difference between scores. This research was funded by Arthritis Research UK; no conflicts of interest have been declared by the authors. PMID:22376047
Supervised self-organization of homogeneous swarms using ergodic projections of Markov chains.
Chattopadhyay, Ishanu; Ray, Asok
2009-12-01
This paper formulates a self-organization algorithm to address the problem of global behavior supervision in engineered swarms of arbitrarily large population sizes. The swarms considered in this paper are assumed to be homogeneous collections of independent identical finite-state agents, each of which is modeled by an irreducible finite Markov chain. The proposed algorithm computes the necessary perturbations in the local agents' behavior, which guarantees convergence to the desired observed state of the swarm. The ergodicity property of the swarm, which is induced as a result of the irreducibility of the agent models, implies that while the local behavior of the agents converges to the desired behavior only in the time average, the overall swarm behavior converges to the specification and stays there at all times. A simulation example illustrates the underlying concept.
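The supervision scheme rests on each agent being an irreducible finite Markov chain whose time-averaged behavior is its stationary distribution. As a minimal, self-contained illustration of that ergodicity property (a two-state agent model with illustrative transition probabilities, not the paper's swarm algorithm), the sketch below computes the stationary distribution by power iteration:

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Power iteration for the stationary distribution of an irreducible,
    aperiodic row-stochastic matrix P (the agent's ergodic projection)."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi

# Two-state agent model (illustrative transition probabilities): the time
# average of each agent's behavior, and hence the swarm-level average,
# converges to this distribution.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = stationary_distribution(P)
print(pi)  # [2/3, 1/3] for this P
```

Perturbing the rows of P shifts this stationary distribution, which is the lever the paper's algorithm uses to steer the observed swarm state.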
Resolvent estimates in homogenisation of periodic problems of fractional elasticity
NASA Astrophysics Data System (ADS)
Cherednichenko, Kirill; Waurick, Marcus
2018-03-01
We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.
Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.
Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L
2017-10-01
The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. By using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
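As a hedged, generic illustration of the data-based idea (tabular Q-learning on a toy two-state MDP of my own construction, not the paper's policy-gradient actor-critic), the following shows a Q-function sequence converging to the optimal Q-function from sampled transitions alone:

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative): action 0 stays, action 1 switches;
# reward 1 whenever the current state is 1. Discount gamma = 0.9.
rng = np.random.default_rng(0)
gamma, alpha = 0.9, 0.5
Q = np.zeros((2, 2))
s = 0
for _ in range(5000):
    a = int(rng.integers(2))            # persistent random exploration
    r = float(s == 1)
    s2 = s if a == 0 else 1 - s
    # model-free update: improve Q using only the sampled transition
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2
# For this MDP the optimum is Q*(1, stay) = 1/(1 - gamma) = 10.
print(Q)
```

Because the toy MDP is deterministic, the Q-iterates settle exactly on the optimal Q-function, mirroring the convergence statement in the abstract at a much smaller scale.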
Variable input observer for structural health monitoring of high-rate systems
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob
2017-02-01
The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs to 10 ms, where speed of detection is critical to maintain structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates defined by both a time delay and an embedding dimension, the proper input space is chosen, which allows more accurate estimation of the current state and a reduction of the convergence time. The optimal time delay is estimated based on mutual information, and the embedding dimension is based on false nearest neighbors. A simulation of the VIO is conducted on a two degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman Filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed time-delay observers. It performed similarly to the Kalman Filter in terms of convergence, but with greater accuracy.
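The delay selection the abstract describes (first minimum of average mutual information) can be sketched as follows; the histogram estimator, bin count, and sine test signal are illustrative choices, not the authors' implementation:

```python
import numpy as np

def average_mutual_information(x, lag, bins=16):
    """Histogram estimate of the mutual information between x(t) and x(t+lag)."""
    pxy, _, _ = np.histogram2d(x[:-lag], x[lag:], bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / np.outer(px, py)[mask])).sum())

# Illustrative signal: a sine sampled at 200 points per period, so the first
# AMI minimum is expected near a quarter period (lag ~ 50).
x = np.sin(np.linspace(0, 40 * np.pi, 4000))
ami = [average_mutual_information(x, k) for k in range(1, 60)]
# First local minimum of the AMI curve -> candidate embedding delay.
delay = next(k + 1 for k in range(1, len(ami) - 1)
             if ami[k] < ami[k - 1] and ami[k] <= ami[k + 1])
```

Coupled with a false-nearest-neighbors sweep for the embedding dimension, this gives the two parameters that define the VIO's time-delayed input space.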
Are there ergodic limits to evolution? Ergodic exploration of genome space and convergence
McLeish, Tom C. B.
2015-01-01
We examine the analogy between evolutionary dynamics and statistical mechanics to include the fundamental question of ergodicity—the representative exploration of the space of possible states (in the case of evolution this is genome space). Several properties of evolutionary dynamics are identified that allow a generalization of the ergodic dynamics, familiar in dynamical systems theory, to evolution. Two classes of evolved biological structure then arise, differentiated by the qualitative duration of their evolutionary time scales. The first class has an ergodicity time scale (the time required for representative genome exploration) longer than available evolutionary time, and has incompletely explored the genotypic and phenotypic space of its possibilities. This case generates no expectation of convergence to an optimal phenotype or possibility of its prediction. The second, more interesting, class exhibits an evolutionary form of ergodicity—essentially all of the structural space within the constraints of slower evolutionary variables has been sampled; the ergodicity time scale for the system evolution is less than the evolutionary time. In this case, some convergence towards similar optima may be expected for equivalent systems in different species where both possess ergodic evolutionary dynamics. When the fitness maximum is set by physical, rather than co-evolved, constraints, it is additionally possible to make predictions of some properties of the evolved structures and systems. We propose four structures that emerge from evolution within genotypes whose fitness is induced from their phenotypes. Together, these result in an exponential speeding up of evolution, when compared with complete exploration of genomic space. We illustrate a possible case of application and a prediction of convergence together with attaining a physical fitness optimum in the case of invertebrate compound eye resolution. PMID:26640648
Are there ergodic limits to evolution? Ergodic exploration of genome space and convergence.
McLeish, Tom C B
2015-12-06
We examine the analogy between evolutionary dynamics and statistical mechanics to include the fundamental question of ergodicity-the representative exploration of the space of possible states (in the case of evolution this is genome space). Several properties of evolutionary dynamics are identified that allow a generalization of the ergodic dynamics, familiar in dynamical systems theory, to evolution. Two classes of evolved biological structure then arise, differentiated by the qualitative duration of their evolutionary time scales. The first class has an ergodicity time scale (the time required for representative genome exploration) longer than available evolutionary time, and has incompletely explored the genotypic and phenotypic space of its possibilities. This case generates no expectation of convergence to an optimal phenotype or possibility of its prediction. The second, more interesting, class exhibits an evolutionary form of ergodicity-essentially all of the structural space within the constraints of slower evolutionary variables has been sampled; the ergodicity time scale for the system evolution is less than the evolutionary time. In this case, some convergence towards similar optima may be expected for equivalent systems in different species where both possess ergodic evolutionary dynamics. When the fitness maximum is set by physical, rather than co-evolved, constraints, it is additionally possible to make predictions of some properties of the evolved structures and systems. We propose four structures that emerge from evolution within genotypes whose fitness is induced from their phenotypes. Together, these result in an exponential speeding up of evolution, when compared with complete exploration of genomic space. We illustrate a possible case of application and a prediction of convergence together with attaining a physical fitness optimum in the case of invertebrate compound eye resolution.
Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation
NASA Astrophysics Data System (ADS)
Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.
2010-02-01
Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One of the specific applications is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation, since they generally yield superior results in terms of accuracy. But most fuzzy algorithms suffer from the drawback of a slow convergence rate, which makes the system practically infeasible. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is evaluated on real abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. Comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior results for the modified FCM algorithm in terms of the performance measures. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
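For context, a minimal standard fuzzy C-means iteration (alternating membership and center updates) is sketched below on synthetic 2-D data; the paper's quantization modification is not reproduced here:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, tol=1e-6, seed=0):
    """Standard fuzzy C-means: alternate fuzzy-membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = U ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        Unew = d ** (-2.0 / (m - 1.0))
        Unew /= Unew.sum(axis=1, keepdims=True)
        if np.abs(Unew - U).max() < tol:  # slow shrinkage of this gap is the
            U = Unew                      # convergence bottleneck in practice
            break
        U = Unew
    return U, centers

# Synthetic stand-in for image feature vectors: two well-separated 2-D blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(5.0, 0.3, (50, 2))])
U, centers = fcm(X)
```

On image-sized feature sets the membership/center alternation dominates the cost, which is why reducing the number of iterations (as the quantization idea aims to do) matters.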
A Robust and Efficient Method for Steady State Patterns in Reaction-Diffusion Systems
Lo, Wing-Cheong; Chen, Long; Wang, Ming; Nie, Qing
2012-01-01
An inhomogeneous steady state pattern of nonlinear reaction-diffusion equations with no-flux boundary conditions is usually computed by solving the corresponding time-dependent reaction-diffusion equations using temporal schemes. Nonlinear solvers (e.g., Newton’s method) take less CPU time in direct computation for the steady state; however, their convergence is sensitive to the initial guess, often leading to divergence or convergence to a spatially homogeneous solution. Systematic numerical exploration of spatial patterns of reaction-diffusion equations under different parameter regimes requires that the numerical method be efficient and robust to the initial condition or initial guess, with a better likelihood of convergence to an inhomogeneous pattern. Here, a new approach that combines the advantages of temporal schemes in robustness and Newton’s method in fast convergence in solving steady states of reaction-diffusion equations is proposed. In particular, an adaptive implicit Euler with inexact solver (AIIE) method is found to be much more efficient than temporal schemes and more robust in convergence than typical nonlinear solvers (e.g., Newton’s method) in finding the inhomogeneous pattern. Application of this new approach to two reaction-diffusion equations in one, two, and three spatial dimensions, along with direct comparisons to several other existing methods, demonstrates that AIIE is a more desirable method for searching inhomogeneous spatial patterns of reaction-diffusion equations in a large parameter space. PMID:22773849
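A rough sketch of the adaptive-implicit-Euler idea, assuming a toy 1-D Allen-Cahn-type equation u_t = D u_xx + u - u^3 with no-flux boundaries (not the paper's test systems, and with exact rather than inexact inner solves): implicit Euler steps are accepted when Newton succeeds and the step size is then doubled, so the marching gradually degenerates into Newton's method on the steady state itself:

```python
import numpy as np

def steady_state_aiie(n=40, D=0.02, tol=1e-9, seed=1):
    """Adaptive implicit Euler: accept a step when Newton converges and double
    dt, otherwise halve dt. For large dt the step collapses into Newton's
    method on f(u) = 0, i.e. on the steady state."""
    h = 1.0 / (n - 1)                       # no-flux Laplacian on [0, 1]
    L = np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)
    L[0, 1] = 2.0
    L[-1, -2] = 2.0
    L /= h * h
    f = lambda u: D * (L @ u) + u - u ** 3
    Jf = lambda u: D * L + np.diag(1.0 - 3.0 * u ** 2)
    u = 0.1 * np.random.default_rng(seed).standard_normal(n)
    dt = 1e-3
    for _ in range(400):
        if np.linalg.norm(f(u)) < tol:
            break
        v, ok = u.copy(), False
        for _ in range(20):                 # Newton on g(v) = v - u - dt f(v)
            g = v - u - dt * f(v)
            if np.linalg.norm(g) < 1e-12:
                ok = True
                break
            v -= np.linalg.solve(np.eye(n) - dt * Jf(v), g)
        if ok:
            u, dt = v, dt * 2.0             # accept and grow the step
        else:
            dt *= 0.5                       # reject and shrink the step
    return u, float(np.linalg.norm(f(u)))

u, res = steady_state_aiie()
```

This toy energy may well settle to a homogeneous state; the point of the sketch is the marching strategy (robust small steps early, Newton-like large steps late), not pattern formation.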
2017-06-01
SCHOOL OF ADVANCED AIR AND SPACE STUDIES, AIR UNIVERSITY, MAXWELL AIR FORCE BASE, ALABAMA, JUNE 2017. DISTRIBUTION A: Approved for public release. ... Each case study demonstrates how convergence and divergence are heavily influenced by public support and political will. Public support—a population’s... varies within each case study and takes cues from public support. The author concludes by illustrating how legislation had a minimal role in
A fast multigrid-based electromagnetic eigensolver for curved metal boundaries on the Yee mesh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, Carl A., E-mail: carl.bauer@colorado.edu; Werner, Gregory R.; Cary, John R.
For embedded boundary electromagnetics using the Dey–Mittra (Dey and Mittra, 1997) [1] algorithm, a special grad–div matrix constructed in this work allows use of multigrid methods for efficient inversion of Maxwell’s curl–curl matrix. Efficient curl–curl inversions are demonstrated within a shift-and-invert Krylov-subspace eigensolver (open-sourced at https://github.com/bauerca/maxwell) on the spherical cavity and the 9-cell TESLA superconducting accelerator cavity. The accuracy of the Dey–Mittra algorithm is also examined: frequencies converge with second-order error, and surface fields are found to converge with nearly second-order error. In agreement with previous work (Nieter et al., 2009) [2], neglecting some boundary-cut cell faces (as is required in the time domain for numerical stability) reduces frequency convergence to first-order and surface-field convergence to zeroth-order (i.e. surface fields do not converge). Additionally and importantly, neglecting faces can reduce accuracy by an order of magnitude at low resolutions.
Moisture convergence using satellite-derived wind fields - A severe local storm case study
NASA Technical Reports Server (NTRS)
Negri, A. J.; Vonder Haar, T. H.
1980-01-01
Five-minute interval 1-km resolution SMS visible channel data were used to derive low-level wind fields by tracking small cumulus clouds on NASA's Atmospheric and Oceanographic Information Processing System. The satellite-derived wind fields were combined with surface mixing ratios to derive horizontal moisture convergence in the prestorm environment of April 24, 1975. Storms began developing in an area extending from southwest Oklahoma to eastern Tennessee 2 h subsequent to the time of the derived fields. The maximum moisture convergence was computed to be 0.0022 g/kg per sec, and areas of low-level convergence of moisture were in general indicative of regions of severe storm genesis. The resultant moisture convergence fields derived from two wind sets 20 min apart were spatially consistent and reflected the mesoscale forcing of ensuing storm development. Results are discussed with regard to possible limitations in quantifying the relationship between low-level flow and satellite-derived cumulus motion in an antecedent storm environment.
On convergence of solutions to variational-hemivariational inequalities
NASA Astrophysics Data System (ADS)
Zeng, Biao; Liu, Zhenhai; Migórski, Stanisław
2018-06-01
In this paper we investigate the convergence behavior of the solutions to the time-dependent variational-hemivariational inequalities with respect to the data. First, we give an existence and uniqueness result for the problem, and then, deliver a continuous dependence result when all the data are subjected to perturbations. A semipermeability problem is given to illustrate our main results.
International trends in forest products consumption: is there convergence?
Joseph Buongiorno
2009-01-01
International data from 1961 to 2005 showed that the coefficient of variation of consumption per capita across countries had tended to decrease over time for all forest products except sawnwood. This convergence of per-capita consumption was confirmed by the trends in Theil's inequality coefficients: the distribution of forest products consumption across...
Evaluation results for intelligent transportation systems
DOT National Transportation Integrated Search
2000-11-09
This presentation covers the methods of evaluation set out for EC-funded ITS research and demonstration projects, known as the CONVERGE validation quality process and the lessons learned from that approach. The new approach to appraisal, which is bei...
Barnett, Jason; Watson, Jean -Paul; Woodruff, David L.
2016-11-27
Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge when integer variables are present. Here, we describe BBPH, a branch and bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent “wrapper” for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.
Converging prescription brand shares as evidence of physician learning.
Walker, Doug
2012-01-01
Within a drug category, there is an optimal brand the physician could choose to prescribe based on the patient's particular condition and characteristics. Physicians desire to prescribe the best brand for each patient for professional, moral, and legal reasons. Ideally, detailing provides information that supports this effort. This study finds that, over time, the proportion of prescriptions written for each brand moves toward a stable distribution--a convergence in which each brand's share in the category appears to match the proportion of prescription writing opportunities where the brand is the best choice for the patient. Detailing supports this convergence.
NASA Astrophysics Data System (ADS)
Xu, Jian-Feng; Luo, Yan-An; Li, Lei; Peng, Guang-Xiong
The properties of dense quark matter are investigated in the perturbation theory with a rapidly convergent matching-invariant running coupling. The fast convergence is mainly due to the resummation of an infinite number of known logarithmic terms in a compact form. The only parameter in this model, the ratio of the renormalization subtraction point to the chemical potential, is restricted to be about 2.64 according to the Witten-Bodmer conjecture, which gives the maximum mass and the biggest radius of quark stars to be, respectively, two times the solar mass and 11.7 km.
Distinguishing time-delayed causal interactions using convergent cross mapping
Ye, Hao; Deyle, Ethan R.; Gilarranz, Luis J.; Sugihara, George
2015-01-01
An important problem across many scientific fields is the identification of causal effects from observational data alone. Recent methods (convergent cross mapping, CCM) have made substantial progress on this problem by applying the idea of nonlinear attractor reconstruction to time series data. Here, we expand upon the technique of CCM by explicitly considering time lags. Applying this extended method to representative examples (model simulations, a laboratory predator-prey experiment, temperature and greenhouse gas reconstructions from the Vostok ice core, and long-term ecological time series collected in the Southern California Bight), we demonstrate the ability to identify different time-delayed interactions, distinguish between synchrony induced by strong unidirectional-forcing and true bidirectional causality, and resolve transitive causal chains. PMID:26435402
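A bare-bones version of the CCM computation (simplex-style cross-map prediction from a delay embedding) can be sketched on a coupled logistic-map pair in which x unidirectionally forces y; the map parameters and coupling strength are illustrative choices, not the paper's systems:

```python
import numpy as np

def embed(x, E, tau):
    """Rows are delay vectors (x_t, x_{t-tau}, ..., x_{t-(E-1)tau})."""
    rows = len(x) - (E - 1) * tau
    return np.column_stack([x[(E - 1 - j) * tau:(E - 1 - j) * tau + rows]
                            for j in range(E)])

def ccm_skill(source, target, E=2, tau=1, lib_sizes=(25, 100, 400)):
    """Cross-map the source series from the target's shadow manifold; rising
    skill with library size is the CCM signature that source forces target."""
    M = embed(target, E, tau)
    src = source[(E - 1) * tau:]
    skills = []
    for L in lib_sizes:
        preds, true = [], []
        for i in range(L):
            d = np.linalg.norm(M[:L] - M[i], axis=1)
            d[i] = np.inf
            nn = np.argsort(d)[:E + 1]              # simplex neighbors
            w = np.exp(-d[nn] / max(d[nn[0]], 1e-12))
            w /= w.sum()
            preds.append(w @ src[:L][nn])
            true.append(src[i])
        skills.append(float(np.corrcoef(preds, true)[0, 1]))
    return skills

# Coupled logistic maps: x forces y (coupling 0.3); y does not force x.
x, y = np.empty(500), np.empty(500)
x[0], y[0] = 0.4, 0.2
for t in range(499):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.3 * x[t])
skills = ccm_skill(x, y)
```

The extension described in the abstract adds a lag parameter to the cross-mapping step, so the skill can be profiled as a function of the assumed interaction delay.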
Study on transfer optimization of urban rail transit and conventional public transport
NASA Astrophysics Data System (ADS)
Wang, Jie; Sun, Quan Xin; Mao, Bao Hua
2018-04-01
This paper studies the timetable optimization of feeder connections between rail transit and conventional bus service at a shopping center. To connect with rail transit effectively and improve the coordination between the two modes, it is necessary to optimize departure intervals, shorten passenger transfer times, and improve the service level of public transit. A model for optimizing bus departure times is established with the objective of minimizing the total passenger waiting time and the number of bus departures, subject to constraints on transfer time, load factor, and public transport network spacing. The problem is solved using a genetic algorithm.
Evaluation of Turkish and Mathematics Curricula According to Value-Based Evaluation Model
ERIC Educational Resources Information Center
Duman, Serap Nur; Akbas, Oktay
2017-01-01
This study evaluated secondary school seventh-grade Turkish and mathematics programs using the Context-Input-Process-Product Evaluation Model based on student, teacher, and inspector views. The convergent parallel mixed method design was used in the study. Student values were identified using the scales for socio-level identification, traditional…
Clemens, Sheila M; Gailey, Robert S; Bennett, Christopher L; Pasquina, Paul F; Kirk-Sanchez, Neva J; Gaunaurd, Ignacio A
2018-03-01
The aim was to use a custom mobile application to evaluate the reliability and validity of the Component Timed-Up-and-Go test for assessing prosthetic mobility in people with lower limb amputation. Cross-sectional design. National conference for people with limb loss. A total of 118 people with non-vascular cause of lower limb amputation participated. Subjects had a mean age of 48 (±13.7) years and were an average of 10 years post amputation. Of them, 54% (n = 64) of subjects were male. None. The Component Timed-Up-and-Go was administered using a mobile iPad application, generating a total time to complete the test and five component times capturing each subtask (sit-to-stand transitions, linear gait, turning) of the standard timed-up-and-go test. The outcome underwent test-retest reliability analysis using intraclass correlation coefficients (ICCs) and convergent validity analyses through correlation with self-report measures of balance and mobility. The Component Timed-Up-and-Go exhibited excellent test-retest reliability, with ICCs ranging from .98 to .86 for total and component times. Evidence of discriminative validity resulted from significant differences in mean total times between people with transtibial (10.1 (SD: ±2.3)) and transfemoral (12.76 (SD: ±5.1)) amputation, as well as significant differences in all five component times (P < .05). Convergent validity of the Component Timed-Up-and-Go was demonstrated through moderate correlations with the PLUS-M (rs = -.56). The Component Timed-Up-and-Go is a reliable and valid clinical tool for detailed assessment of prosthetic mobility in people with non-vascular lower limb amputation. The iPad application provided a means to easily record data, contributing to clinical utility.
Hurtado, Esteban; Haye, Andrés; González, Ramiro; Manes, Facundo; Ibáñez, Agustín
2009-06-26
Several event related potential (ERP) studies have investigated the time course of different aspects of evaluative processing in social bias research. Various reports suggest that the late positive potential (LPP) is modulated by basic evaluative processes, and some reports suggest that in-/outgroup relative position affects ERP responses. In order to study possible LPP blending between facial race processing and semantic valence (positive or negative words), we recorded ERPs while indigenous and non-indigenous participants who were matched by age and gender performed an implicit association test (IAT). The task involved categorizing faces (ingroup and outgroup) and words (positive and negative). Since our paradigm implies an evaluative task with positive and negative valence association, a frontal distribution of LPPs similar to that found in previous reports was expected. At the same time, we predicted that LPP valence lateralization would be modulated not only by positive/negative associations but also by particular combinations of valence, face stimuli and participant relative position. Results showed that, during an IAT, indigenous participants with greater behavioral ingroup bias displayed a frontal LPP that was modulated in terms of complex contextual associations involving ethnic group and valence. The LPP was lateralized to the right for negative valence stimuli and to the left for positive valence stimuli. This valence lateralization was influenced by the combination of valence and membership type relevant to compatibility with prejudice toward a minority. Behavioral data from the IAT and an explicit attitudes questionnaire were used to clarify this finding and showed that ingroup bias plays an important role. Both ingroup favoritism and indigenous/non-indigenous differences were consistently present in the data. Our results suggest that frontal LPP is elicited by contextual blending of evaluative judgments of in-/outgroup information and positive vs. 
negative valence association and confirm recent research relating in-/outgroup ERP modulation and frontal LPP. LPP modulation may cohere with implicit measures of attitudes. The convergence of measures that were observed supports the idea that racial and valence evaluations are strongly influenced by context. This result adds to a growing set of evidence concerning contextual sensitivity of different measures of prejudice.
Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers
NASA Astrophysics Data System (ADS)
Sendersky, Dmitry
2000-10-01
The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.
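For reference, the penetration-law comparison amounts to a one-parameter least-squares fit of the coefficient in h = αAgt²; the sketch below fits synthetic, noise-free stand-in data (the Atwood number, gravity, and built-in α = 0.06 are illustrative, not the simulation's):

```python
import numpy as np

# Least-squares fit of the growth coefficient alpha in h = alpha * A * g * t^2.
A, g = 0.5, 9.81
t = np.linspace(0.5, 3.0, 26)
basis = A * g * t ** 2
h_obs = 0.06 * basis          # pretend mixing-width "measurements"
alpha = float(h_obs @ basis / (basis @ basis))
print(alpha)  # recovers the built-in 0.06
```

With real data, the abstract's caveats apply: the fitted α depends on the time window and on how the mixing width h is extracted.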
Barth, Amy E.; Stuebing, Karla K.; Fletcher, Jack M.; Cirino, Paul T.; Romain, Melissa; Francis, David; Vaughn, Sharon
2012-01-01
We evaluated the reliability and validity of two oral reading fluency scores for one-minute equated passages: median score and mean score. These scores were calculated from measures of reading fluency administered up to five times over the school year to students in grades 6–8 (n = 1,317). Both scores were highly reliable with strong convergent validity for adequately developing and struggling middle grade readers. These results support the use of either the median or mean score for oral reading fluency assessments for middle grade readers. PMID:23087532
Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang
2014-01-01
We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774
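A minimal sketch of the SPT-style ordering and the no-wait completion-time recursion it is evaluated with (release dates, which the paper includes, are omitted here for brevity):

```python
import numpy as np

def no_wait_total_completion(p, order):
    """Total completion time of a permutation schedule in a no-wait flow shop;
    p[j][m] is the processing time of job j on machine m."""
    p = np.asarray(p, dtype=float)
    M = p.shape[1]
    start = total = 0.0
    prev = None
    for j in order:
        if prev is not None:
            # smallest start offset that lets job j proceed with no waiting
            start += max(p[prev, :m + 1].sum() - p[j, :m].sum() for m in range(M))
        total += start + p[j].sum()
        prev = j
    return total

def spt_order(p):
    """SPT-style ordering: jobs sorted by total processing time."""
    totals = np.asarray(p, dtype=float).sum(axis=1)
    return list(np.argsort(totals, kind="stable"))

p = [[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]]
order = spt_order(p)          # all totals equal here -> stable order [0, 1, 2]
tct = no_wait_total_completion(p, order)
```

The asymptotic-optimality result says the gap between such an SPT-based schedule and the optimum vanishes (relatively) as the number of jobs grows; the local-search improvement step targets the moderate-size regime where the gap is still visible.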
Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems
NASA Astrophysics Data System (ADS)
Mahdi Alavi, S. M.; Saif, Mehrdad
2013-12-01
This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for the performance evaluation of over-wireless-network estimation techniques under realistic radio channel conditions.
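The estimation-under-dropout setting can be illustrated with a toy linear Luenberger-type observer that updates only when a Bernoulli "packet received" flag is true and otherwise predicts open loop; the matrices and gain are illustrative, not the paper's CSTR design:

```python
import numpy as np

# Illustrative stable discrete-time plant and hand-picked observer gain.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Lg = np.array([[0.5], [0.1]])

rng = np.random.default_rng(0)
x = np.array([1.0, -1.0])          # true state
xh = np.zeros(2)                   # observer estimate
for _ in range(200):
    y = C @ x
    if rng.random() < 0.7:         # measurement packet received (p = 0.7)
        xh = A @ xh + Lg @ (y - C @ xh)
    else:                          # packet lost: open-loop prediction
        xh = A @ xh
    x = A @ x
err = float(np.linalg.norm(x - xh))
```

Here both the update branch (A - LgC) and the open-loop branch (A) are contractive, so the error decays despite dropouts; the paper's sufficient conditions play the analogous role for the nonlinear stochastic case.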
Safe Onboard Guidance and Control Under Probabilistic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars James
2011-01-01
An algorithm was developed that determines the fuel-optimal spacecraft guidance trajectory that takes into account uncertainty, in order to guarantee that mission safety constraints are satisfied with the required probability. The algorithm uses convex optimization to solve for the optimal trajectory. Convex optimization is amenable to onboard solution due to its excellent convergence properties. The algorithm is novel because, unlike prior approaches, it does not require time-consuming evaluation of multivariate probability densities. Instead, it uses a new mathematical bounding approach to ensure that probability constraints are satisfied, and it is shown that the resulting optimization is convex. Empirical results show that the approach is many orders of magnitude less conservative than existing set conversion techniques, for a small penalty in computation time.
NASA Technical Reports Server (NTRS)
Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)
2002-01-01
The framework for constructing a high-order, conservative Spectral (Finite) Volume (SV) method is presented for two-dimensional scalar hyperbolic conservation laws on unstructured triangular grids. Each triangular grid cell forms a spectral volume (SV), and the SV is further subdivided into polygonal control volumes (CVs) to support high-order data reconstructions. Cell-averaged solutions from these CVs are used to reconstruct a high-order polynomial approximation in the SV. Each CV is then updated independently with a Godunov-type finite volume method and a high-order Runge-Kutta time integration scheme. A universal reconstruction is obtained by partitioning all SVs in a geometrically similar manner. The convergence of the SV method is shown to depend on how an SV is partitioned. A criterion based on the Lebesgue constant has been developed and used successfully to determine the quality of various partitions. Symmetric, stable, and convergent linear, quadratic, and cubic SVs have been obtained, and many different types of partitions have been evaluated. The SV method is tested on both linear and nonlinear model problems with and without discontinuities.
Sriram, Vinay K; Montgomery, Doug
2017-07-01
The Internet is subject to attacks due to vulnerabilities in its routing protocols. One proposed approach to attain greater security is to cryptographically protect network reachability announcements exchanged between Border Gateway Protocol (BGP) routers. This study proposes and evaluates the performance and efficiency of various optimization algorithms for validation of digitally signed BGP updates. In particular, this investigation focuses on the BGPSEC (BGP with SECurity extensions) protocol, currently under consideration for standardization in the Internet Engineering Task Force. We analyze three basic BGPSEC update processing algorithms: Unoptimized, Cache Common Segments (CCS) optimization, and Best Path Only (BPO) optimization. We further propose and study cache management schemes to be used in conjunction with the CCS and BPO algorithms. The performance metrics used in the analyses are: (1) routing table convergence time after BGPSEC peering reset or router reboot events and (2) peak-second signature verification workload. Both analytical modeling and detailed trace-driven simulation were performed. Results show that the BPO algorithm is 330% to 628% faster than the unoptimized algorithm for routing table convergence in a typical Internet core-facing provider edge router.
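The intuition behind the Cache Common Segments optimization can be sketched as memoization of per-segment verification results; this is an illustrative sketch only, not the evaluated implementation, and the segment labels and the `verify_segment` stand-in (which replaces a real ECDSA check) are invented for the example.

```python
# Cache Common Segments (CCS) idea: many BGPSEC updates share signed path
# segments, so a segment's signature needs cryptographic verification only
# once; later updates reuse the cached result.
seg_cache = {}
verify_calls = 0

def verify_segment(segment):
    """Stand-in for an expensive digital-signature verification."""
    global verify_calls
    verify_calls += 1
    return True  # assume the signature checks out

def validate_update(path_segments):
    results = []
    for seg in path_segments:
        if seg not in seg_cache:          # only uncached segments cost a check
            seg_cache[seg] = verify_segment(seg)
        results.append(seg_cache[seg])
    return all(results)

validate_update(("AS1->AS2", "AS2->AS3"))
validate_update(("AS1->AS2", "AS2->AS4"))  # first segment served from cache
print(verify_calls)  # 3 verifications instead of 4
```

During a peering reset, when thousands of updates arrive with heavily overlapping paths, this kind of reuse is what drives down both convergence time and the peak-second verification workload.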
Essentially nonoscillatory postprocessing filtering methods
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1992-01-01
High-order accurate centered flux approximations used in the computation of numerical solutions to nonlinear partial differential equations produce large oscillations in regions of sharp transitions. Here, we present a new class of filtering methods denoted Essentially Nonoscillatory Least Squares (ENOLS), which constructs an upgraded filtered solution that is close to the physically correct weak solution of the original evolution equation. Our method relies on the evaluation of a least squares polynomial approximation to oscillatory data using a set of points determined via the ENO network. Numerical results are given in one and two space dimensions for both scalar equations and systems of hyperbolic conservation laws. Computational running time, efficiency, and robustness of the method are illustrated in various examples, such as Riemann initial data for both Burgers' equation and the Euler equations of gas dynamics. In all standard cases, the filtered solution appears to converge numerically to the correct solution of the original problem. Some interesting results are also obtained using our filters on nonstandard central difference schemes, which exactly preserve entropy and have recently been shown generally not to be weakly convergent to a solution of the conservation law.
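A stripped-down version of the least-squares filtering idea can be sketched in a few lines; the sketch below omits the ENO-based stencil selection that is the paper's key ingredient and simply fits a fixed-width centered stencil, so it only illustrates how a low-degree least-squares fit damps grid-scale oscillations. The stencil width, degree, and test data are arbitrary choices for the illustration.

```python
import numpy as np

def ls_filter(u, half_width=3, degree=2):
    """Replace each interior value by the value at the stencil center of a
    low-degree polynomial fitted by least squares to its neighborhood."""
    v = u.copy()
    for i in range(half_width, len(u) - half_width):
        k = np.arange(-half_width, half_width + 1)   # centered offsets
        coeffs = np.polyfit(k, u[i + k], degree)
        v[i] = np.polyval(coeffs, 0.0)
    return v

x = np.linspace(0.0, 1.0, 101)
u = np.sin(2 * np.pi * x) + 0.05 * (-1.0) ** np.arange(101)  # odd-even noise
filtered = ls_filter(u)
err = np.abs(filtered[10:-10] - np.sin(2 * np.pi * x[10:-10])).max()
print(err)  # well below the 0.05 oscillation amplitude
```

The full ENOLS method would additionally choose the fitting stencil adaptively, so that it never straddles a discontinuity.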
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision-free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise, the analysis suffers from the classical state explosion problem. Embedded in the divide-and-conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency, and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs; e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems in which one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively, leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for performance analysis is introduced. In addition, delay equivalence, previously introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.
Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if a strong dependence of the mean completion time on the interarrival process exists.
Evaluation results for intelligent transport systems (ITS) : abstract
DOT National Transportation Integrated Search
2000-11-09
This paper summarizes the methods of evaluation set out for EC-funded ITS research and demonstration projects, known as the CONVERGE validation quality process and the lessons learned from that approach. The new approach to appraisal, which is being ...
Convergence speeding up in the calculation of the viscous flow about an airfoil
NASA Technical Reports Server (NTRS)
Radespiel, R.; Rossow, C.
1988-01-01
A finite volume method for solving the three-dimensional Navier-Stokes equations was developed. It is based on a cell-vertex scheme with central differences and explicit Runge-Kutta time steps. Good convergence to a stationary solution was obtained by the use of local time steps, implicit smoothing of the residuals, a multigrid algorithm, and a carefully controlled artificial dissipative term. The method is illustrated by results for transonic profiles and airfoils, and allows routine solution of the Navier-Stokes equations.
A dual method for optimal control problems with initial and final boundary constraints.
NASA Technical Reports Server (NTRS)
Pironneau, O.; Polak, E.
1973-01-01
This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.
A Mixed-Methods Longitudinal Evaluation of a One-Day Mental Health Wellness Intervention
ERIC Educational Resources Information Center
Doyle, Louise; de Vries, Jan; Higgins, Agnes; Keogh, Brian; McBennett, Padraig; O'Shea, Marié T.
2017-01-01
Objectives: This study evaluated the impact of a one-day mental health Wellness Workshop on participants' mental health and attitudes towards mental health. Design: Convergent, longitudinal mixed-methods approach. Setting: The study evaluated Wellness Workshops which took place throughout the Republic of Ireland. Method: Questionnaires measuring…
Simulation and Analysis of Converging Shock Wave Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-06-21
Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axisymmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem and minimally straining the general credibility of the associated analysis and conclusions.
On Some Parabolic Type Problems from Thin Film Theory and Chemical Reaction-Diffusion Networks
NASA Astrophysics Data System (ADS)
Mohamed, Fatma Naser Ali
This dissertation considers some parabolic-type problems from thin film theory and chemical reaction-diffusion networks. It consists of two parts. In the first part, we study the evolution of a thin film of fluid modeled by the lubrication approximation for thin viscous films. We prove the existence of (dissipative) strong solutions for the Cauchy problem when the sub-diffusive exponent ranges between 3/8 and 2; we then show that these solutions tend to zero at rates matching the decay of the source-type self-similar solutions with zero contact angle. We introduce the weaker concept of dissipative mild solutions and show that, in this case, the surface-tension energy dissipation is the mechanism responsible for the H1-norm decay to zero of the thickness of the film at an explicit rate. Relaxed problems, with second-order nonlinear terms of porous media type, are also successfully treated by the same means. [special characters omitted]. In the second part, we are concerned with the convergence of a certain space-discretization scheme, the so-called method of lines, for mass-action reaction-diffusion systems. First, we start with a toy model, namely [special characters omitted], and prove convergence of the method of lines in this linear case. Here weak convergence in L2(0,1) is enough to prove convergence of the method of lines. We then adopt the framework for convergence analysis introduced in [23] and concentrate on the proof-of-concept reaction within 1D space, while noting that our techniques are readily generalizable to other reaction-diffusion networks and to more than one space dimension. Indeed, it will be obvious how to extend our proofs to the multi-dimensional case; we only note that the proof of the comparison principle (the continuous and the discrete versions; see chapter 6) imposes a limitation on the spatial dimension (at most five; see [24] for details).
The Method of Lines (MOL) is not a mainstream numerical tool, and the specialized literature is rather scarce. The method amounts to discretizing evolutionary PDEs in space only, so it produces a semi-discrete numerical scheme consisting of a system of ODEs (in the time variable). To prove convergence of the semi-discrete MOL scheme to the original PDE, one needs to perform some more or less traditional analysis: it is necessary to show that the scheme is consistent with the continuous problem and that the discretized version of the spatial differential operator retains sufficient dissipative properties to allow an application of Gronwall's Lemma to the error term. As shown in [23], a uniform (in time) consistency estimate is sufficient to obtain convergence; however, the consistency estimate we proved is not uniform for small times, so we cannot directly employ the results in [23] to prove convergence in our case. Instead, we prove all the required estimates from scratch, then use their exact quantitative form to conclude convergence.
Precise orbit determination and rapid orbit recovery supported by time synchronization
NASA Astrophysics Data System (ADS)
Guo, Rui; Zhou, JianHua; Hu, XiaoGong; Liu, Li; Tang, Bo; Li, XiaoJie; Wu, Shan
2015-06-01
In order to maintain optimal signal coverage, GNSS satellites have to undergo orbital maneuvers. For China's COMPASS system, precise orbit determination (POD) as well as rapid orbit recovery after maneuvers contribute to the overall Positioning, Navigation and Timing (PNT) service performance in terms of accuracy and availability. However, strong statistical correlations between clock offsets and the radial component of a satellite's position require long data arcs for POD to converge. We propose here a new strategy that relies on time synchronization between ground tracking stations and in-orbit satellites. By fixing the satellite clock offsets measured by the satellite-station two-way synchronization (SSTS) system and the receiver clock offsets, POD and orbit recovery performance can be improved significantly. Using Satellite Laser Ranging (SLR) for orbital accuracy evaluation, we find that the 4-hr recovered orbit achieves about 0.71 m residual root mean square (RMS) error of fit to SLR data, and the recovery time is reduced from 24 hr to 4 hr compared with conventional POD without time synchronization support. In addition, SLR evaluation shows that about 1.47 m accuracy is achieved for 1-hr prediction with the newly proposed POD strategy.
NASA Astrophysics Data System (ADS)
Poole, Gregory B.; Mutch, Simon J.; Croton, Darren J.; Wyithe, Stuart
2017-12-01
We introduce GBPTREES: an algorithm for constructing merger trees from cosmological simulations, designed to identify and correct for pathological cases introduced by errors or ambiguities in the halo finding process. GBPTREES is built upon a halo matching method utilizing pseudo-radial moments constructed from radially sorted particle ID lists (no other information is required) and a scheme for classifying merger tree pathologies from networks of matches made to-and-from haloes across snapshots ranging forward-and-backward in time. Focusing on SUBFIND catalogues for this work, a sweep of parameters influencing our merger tree construction yields the optimal snapshot cadence and scanning range required for converged results. Pathologies proliferate when snapshots are spaced by ≲0.128 dynamical times; conveniently similar to that needed for convergence of semi-analytical modelling, as established by Benson et al. Total merger counts are converged at the level of ∼5 per cent for friends-of-friends (FoF) haloes of size np ≳ 75 across a factor of 512 in mass resolution, but substructure rates converge more slowly with mass resolution, reaching convergence of ∼10 per cent for np ≳ 100 and particle mass mp ≲ 109 M⊙. We present analytic fits to FoF and substructure merger rates across nearly all observed galactic history (z ≤ 8.5). While we find good agreement with the results presented by Fakhouri et al. for FoF haloes, a slightly flatter dependence on merger ratio and increased major merger rates are found, reducing previously reported discrepancies with extended Press-Schechter estimates. When appropriately defined, substructure merger rates show a similar mass ratio dependence as FoF rates, but with stronger mass and redshift dependencies for their normalization.
Binocular Vision in Chronic Fatigue Syndrome.
Godts, Daisy; Moorkens, Greta; Mathysen, Danny G P
2016-01-01
To compare binocular vision measurements between Chronic Fatigue Syndrome (CFS) patients and healthy controls. Forty-one CFS patients referred by the Reference Centre for Chronic Fatigue Syndrome of the Antwerp University Hospital and forty-one healthy volunteers, matched for age and gender, underwent a complete orthoptic examination. Data on visual acuity, eye position, fusion amplitude, stereopsis, ocular motility, convergence, and accommodation were compared between both groups. Patients with CFS showed highly significant smaller fusion amplitudes (P < 0.001), reduced convergence capacity (P < 0.001), and a smaller accommodation range (P < 0.001) compared to the control group. In patients with CFS, binocular vision, convergence, and accommodation should be routinely examined. CFS patients will benefit from reading glasses, either with or without prism correction, at an earlier stage compared to their healthy peers. Convergence exercises may be beneficial for CFS patients, despite the fact that they might be very tiring. Further research will be necessary to draw conclusions about the efficacy of treatment, especially regarding convergence exercises. To our knowledge, this is the first prospective study evaluating binocular vision in CFS patients. © 2016 Board of Regents of the University of Wisconsin System, American Orthoptic Journal, Volume 66, 2016, ISSN 0065-955X, E-ISSN 1553-4448.
Nonaka, Fumitaka; Hasebe, Satoshi; Ohtsuki, Hiroshi
2004-01-01
To evaluate the convergence accommodation to convergence (CA/C) ratio in strabismic patients and to clarify its clinical implications. Seventy-eight consecutive patients (mean age: 12.9 +/- 6.0 years) with intermittent exotropia and decompensated exophoria who showed binocular fusion at least at near viewing were recruited. The CA/C ratio was estimated by measuring accommodative responses induced by horizontal prisms with different magnitudes under accommodation feedback open-loop conditions. The CA/C ratios were compared with accommodative convergence to accommodation (AC/A) ratios and other clinical parameters. A linear regression analysis indicated that the mean (+/-SD) CA/C ratio was 0.080 +/- 0.043 D/prism diopter or 0.48 +/- 0.26 D/meter angle. There was no inverse or reciprocal relationship between CA/C and AC/A ratios. The patients with lower CA/C ratios tended to have smaller tonic accommodation under binocular viewing conditions and larger exodeviation at near viewing. The CA/C ratio, like the AC/A ratio, is an independent parameter that characterizes clinical features. A lower CA/C may be beneficial for the vergence control system to compensate for ocular misalignment with minimum degradation of accommodation accuracy.
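The linear-regression estimate of a CA/C ratio can be illustrated with a toy computation: the ratio is the slope of accommodative response (diopters) against prism-induced convergence demand (prism diopters). The response values below are fabricated for the illustration and are not the study's data.

```python
import numpy as np

# Hypothetical accommodative responses measured under increasing prism demand.
prism_pd = np.array([0.0, 4.0, 8.0, 12.0, 16.0])           # prism diopters
accom_response = np.array([0.02, 0.35, 0.66, 0.98, 1.30])  # diopters

# CA/C ratio = regression slope, in D per prism diopter.
slope, intercept = np.polyfit(prism_pd, accom_response, 1)
print(round(slope, 3))
```

A slope near 0.08 D/prism diopter would match the mean CA/C ratio reported in the abstract.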
Kim, DeokJu; Yang, YeongAe
2016-03-01
[Purpose] This study investigates the effects of welfare IT convergence contents on physical function, depression, and social participation among the elderly. It also aims to provide material for future activity mediation for the elderly. [Subjects] Two hundred subjects >65 years were selected from six elderly welfare facilities and related institutions in the Busan and Gyeongbuk areas and were evaluated from 2014 to 2015. [Methods] This study assessed physical function, depression, and social participation; 100 subjects who utilized commercialized welfare IT convergence contents were included in an experimental group and 100 subjects who had no experience thereof were included in a control group. [Results] When comparing differences in physical function between the groups, balance maintenance was better in the experimental group. There were also significant differences in depression and social participation. The experimental group displayed higher physical function, lower depression levels, and higher social participation levels compared to the control group. [Conclusion] Welfare IT convergence contents positively influence occupational performance in the elderly. Future research is necessary to provide information to the elderly through various routes, so that they can understand welfare IT convergence contents and actively utilize them.
Moen, Daniel S; Morlon, Hélène; Wiens, John J
2016-01-01
Striking evolutionary convergence can lead to similar sets of species in different locations, such as in cichlid fishes and Anolis lizards, and suggests that evolution can be repeatable and predictable across clades. Yet, most examples of convergence involve relatively small temporal and/or spatial scales. Some authors have speculated that at larger scales (e.g., across continents), differing evolutionary histories will prevent convergence. However, few studies have compared the contrasting roles of convergence and history, and none have done so at large scales. Here we develop a two-part approach to test the scale over which convergence can occur, comparing the relative importance of convergence and history in macroevolution using phylogenetic models of adaptive evolution. We apply this approach to data from morphology, ecology, and phylogeny from 167 species of anuran amphibians (frogs) from 10 local sites across the world, spanning ~160 myr of evolution. Mapping ecology on the phylogeny revealed that similar microhabitat specialists (e.g., aquatic, arboreal) have evolved repeatedly across clades and regions, producing many evolutionary replicates for testing for morphological convergence. By comparing morphological optima for clades and microhabitat types (our first test), we find that convergence associated with microhabitat use dominates frog morphological evolution, producing recurrent ecomorphs that together encompass all sampled species in each community in each region. However, our second test, which examines whether and how much species differ from their inferred optima, shows that convergence is incomplete: that is, phenotypes of most species are still somewhat distant from the estimated optimum for each microhabitat, seemingly because of insufficient time for more complete adaptation (an effect of history). Yet, these effects of history are related to past ecologies, and not clade membership. 
Overall, our study elucidates the dominant drivers of morphological evolution across a major vertebrate clade and shows that evolution can be repeatable at much greater temporal and spatial scales than commonly thought. It also provides an analytical framework for testing other potential examples of large-scale convergence. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Shinde, Satomi K.; Danov, Stacy; Chen, Chin-Chih; Clary, Jamie; Harper, Vicki; Bodfish, James W.; Symons, Frank J.
2014-01-01
Objectives The main aim of the study was to generate initial convergent validity evidence for the Pain and Discomfort Scale (PADS) for use with non-verbal adults with intellectual disabilities (ID). Methods Forty-four adults with intellectual disability (mean age = 46, 52 % male) were evaluated using a standardized sham-controlled and blinded sensory testing protocol, from which FACS and PADS scores were tested for (1) sensitivity to an array of calibrated sensory stimuli, (2) specificity (active vs. sham trials), and (3) concordance. Results The primary findings were that participants were reliably coded using both FACS and PADS approaches as being reactive to the sensory stimuli (FACS: F[2, 86] = 4.71, P < .05, PADS: F[2, 86] = 21.49, P < .05) (sensitivity evidence), not reactive during the sham stimulus trials (FACS: F[1, 43]= 3.77, p = .06, PADS: F[1, 43] = 5.87, p = .02) (specificity evidence), and there were significant (r = .41 – .51, p < .01) correlations between PADS and FACS (convergent validity evidence). Discussion FACS is an objective coding platform for facial expression. It requires intensive training and resources for scoring. As such it may be limited for clinical application. PADS was designed for clinical application. PADS scores were comparable to FACS scores under controlled evaluation conditions providing partial convergent validity evidence for its use. PMID:24135902
Shiota, T; Jones, M; Teien, D E; Yamada, I; Passafini, A; Ge, S; Sahn, D J
1995-08-01
The aim of the present study was to investigate dynamic changes in the mitral regurgitant orifice using electromagnetic flow probes and flowmeters and the color Doppler flow convergence method. Methods for determining mitral regurgitant orifice areas have been described using flow convergence imaging with a hemispheric isovelocity surface assumption. However, the shape of flow convergence isovelocity surfaces depends on many factors that change during regurgitation. In seven sheep with surgically created mitral regurgitation, 18 hemodynamic states were studied. The aliasing distances of flow convergence were measured at 10 sequential points using two ranges of aliasing velocities (0.20 to 0.32 and 0.56 to 0.72 m/s), and instantaneous flow rates were calculated using the hemispheric assumption. Instantaneous regurgitant areas were determined from the regurgitant flow rates obtained from both electromagnetic flowmeters and flow convergence divided by the corresponding continuous wave velocities. The regurgitant orifice sizes obtained using the electromagnetic flow method usually increased to maximal size in early to midsystole and then decreased in late systole. Patterns of dynamic changes in orifice area obtained by flow convergence were not the same as those delineated by the electromagnetic flow method. Time-averaged regurgitant orifice areas obtained by flow convergence using lower aliasing velocities overestimated the areas obtained by the electromagnetic flow method ([mean +/- SD] 0.27 +/- 0.14 vs. 0.12 +/- 0.06 cm2, p < 0.001), whereas flow convergence, using higher aliasing velocities, estimated the reference areas more reliably (0.15 +/- 0.06 cm2). The electromagnetic flow method studies uniformly demonstrated dynamic change in mitral regurgitant orifice area and suggested limitations of the flow convergence method.
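The hemispheric assumption used above reduces to a one-line formula: for an aliasing radius r and aliasing velocity v_a, the instantaneous regurgitant flow rate is Q = 2πr²·v_a, and dividing by the continuous-wave jet velocity gives the effective orifice area. The numeric values below are illustrative, not measurements from the study.

```python
import math

# Hemispheric flow convergence (proximal isovelocity surface) sketch.
r, v_a, v_cw = 0.005, 0.56, 5.0        # 5 mm radius, 0.56 m/s alias, 5 m/s jet
q = 2.0 * math.pi * r ** 2 * v_a       # regurgitant flow rate, m^3/s
eoa_cm2 = (q / v_cw) * 1e4             # effective orifice area, cm^2
print(round(eoa_cm2, 3))
```

Because Q scales with r², a non-hemispheric isovelocity surface (as occurs at low aliasing velocities) directly biases the computed orifice area, which is the overestimation the study documents.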
Convergence of Transition Probability Matrix in CLV Markov Models
NASA Astrophysics Data System (ADS)
Permana, D.; Pasaribu, U. S.; Indratno, S. W.; Suprayogi, S.
2018-04-01
A transition probability matrix is an arrangement of the transition probabilities from one state to another in a Markov chain model (MCM). One interesting aspect of an MCM is its behavior far into the future. This behavior is derived from a property of the n-step transition probability matrix, namely the convergence of the n-step transition matrix as n tends to infinity. Mathematically, establishing the convergence of the transition probability matrix means finding the limit of the matrix raised to the power n as n tends to infinity. This convergence is of particular interest because it brings the matrix to its stationary form, which is useful for predicting the probabilities of transitions between states in the future. The method usually used to find this limit is the limiting-distribution process. In this paper, the convergence of the transition probability matrix is instead obtained using a simple concept of linear algebra: diagonalizing the matrix. This approach has a higher level of complexity, because the matrix must first be diagonalized, but it has the advantage of yielding a general form for the nth power of the transition probability matrix, which makes it possible to examine the transition matrix before it becomes stationary. Example cases are taken from a CLV model using an MCM, called the CLV-Markov model. The transition probability matrices of several such models are examined to find their convergence forms. The result is that the convergence of the transition probability matrix obtained through diagonalization agrees with the convergence obtained through the commonly used limiting-distribution method.
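The diagonalization route can be sketched with NumPy on a toy two-state chain (the matrix below is invented for illustration): writing P = V D V⁻¹ gives P^n = V Dⁿ V⁻¹, so eigenvalues strictly inside the unit circle vanish as n grows and only the eigenvalue 1 survives, leaving every row equal to the stationary distribution.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])             # toy transition probability matrix

eigvals, V = np.linalg.eig(P)          # P = V @ diag(eigvals) @ inv(V)
Dn = np.diag(eigvals ** 100)           # closed form for the 100-step powers
P100 = (V @ Dn @ np.linalg.inv(V)).real

print(np.round(P100, 3))               # each row -> stationary distribution
```

For this chain the eigenvalues are 1 and 0.7, so 0.7¹⁰⁰ is negligible and both rows of P¹⁰⁰ equal the stationary distribution (2/3, 1/3); the same closed form evaluated at a small n shows the pre-stationary behavior the paper highlights.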
Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc
2017-07-01
Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, whereby speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in postlingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were proposed to three groups: young adults with normal hearing, elderly adults with normal hearing, and postlingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge to each acoustic target. Results showed that cochlear-implanted participants have the ability to converge to an acoustic target, both intentionally and unintentionally, albeit to a lower degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation. Copyright © 2017 Elsevier Ltd. All rights reserved.
A snapshot attractor view of the advection of inertial particles in the presence of history force
NASA Astrophysics Data System (ADS)
Guseva, Ksenia; Daitche, Anton; Tél, Tamás
2017-06-01
We analyse the effect of the Basset history force on the sedimentation or rising of inertial particles in a two-dimensional convection flow. We find that the concept of snapshot attractors is useful to understand the extraordinary slow convergence due to long-term memory: an ensemble of particles converges exponentially fast towards a snapshot attractor, and this attractor undergoes a slow drift for long times. We demonstrate for the case of a periodic attractor that the drift of the snapshot attractor can be well characterized both in the space of the fluid and in the velocity space. For the case of quasiperiodic and chaotic dynamics we propose the use of the average settling velocity of the ensemble as a distinctive measure to characterize the snapshot attractor and the time scale separation corresponding to the convergence towards the snapshot attractor and its own slow dynamics.
Discrete conservation laws and the convergence of long time simulations of the mkdv equation
NASA Astrophysics Data System (ADS)
Gorria, C.; Alejo, M. A.; Vega, L.
2013-02-01
Pseudospectral collocation methods and finite difference methods have been used for approximating an important family of soliton-like solutions of the mKdV equation. These solutions present a structural instability that makes it difficult to approximate their evolution over long time intervals with sufficient accuracy. Standard numerical methods do not guarantee convergence to the proper solution of the initial value problem and often fail by approaching solutions associated with different initial conditions. In this context, numerical schemes that preserve the discrete invariants related to some conservation laws of this equation produce better results than methods that only aim for a high consistency order. Pseudospectral spatial discretization appears to be the most robust of the numerical methods, but finite difference schemes are useful for analyzing the role played by the conservation of the invariants in the convergence.
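Monitoring the discrete invariants mentioned above is itself a simple computation; the sketch below evaluates the discrete mass and L2 invariants for the standard focusing-mKdV soliton u(x,t) = √c · sech(√c (x − ct)) (an assumed closed form, not taken from the paper) and checks that both are unchanged as the soliton translates. A conservative scheme is one whose numerical solution keeps these same discrete quantities constant in time.

```python
import numpy as np

def soliton(x, t, c=1.0):
    """Traveling soliton of the focusing mKdV equation (assumed closed form)."""
    return np.sqrt(c) / np.cosh(np.sqrt(c) * (x - c * t))

x = np.linspace(-40.0, 40.0, 4001)
dx = x[1] - x[0]

def invariants(u):
    """Discrete mass and L2 invariants on the grid."""
    return np.sum(u) * dx, np.sum(u ** 2) * dx

m0, e0 = invariants(soliton(x, t=0.0))
m1, e1 = invariants(soliton(x, t=5.0))
print(abs(m1 - m0), abs(e1 - e0))      # both differences are at round-off level
```

In a long-time simulation one would log these two numbers at every step: drift in either signals that the scheme is sliding toward a solution of a different initial value problem, which is exactly the failure mode the abstract describes.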
NASA Astrophysics Data System (ADS)
Chang, En-Chih
2018-02-01
This paper presents a high-performance AC power source that applies robust stability control technology to precision material machining (PMM). The proposed technology combines the benefits of a finite-time convergent sliding function (FTCSF) and the firefly optimization algorithm (FOA). The FTCSF maintains the robustness of a conventional sliding mode while speeding up the convergence of the system state. Unfortunately, when a highly nonlinear load is applied, chatter occurs. The chatter produces high total harmonic distortion (THD) in the output voltage of the AC power source and can even degrade the stability of the PMM. The FOA is therefore used to remove the chatter, while the FTCSF still preserves a finite system-state convergence time. By combining the FTCSF with the FOA, the AC power source of the PMM can achieve good steady-state and transient performance. Experimental results are presented in support of the proposed technology.
Annealing Ant Colony Optimization with Mutation Operator for Solving TSP.
Mohsen, Abdulqader M
2016-01-01
Ant Colony Optimization (ACO) has been successfully applied to a wide range of combinatorial optimization problems such as the minimum spanning tree, the traveling salesman problem, and the quadratic assignment problem. Basic ACO has the drawbacks of becoming trapped in local minima and a low convergence rate. Simulated annealing (SA) and a mutation operator provide the ability to jump out of local minima and converge globally, and local search can speed up convergence. Therefore, this paper proposes a hybrid ACO algorithm integrating the advantages of ACO, SA, a mutation operator, and a local search procedure to solve the traveling salesman problem. The core of the algorithm is based on ACO. SA and the mutation operator were used to increase the population diversity of the ants from time to time, and the local search was used to exploit the current search area efficiently. Comparative experiments, using 24 TSP instances from TSPLIB, show that the proposed algorithm outperformed some well-known algorithms in the literature in terms of solution quality.
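A minimal sketch of such a hybrid, under design choices assumed here rather than taken from the paper (best-tour pheromone reinforcement, a 2-swap mutation with SA-style acceptance, and 2-opt local search):

```python
import math, random

def tour_len(tour, d):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, d):
    # local search: keep reversing segments while the tour improves
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b, c, e = tour[i], tour[i + 1], tour[j], tour[(j + 1) % n]
                if d[a][c] + d[b][e] < d[a][b] + d[c][e] - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

def hybrid_aco_tsp(d, n_ants=10, iters=40, alpha=1.0, beta=3.0,
                   rho=0.1, temp0=1.0, cooling=0.9, seed=0):
    rng, n = random.Random(seed), len(d)
    tau = [[1.0] * n for _ in range(n)]
    best, best_len, temp = None, float("inf"), temp0
    for _ in range(iters):
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:  # probabilistic construction from pheromone/heuristic
                i = tour[-1]
                ws = [(j, tau[i][j]**alpha * (1.0 / d[i][j])**beta) for j in unvisited]
                r, acc = rng.random() * sum(w for _, w in ws), 0.0
                for j, w in ws:
                    acc += w
                    if acc >= r:
                        tour.append(j); unvisited.discard(j); break
            # mutation: random 2-swap, kept under an SA-style acceptance rule
            cand = tour[:]
            a, b = rng.randrange(n), rng.randrange(n)
            cand[a], cand[b] = cand[b], cand[a]
            delta = tour_len(cand, d) - tour_len(tour, d)
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                tour = cand
            tour = two_opt(tour, d)
            length = tour_len(tour, d)
            if length < best_len:
                best, best_len = tour[:], length
        # evaporate, then reinforce the edges of the best tour so far
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for i in range(n):
            a, b = best[i], best[(i + 1) % n]
            tau[a][b] += 1.0 / best_len; tau[b][a] += 1.0 / best_len
        temp *= cooling
    return best, best_len

# demo: 8 cities on a circle; the optimal tour visits them in hull order
pts = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8)) for i in range(8)]
d = [[math.dist(p, q) for q in pts] for p in pts]
best_tour, best_len = hybrid_aco_tsp(d)
```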
Analysis and optimisation of the convergence behaviour of the single channel digital tanlock loop
NASA Astrophysics Data System (ADS)
Al-Kharji Al-Ali, Omar; Anani, Nader; Al-Araji, Saleh; Al-Qutayri, Mahmoud
2013-09-01
The mathematical analysis of the convergence behaviour of the first-order single channel digital tanlock loop (SC-DTL) is presented. This article also describes a novel technique that allows controlling the convergence speed of the loop, i.e. the time taken by the phase-error to reach its steady-state value, by using a specialised controller unit. The controller is used to adjust the convergence speed so as to selectively optimise a given performance parameter of the loop. For instance, the controller may be used to speed up the convergence in order to increase the lock range and improve the acquisition speed. However, since increasing the lock range can degrade the noise immunity of the system, in a noisy environment the controller can slow down the convergence speed until locking is achieved. Once the system is in lock, the convergence speed can be increased to improve the acquisition speed. The performance of the SC-DTL system was assessed against similar arctan-based loops and the results demonstrate the success of the controller in optimising the performance of the SC-DTL loop. The results of the system testing using MATLAB/Simulink simulation are presented. A prototype of the proposed system was implemented using a field programmable gate array module and the practical results are in good agreement with those obtained by simulation.
On adaptive learning rate that guarantees convergence in feedforward networks.
Behera, Laxmidhar; Kumar, Swagat; Patnaik, Awhan
2006-09-01
This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov functions for the training of feedforward neural networks. Such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, where the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima. This modification also helps improve the convergence speed in some cases. Conditions for achieving a global minimum for this kind of algorithm have been studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) are found to converge much faster than the other two algorithms in attaining the same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
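The LF I/II algorithms themselves are not reproduced here, but the flavor of a Lyapunov-motivated adaptive learning rate can be sketched with the classic normalized step for a single linear unit (an illustrative stand-in, not the paper's method):

```python
import numpy as np

def nlms_train(X, y, mu=0.5, epochs=20, eps=1e-8, seed=0):
    """Linear unit trained with a normalized, Lyapunov-motivated step: the
    per-sample rate mu / ||x||^2 guarantees the instantaneous error e shrinks
    by the factor (1 - mu) at each update for 0 < mu < 2, so the candidate
    Lyapunov function V = e^2 / 2 decreases along the trajectory."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            e = t - w @ x                       # instantaneous error
            w += mu * e / (x @ x + eps) * x     # adaptive learning rate mu/||x||^2
    return w
```

On noiseless linear data this recovers the generating weights; the fixed rate of plain gradient descent is replaced by a rate that adapts to each input's norm.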
Kirouac, Megan; Stein, Elizabeth R; Pearson, Matthew R; Witkiewitz, Katie
2017-11-01
Quality of life is an outcome often examined in treatment research contexts such as biomedical trials, but has been studied less often in alcohol use disorder (AUD) treatment. The importance of considering QoL in substance use treatment research has recently been voiced, and measures of QoL have been administered in large AUD treatment trials. Yet, the viability of popular QoL measures has never been evaluated in AUD treatment samples. Accordingly, the present manuscript describes a psychometric examination of and prospective changes in the World Health Organization Quality of Life measure (WHOQOL-BREF) in a large sample (N = 1383) of patients with AUD recruited for the COMBINE Study. Specifically, we examined the construct validity (via confirmatory factor analyses), measurement invariance across time, internal consistency reliability, convergent validity, and effect sizes of post-treatment changes in the WHOQOL-BREF. Confirmatory factor analyses of the WHOQOL-BREF provided acceptable fit to the current data and this model was invariant across time. Internal consistency reliability was excellent (α > .9) for the full WHOQOL-BREF for each timepoint; the WHOQOL-BREF had good convergent validity, and medium effect size improvements were found in the full COMBINE sample across time. These findings suggest that the WHOQOL-BREF is an appropriate measure to use in samples with AUD, that the WHOQOL-BREF scores may be examined over time (e.g., from pre- to post-treatment), and the WHOQOL-BREF may be used to assess improvements in quality of life in AUD research.
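Internal consistency figures like the α > .9 quoted above come from Cronbach's alpha, which is straightforward to compute; the data below are synthetic, for illustration only:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# synthetic scale: four items driven by one common factor plus small noise
rng = np.random.default_rng(0)
base = rng.normal(size=200)
items = np.column_stack([base + 0.3 * rng.normal(size=200) for _ in range(4)])
alpha = cronbach_alpha(items)
```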
Constructive Convergence: Imagery and Humanitarian Assistance
2012-02-01
Doug Hanchard, Center for Technology and National ...
Fragment: "... a dataset [in a] different projection, a transformation must be performed on the data that warps the original data into the new projection. Every time data ..."
Particle number dependence in the non-linear evolution of N-body self-gravitating systems
NASA Astrophysics Data System (ADS)
Benhaiem, D.; Joyce, M.; Sylos Labini, F.; Worrakitpoonpon, T.
2018-01-01
Simulations of purely self-gravitating N-body systems are often used in astrophysics and cosmology to study the collisionless limit of such systems. Their results for macroscopic quantities should then converge well for sufficiently large N. Using a study of the evolution from a simple space of spherical initial conditions - including a region characterized by the so-called 'radial orbit instability' - we illustrate that the values of N at which such convergence is obtained can vary enormously. In the family of initial conditions we study, good convergence can be obtained up to a few dynamical times with N ∼ 10³ - just large enough to suppress two-body relaxation - for certain initial conditions, while in other cases such convergence is not attained at this time even in our largest simulations with N ∼ 10⁵. The qualitative difference is due to the stability properties of fluctuations introduced by the N-body discretisation, whose initial amplitude depends on N. We discuss briefly why the crucial role which such fluctuations can potentially play in the evolution of the N-body system could, in particular, constitute a serious problem in cosmological simulations of dark matter.
Error field optimization in DIII-D using extremum seeking control
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; ...
2016-06-03
A closed-loop error field control algorithm is implemented in the Plasma Control System of the DIII-D tokamak and used to identify optimal control currents during a single plasma discharge. The algorithm, based on established extremum seeking control theory, exploits the link in tokamaks between maximizing the toroidal angular momentum and minimizing deleterious non-axisymmetric magnetic fields. Slowly rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well suited for multi-coil, multi-harmonic error field optimizations in disruption-sensitive devices, as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of convergence, and identify future algorithm upgrades that may allow more rapid convergence, projecting to convergence times in ITER on the order of tens of seconds.
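A minimal discrete-time sketch of the extremum seeking loop on a toy quadratic objective; the dither amplitude, gains, and filter below are illustrative choices, not DIII-D values:

```python
import math

def extremum_seek(f, u0, a=0.2, omega=2.0, k=0.5, wl=0.5, dt=0.01, steps=40000):
    """Extremum seeking: add a slow sinusoidal dither to the input, high-pass
    the measured objective, demodulate with the same sinusoid to estimate the
    local gradient, and integrate that estimate to climb the objective."""
    u, y_lp = u0, f(u0)
    for i in range(steps):
        t = i * dt
        y = f(u + a * math.sin(omega * t))           # perturbed measurement
        y_lp += wl * (y - y_lp) * dt                 # low-pass tracks the DC level
        grad_est = (y - y_lp) * math.sin(omega * t)  # demodulated gradient estimate
        u += k * grad_est * dt                       # integrator drives u uphill
    return u

# toy objective: maximum at u = 3, standing in for the rotation measurement
u_final = extremum_seek(lambda v: -(v - 3.0)**2, 0.0)
```

The averaged dynamics move u at a rate proportional to k·a·f'(u)/2, so the loop settles near the maximizer with a small residual ripple at the dither frequency.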
Li, Jianjun; Zhang, Rubo; Yang, Yu
2017-01-01
This paper studies a distributed task planning model for multiple autonomous underwater vehicles (MAUV). A scroll time domain quantum artificial bee colony (STDQABC) optimization algorithm is proposed to obtain the optimal multi-AUV task planning scheme. In an uncertain marine environment, the rolling time domain control technique is used to perform a numerical optimization over a narrowed time range. Rolling time domain control is one of the better task planning techniques, as it can greatly reduce the computational workload and realize a tradeoff among AUV dynamics, environment, and cost. Finally, a simulation experiment was performed to evaluate the distributed task planning performance of the STDQABC algorithm. The simulation results demonstrate that the STDQABC algorithm converges faster than the QABC and ABC algorithms in terms of both iterations and running time. The STDQABC algorithm can effectively improve MAUV distributed task planning performance, complete the task goal, and obtain an approximately optimal solution.
Anisotropic norm-oriented mesh adaptation for a Poisson problem
NASA Astrophysics Data System (ADS)
Brèthes, Gautier; Dervieux, Alain
2016-10-01
We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation, since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one, since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem as the optimization, over the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, for the minimum of a norm of the approximation error. The norm is prescribed by the user, and the method allows addressing the case of multi-objective adaptation, for example adapting the mesh for drag, lift, and moment in one shot in aerodynamics. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.
NASA Technical Reports Server (NTRS)
Yamamoto, K.; Brausch, J. F.; Balsa, T. F.; Janardan, B. A.; Knott, P. R.
1984-01-01
Seven single-stream model nozzles were tested in the Anechoic Free-Jet Acoustic Test Facility to evaluate the effectiveness of convergent-divergent (C-D) flowpaths in the reduction of shock-cell noise under both static and simulated flight conditions. The test nozzles included a baseline convergent circular nozzle, a C-D circular nozzle, a convergent annular plug nozzle, a C-D annular plug nozzle, a convergent multi-element suppressor plug nozzle, and a C-D multi-element suppressor plug nozzle. Diagnostic flow visualization with a shadowgraph and aerodynamic plume measurements with a laser velocimeter were performed with the test nozzles. A theory of shock-cell noise for annular plug nozzles with shock cells in the vicinity of the plug was developed. The benefit of these C-D nozzles was observed over a broad range of pressure ratios in the vicinity of their design conditions. At the C-D design condition, the C-D annular nozzle was found to be free of shock cells on the plug.
Bruno, Oscar P.; Turc, Catalin; Venakides, Stephanos
2016-01-01
This work, part I in a two-part series, presents: (i) a simple and highly efficient algorithm for evaluation of quasi-periodic Green functions, as well as (ii) an associated boundary-integral equation method for the numerical solution of problems of scattering of waves by doubly periodic arrays of scatterers in three-dimensional space. Except for certain ‘Wood frequencies’ at which the quasi-periodic Green function ceases to exist, the proposed approach, which is based on smooth windowing functions, gives rise to tapered lattice sums which converge superalgebraically fast to the Green function—that is, faster than any power of the number of terms used. This is in sharp contrast to the extremely slow convergence exhibited by the lattice sums in the absence of smooth windowing. (The Wood-frequency problem is treated in part II.) This paper establishes rigorously the superalgebraic convergence of the windowed lattice sums. A variety of numerical results demonstrate the practical efficiency of the proposed approach. PMID:27493573
Monte Carlo criticality source convergence in a loosely coupled fuel storage system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blomquist, R. N.; Gelbard, E. M.
2003-06-10
The fission source convergence of a very loosely coupled array of 36 fuel subassemblies with slightly non-symmetric reflection is studied. The fission source converges very slowly from a uniform guess to the fundamental mode, in which about 40% of the fissions occur in one corner subassembly. Eigenvalue and fission source estimates are analyzed using a set of statistical tests similar to those used in MCNP, including the ''drift-in-mean'' test and a new drift-in-mean test using a linear fit to the cumulative estimate drift, the Shapiro-Wilk test for normality, the relative error test, and the ''1/N'' test. The normality test does not detect a drifting eigenvalue or fission source. Applied to eigenvalue estimates, the other tests generally fail to detect an unconverged solution, but they are sometimes effective when evaluating fission source distributions. None of the tests provides a completely reliable indication of convergence, although they can detect nonconvergence.
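A rough sketch of a drift-in-mean test using a linear fit to the cumulative estimate, in the spirit described above; the window fraction and the 3-sigma-style threshold are assumptions, not the paper's exact criterion:

```python
import numpy as np

def drift_in_mean(estimates, frac=0.5):
    """Linear-fit drift test on the cumulative mean of per-batch estimates.
    The cumulative mean over the last `frac` of the run is fit with a line;
    the run is flagged as drifting when the fitted change across that window
    exceeds three standard errors of the windowed batch mean."""
    est = np.asarray(estimates, dtype=float)
    cum = np.cumsum(est) / np.arange(1, est.size + 1)
    start = int((1.0 - frac) * est.size)
    window = cum[start:]
    slope, _ = np.polyfit(np.arange(window.size), window, 1)
    sem = est[start:].std(ddof=1) / np.sqrt(est[start:].size)
    return abs(slope) * window.size > 3.0 * sem, slope
```

A stationary sequence of eigenvalue-like batch estimates should pass this test, while a slowly trending (unconverged) one should be flagged.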
de Vries, Liesbeth; van Hartingsveldt, Margo J; Cup, Edith H C; Nijhuis-van der Sanden, Maria W G; de Groot, Imelda J M
2015-06-01
When children are not ready to write, assessment of fine motor coordination may be indicated. The purpose of this study was to evaluate which fine motor test, the Nine-Hole Peg Test (9-HPT) or the newly developed Timed Test of In-Hand Manipulation (Timed-TIHM), correlates best with handwriting readiness as measured by the Writing Readiness Inventory Tool In Context-Task Performance (WRITIC-TP). Of the 119 participating children, 43 were poor performers. Convergent validity of the 9-HPT and Timed-TIHM with the WRITIC-TP was determined, and test-retest reliability of the Timed-TIHM was examined in 59 children. The results showed that correlations of the 9-HPT and Timed-TIHM with the WRITIC-TP were similar (rs = -0.40). The 9-HPT and the complex rotation subtask of the Timed-TIHM had a low correlation with the WRITIC-TP in poor performers (rs = -0.30 and -0.32, respectively). Test-retest reliability of the Timed-TIHM was significant (Intraclass Correlation Coefficient = 0.71). Neither of the two fine motor tests appeared superior; they relate to different aspects of fine motor performance. One limitation of the methodology was the unequal numbers of children in the subgroups. Further research is indicated to evaluate the relation between the development of fine motor coordination and handwriting proficiency, using the Timed-TIHM in different age groups. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Sitges, Marta; Jones, Michael; Shiota, Takahiro; Qin, Jian Xin; Tsujino, Hiroyuki; Bauer, Fabrice; Kim, Yong Jin; Agler, Deborah A.; Cardon, Lisa A.; Zetts, Arthur D.;
2003-01-01
BACKGROUND: Pitfalls of the flow convergence (FC) method, including 2-dimensional imaging of the 3-dimensional (3D) geometry of the FC surface, can lead to erroneous quantification of mitral regurgitation (MR). This limitation may be mitigated by the use of real-time 3D color Doppler echocardiography (CE). Our objective was to validate a real-time 3D navigation method for MR quantification. METHODS: In 12 sheep with surgically induced chronic MR, 37 different hemodynamic conditions were studied with real-time 3DCE. Using real-time 3D navigation, the radius of the largest hemispherical FC zone was located and measured. MR volume was quantified according to the FC method after observing the shape of the FC in 3D space. Aortic and mitral electromagnetic flow probes and meters were balanced against each other to determine the reference MR volume. As an initial clinical application study, 22 patients with chronic MR were also studied with this real-time 3DCE-FC method. Left ventricular (LV) outflow tract automated cardiac flow measurement (Toshiba Corp, Tokyo, Japan) and real-time 3D LV stroke volume were used to quantify the reference MR volume (MR volume = 3D LV stroke volume - automated cardiac flow measurement). RESULTS: In the sheep model, good correlation and agreement were seen between MR volume by real-time 3DCE and the electromagnetic reference (y = 0.77x + 1.48, r = 0.87, P <.001, delta = -0.91 +/- 2.65 mL). In patients, real-time 3DCE-derived MR volume also showed good correlation and agreement with the reference method (y = 0.89x - 0.38, r = 0.93, P <.001, delta = -4.8 +/- 7.6 mL). CONCLUSIONS: Real-time 3DCE can capture the entire FC image, permitting geometric recognition of the FC zone and reliable MR quantification.
Nelis, Sabine; Holmes, Emily A.; Griffith, James W.; Raes, Filip
2015-01-01
The Spontaneous Use of Imagery Scale (SUIS) is used to measure the tendency to use visual mental imagery in daily life. Its psychometric properties were evaluated in three independent samples (total N = 1297). We evaluated the internal consistency and test-retest reliability of the questionnaire. We also examined the structure of the items using exploratory and confirmatory factor analysis. Moreover, correlations with other imagery questionnaires provided evidence about convergent validity. The SUIS had acceptable reliability and convergent validity. Exploratory and confirmatory factor analysis revealed that a unidimensional structure fit the data, suggesting that the SUIS indeed measures a general use of mental imagery in daily life. Future research can further investigate and improve the psychometric properties of the SUIS. Moreover, the SUIS could be useful to determine how imagery relates to e.g. psychopathology. PMID:26290615
Combined GPS/GLONASS Precise Point Positioning with Fixed GPS Ambiguities
Pan, Lin; Cai, Changsheng; Santerre, Rock; Zhu, Jianjun
2014-01-01
Precise point positioning (PPP) technology is mostly implemented with an ambiguity-float solution. Its performance may be further improved by performing ambiguity-fixed resolution. Currently, the PPP integer ambiguity resolutions (IARs) are mainly based on GPS-only measurements. The integration of GPS and GLONASS can speed up the convergence and increase the accuracy of float ambiguity estimates, which contributes to enhancing the success rate and reliability of fixing ambiguities. This paper presents an approach of combined GPS/GLONASS PPP with fixed GPS ambiguities (GGPPP-FGA) in which GPS ambiguities are fixed into integers, while all GLONASS ambiguities are kept as float values. An improved minimum constellation method (MCM) is proposed to enhance the efficiency of GPS ambiguity fixing. Datasets from 20 globally distributed stations on two consecutive days are employed to investigate the performance of the GGPPP-FGA, including the positioning accuracy, convergence time and the time to first fix (TTFF). All datasets are processed for a time span of three hours in three scenarios, i.e., the GPS ambiguity-float solution, the GPS ambiguity-fixed resolution and the GGPPP-FGA resolution. The results indicate that the performance of the GPS ambiguity-fixed resolutions is significantly better than that of the GPS ambiguity-float solutions. In addition, the GGPPP-FGA improves the positioning accuracy by 38%, 25% and 44% and reduces the convergence time by 36%, 36% and 29% in the east, north and up coordinate components over the GPS-only ambiguity-fixed resolutions, respectively. Moreover, the TTFF is reduced by 27% after adding GLONASS observations. Wilcoxon rank sum tests and chi-square two-sample tests are made to examine the significance of the improvement on the positioning accuracy, convergence time and TTFF. PMID:25237901
Reliability enhancement of Navier-Stokes codes through convergence enhancement
NASA Technical Reports Server (NTRS)
Choi, K.-Y.; Dulikravich, G. S.
1993-01-01
Reduction of the total computing time required by an iterative algorithm for solving the Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed the Sensitivity-Based DMR (SBMR) method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
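The single-weight minimal-residual idea (one optimal weight per step, before the per-component DMR generalization) can be sketched for a linear fixed-point iteration x ← Ax + b; the small symmetric contraction below is a toy stand-in for a Navier-Stokes scheme:

```python
import numpy as np

def plain_iteration(A, b, x0, tol=1e-10, maxit=2000):
    """Unaccelerated fixed-point iteration x <- A x + b."""
    x, n = x0.copy(), 0
    while np.linalg.norm(A @ x + b - x) > tol and n < maxit:
        x = A @ x + b
        n += 1
    return x, n

def minimal_residual_iteration(A, b, x0, tol=1e-10, maxit=2000):
    """Fixed-point iteration accelerated by a minimal-residual weight: each
    step moves along the residual r = (A x + b) - x by the scalar omega that
    minimizes the norm of the next residual r + omega * (A - I) r."""
    x, n = x0.copy(), 0
    r = A @ x + b - x
    while np.linalg.norm(r) > tol and n < maxit:
        v = A @ r - r                   # change of the residual along r
        omega = -(r @ v) / (v @ v)      # 1-D least-squares optimal weight
        x = x + omega * r
        r = A @ x + b - x
        n += 1
    return x, n
```

On a symmetric contraction the weighted iteration needs markedly fewer steps than the plain one; the DMR method described above goes further by giving each solution component its own weight.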
Chiang, Tzu-An; Che, Z. H.; Cui, Zhihua
2014-01-01
This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V(Max) method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulative cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did.
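The inertia-weight and constriction-factor velocity updates compared in the study can be sketched side by side; the coefficients are standard textbook values (Clerc's χ = 0.7298 with c1 = c2 = 2.05), assumed rather than taken from the paper:

```python
import random

def pso(obj, dim, method="inertia", n=20, iters=200, seed=0):
    """Minimal PSO sketch with the two classic velocity rules: an inertia
    weight w = 0.7298 with c1 = c2 = 1.4962, or Clerc's constriction factor
    chi = 0.7298 applied to the whole velocity update with c1 = c2 = 2.05."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [obj(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                if method == "inertia":
                    vel[i][d] = (0.7298 * vel[i][d]
                                 + 1.4962 * r1 * (pbest[i][d] - pos[i][d])
                                 + 1.4962 * r2 * (gbest[d] - pos[i][d]))
                else:  # constriction factor
                    vel[i][d] = 0.7298 * (vel[i][d]
                                          + 2.05 * r1 * (pbest[i][d] - pos[i][d])
                                          + 2.05 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = obj(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

With these coefficients the two rules are algebraically close, which is why comparisons such as the one above focus on convergence speed and final objective value rather than structure.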
Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback
NASA Astrophysics Data System (ADS)
Bruni, Renato; Celani, Fabio
2016-10-01
The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law which has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This is a particularly difficult optimization problem for several reasons: 1) the convergence time cannot be expressed in analytical form as a function of the parameters and initial conditions; 2) the design parameters may range over very wide intervals; 3) the convergence time also depends on the initial conditions of the spacecraft, which are not known in advance. To overcome these difficulties, we present a solution approach based on derivative-free optimization. These algorithms do not need an analytical expression of the objective function: they only need to evaluate it at a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize the convergence time under the worst initial conditions. Results are very promising.
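A minimal derivative-free optimizer of the kind described, sketched as a compass (pattern) search; the quadratic below is a toy surrogate for the black-box convergence-time objective, not the spacecraft model:

```python
def compass_search(f, x0, step=1.0, min_step=1e-6, max_evals=10000):
    """Derivative-free compass (pattern) search: probe +/- step along each
    coordinate, move to the first improving point, and halve the step when no
    probe improves. Only function values are needed, matching the setting
    where the objective cannot be written analytically."""
    x, fx, evals = list(x0), f(x0), 1
    while step > min_step and evals < max_evals:
        improved = False
        for d in range(len(x)):
            for s in (+step, -step):
                y = x[:]
                y[d] += s
                fy = f(y)
                evals += 1
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # refine the pattern around the current best point
    return x, fx

# toy surrogate objective with known minimizer (2, -1) and minimum value 3
x_opt, f_opt = compass_search(lambda p: (p[0] - 2.0)**2 + (p[1] + 1.0)**2 + 3.0,
                              [0.0, 0.0])
```

A min-max (robust) variant, as in the abstract, would wrap the inner evaluation in a maximization over a set of sampled initial conditions before minimizing over the design parameters.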
Mills, Sarah D; Kwakkenbos, Linda; Carrier, Marie-Eve; Gholizadeh, Shadi; Fox, Rina S; Jewett, Lisa R; Gottesman, Karen; Roesch, Scott C; Thombs, Brett D; Malcarne, Vanessa L
2018-01-17
Systemic sclerosis (SSc) is an autoimmune disease that can cause disfiguring changes in appearance. This study examined the structural validity, internal consistency reliability, convergent validity, and measurement equivalence of the Social Appearance Anxiety Scale (SAAS) across SSc disease subtypes. Patients enrolled in the Scleroderma Patient-centered Intervention Network Cohort completed the SAAS and measures of appearance-related concerns and psychological distress. Confirmatory factor analysis (CFA) was used to examine the structural validity of the SAAS. Multiple-group CFA was used to determine if SAAS scores can be compared across patients with limited and diffuse disease subtypes. Cronbach's alpha was used to examine internal consistency reliability. Correlations of SAAS scores with measures of body image dissatisfaction, fear of negative evaluation, social anxiety, and depression were used to examine convergent validity. SAAS scores were hypothesized to be positively associated with all convergent validity measures, with correlations significant and moderate to large in size. A total of 938 patients with SSc were included. CFA supported a one-factor structure (CFI: .92; SRMR: .04; RMSEA: .08), and multiple-group CFA indicated that the scalar invariance model best fit the data. Internal consistency reliability was good in the total sample (α = .96) and in disease subgroups. Overall, evidence of convergent validity was found with measures of body image dissatisfaction, fear of negative evaluation, social anxiety, and depression. The SAAS can be reliably and validly used to assess fear of appearance evaluation in patients with SSc, and SAAS scores can be meaningfully compared across disease subtypes. This article is protected by copyright. All rights reserved.
Convergence Time towards Periodic Orbits in Discrete Dynamical Systems
San Martín, Jesús; Porter, Mason A.
2014-01-01
We investigate the convergence towards periodic orbits in discrete dynamical systems. We examine the probability that a randomly chosen point converges to a particular neighborhood of a periodic orbit in a fixed number of iterations, and we use linearized equations to examine the evolution near that neighborhood. The underlying idea is that points of a stable periodic orbit are associated with intervals. We state and prove a theorem that details what regions of phase space are mapped into these intervals (once they are known) and how many iterations are required to get there. We also construct algorithms that allow our theoretical results to be implemented successfully in practice. PMID:24736594
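The probability-of-convergence question lends itself to direct numerical experiment. As a minimal sketch (the map, tolerance, and parameter below are illustrative choices, not the systems analyzed in the paper), one can count the iterations a random initial point needs to enter an eps-neighborhood of a stable fixed point, i.e. a period-1 orbit, of the logistic map:

```python
import random

def logistic(x, r=2.8):
    return r * x * (1.0 - x)

def iterations_to_neighborhood(x0, target, eps=1e-6, max_iter=10_000):
    """Count iterations until the orbit of x0 enters (target - eps, target + eps)."""
    x = x0
    for n in range(max_iter):
        if abs(x - target) < eps:
            return n
        x = logistic(x)
    return None  # did not converge within max_iter

# For r = 2.8 the logistic map has a stable fixed point at x* = 1 - 1/r,
# which attracts every initial point in (0, 1).
fixed_point = 1.0 - 1.0 / 2.8
random.seed(0)
counts = [iterations_to_neighborhood(random.random(), fixed_point)
          for _ in range(1000)]
print(min(counts), max(counts))  # every sampled point converges
```

Tabulating `counts` against the iteration number gives exactly the empirical convergence-time distribution the abstract discusses.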
Convergence of the strong-potential-Born approximation in Z_</Z_>
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuire, J.H.; Sil, N.C.
1986-01-01
Convergence of the strong-potential Born (SPB) approximation as a function of the charges of the projectile and target is studied numerically. Time-reversal invariance (or detailed balance) is satisfied at sufficiently high velocities even when the charges are asymmetric. This demonstrates that the SPB approximation converges to the correct result even when the charge of the "weak" potential, which is kept to first order, is larger than the charge of the "strong" potential, which is retained to all orders. Consequently, the SPB approximation is valid for systems of arbitrary charge symmetry (including symmetric systems) at sufficiently high velocities.
An investigation of the convergence to the stationary state in the Hassell mapping
NASA Astrophysics Data System (ADS)
de Mendonça, Hans M. J.; Leonel, Edson D.; de Oliveira, Juliano A.
2017-01-01
We investigate the convergence to the fixed point, and near it, for a transcritical bifurcation observed in a Hassell mapping. We considered a phenomenological description which was reinforced by a theoretical description. At the bifurcation, we confirm that the convergence to the fixed point is characterized by a homogeneous function with three exponents. Near the bifurcation, the decay to the fixed point is exponential, with a relaxation time given by a power law. Although the expression of the mapping is different from that of the traditional logistic mapping, at the bifurcation and near it the local dynamics is essentially the same for both mappings.
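The two regimes described here, algebraic decay exactly at the bifurcation versus exponential decay with a power-law relaxation time near it, are easy to observe numerically. The sketch below uses the logistic map, which (as the abstract notes) shares the same local dynamics at its transcritical bifurcation of x* = 0 at r = 1; the parameters are illustrative:

```python
import math

def orbit_end(r, x0=0.5, n=1000):
    """Iterate x -> r x (1 - x) n times and return the final value."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

# At the bifurcation (r = 1) the decay to x* = 0 is algebraic, x_n ~ 1/n,
# so n * x_n tends to a constant:
n = 20000
algebraic = n * orbit_end(1.0, n=n)
print(algebraic)   # ≈ 1

# Just below it (r = 0.9) the decay is exponential, x_n ~ exp(-n/tau),
# with relaxation time tau = -1/ln(r); estimate tau from successive iterates:
x100, x101 = orbit_end(0.9, n=100), orbit_end(0.9, n=101)
tau_est = -1.0 / math.log(x101 / x100)
print(tau_est)     # ≈ -1/ln(0.9) ≈ 9.49
```

As r approaches 1 from below, tau = -1/ln(r) diverges like 1/(1 - r), which is the power-law relaxation time the abstract refers to.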
Zhang, Yao; Tang, Shengjing; Guo, Jie
2017-11-01
In this paper, a novel adaptive-gain fast super-twisting (AGFST) sliding mode attitude control synthesis is carried out for a reusable launch vehicle (RLV) subject to actuator faults and unknown disturbances. Based on a fast nonsingular terminal sliding mode surface (FNTSMS) and an adaptive-gain fast super-twisting algorithm, an adaptive fault-tolerant control law for attitude stabilization is derived to protect against actuator faults and unknown uncertainties. Firstly, a second-order nonlinear control-oriented model for the RLV is established by the feedback linearization method. On this basis, a fast nonsingular terminal sliding mode (FNTSM) manifold is designed, which provides fast finite-time global convergence and avoids the singularity problem as well as the chattering phenomenon. Building on the merits of the standard super-twisting (ST) algorithm and a fast reaching law with adaptation, a novel AGFST algorithm is proposed for the finite-time fault-tolerant attitude control problem of the RLV without any knowledge of the bounds of uncertainties and actuator faults. Important features of the AGFST algorithm include non-overestimation of the control gains and a faster convergence speed than the standard ST algorithm. A formal proof of the finite-time stability of the closed-loop system is derived using the Lyapunov function technique. An estimation of the convergence time and an accurate expression of the convergence region are also provided. Finally, simulations are presented to illustrate the effectiveness and superiority of the proposed control scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
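The standard super-twisting (ST) algorithm that the AGFST law builds on can be sketched in a few lines. Here a scalar sliding variable with dynamics s' = u + d(t) is driven to zero despite an unknown bounded disturbance; the gains, disturbance, and integration step are illustrative assumptions, not the paper's values:

```python
import math

k1, k2 = 2.0, 1.5          # ST gains (k2 must exceed the Lipschitz bound of d)
dt, steps = 1e-3, 10_000   # Euler integration over 10 s
s, w = 1.0, 0.0            # sliding variable and integral term

for i in range(steps):
    t = i * dt
    d = 0.5 * math.sin(t)                    # unknown bounded disturbance
    u = -k1 * math.sqrt(abs(s)) * math.copysign(1.0, s) + w
    s += dt * (u + d)                        # s' = u + d
    w += dt * (-k2 * math.copysign(1.0, s))  # w' = -k2 sign(s)

print(abs(s))  # near zero despite d(t); w has learned to cancel d
```

The fixed gains k1, k2 here must dominate the (assumed known) disturbance bound; the paper's adaptive-gain variant removes exactly that requirement and speeds up the reaching phase.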
Laws, Holly; Sayer, Aline G.; Pietromonaco, Paula R.; Powers, Sally I.
2015-01-01
Objective: Drawing on theories of bidirectional influence between relationship partners (Butler, 2011; Diamond & Aspinwall, 2003), the authors applied dyadic analytic methods to test convergence in cortisol patterns over time in newlywed couples. Methods: Previous studies of bidirectional influence in couples' cortisol levels (Liu, Rovine, Klein, & Almeida, 2013; Papp, Pendry, Simon, & Adam, 2013; Saxbe & Repetti, 2010) found significant covariation in couples' daily cortisol levels over several days, but no studies have tested whether cortisol response similarity increases over time using a longitudinal design. In the present study, 183 opposite-sex couples (366 participants) engaged in a conflict discussion in a laboratory visit about 6 months after their marriage, and again about 2 years into the marriage. At each visit, spouses provided saliva samples that indexed cortisol levels before, during, and after the discussion. This multi-measure procedure enabled modeling of spouses' cortisol trajectories around the conflict discussion. Results: Findings showed significant convergence in couples' cortisol trajectories across the early years of marriage; couples showed significantly greater similarity in cortisol trajectories around the conflict discussion as their relationship matured. Cohabitation length predicted stronger convergence in cortisol slopes prior to the conflict discussion. Couples' relationship dissatisfaction was associated with a greater degree of convergence in spouses' acute cortisol levels during the conflict discussion. Conclusions: Findings suggest that spouses increasingly shape each other's cortisol responses as their relationship matures. Findings also indicated that increased similarity in acute cortisol levels during conflict may be associated with poorer relationship functioning. PMID:26010721
Franz, S; Schuld, C; Wilder-Smith, E P; Heutehaus, L; Lang, S; Gantz, S; Schuh-Hofer, S; Treede, R-D; Bryce, T N; Wang, H; Weidner, N
2017-11-01
Neuropathic pain (NeuP) is a frequent sequel of spinal cord injury (SCI). The SCI Pain Instrument (SCIPI) was developed as an SCI-specific NeuP screening tool. A preliminary validation reported encouraging results requiring further evaluation in terms of psychometric properties. The painDETECT questionnaire (PDQ), a commonly applied NeuP assessment tool, was primarily validated in German, but was not specifically developed for SCI and has not yet been validated according to current diagnostic guidelines. We aimed to provide convergent construct validity and to identify the optimal item combination for the SCIPI; the PDQ was re-evaluated according to current guidelines with respect to SCI-related NeuP. This was a prospective monocentric study. Subjects received a neurological examination according to the International Standards for Neurological Classification of SCI. After linguistic validation of the SCIPI, the IASP grading system served as the reference to diagnose NeuP, accompanied by the PDQ after its re-evaluation as a binary classifier. Statistics were evaluated through ROC analysis, with the area under the ROC curve (AUROC) as the optimality criterion. The SCIPI was refined by systematic item permutation. Eighty-eight individuals were assessed with the German SCIPI. Of 127 possible combinations, a 4-item SCIPI (cut-off score = 1.5; sensitivity = 0.864; specificity = 0.839) was identified as most reasonable. The SCIPI showed a strong correlation (r_sp = 0.76) with the PDQ. ROC analysis of SCIPI/PDQ (AUROC = 0.877) revealed results comparable to SCIPI/IASP (AUROC = 0.916). ROC analysis of PDQ/IASP delivered a score threshold of 10.5 (sensitivity = 0.727; specificity = 0.903). The SCIPI is a valid, easy-to-apply NeuP screening tool in SCI. The PDQ is recommended as a complementary NeuP assessment tool in SCI, e.g. to monitor pain severity and/or its time-dependent course. In SCI-related pain, both the SCIPI and painDETECT show strong convergent construct validity versus the current IASP grading system.
SCIPI is now optimized from a 7-item to an easy-to-apply 4-item screening tool in German and English. We provided evidence that the scope for PainDETECT can be expanded to individuals with SCI. © 2017 European Pain Federation - EFIC®.
McComb, Sara; Kennedy, Deanna; Perryman, Rebecca; Warner, Norman; Letsky, Michael
2010-04-01
Our objective is to capture temporal patterns in mental model convergence processes and differences in these patterns between distributed teams using an electronic collaboration space and face-to-face teams with no interface. Distributed teams, as sociotechnical systems, collaborate via technology to work on their task. The way in which they process information to inform their mental models may be examined via team communication and may unfold differently than it does in face-to-face teams. We conducted our analysis on 32 three-member teams working on a planning task. Half of the teams worked as distributed teams in an electronic collaboration space, and the other half worked face-to-face without an interface. Using event history analysis, we found temporal interdependencies among the initial convergence points of the multiple mental models we examined. Furthermore, the timing of mental model convergence and the onset of task work discussions were related to team performance. Differences existed in the temporal patterns of convergence and task work discussions across conditions. Distributed teams interacting via an electronic interface and face-to-face teams with no interface converged on multiple mental models, but their communication patterns differed. In particular, distributed teams with an electronic interface required less overall communication, converged on all mental models later in their life cycles, and exhibited more linear cognitive processes than did face-to-face teams interacting verbally. Managers need unique strategies for facilitating communication and mental model convergence depending on teams' degrees of collocation and access to an interface, which in turn will enhance team performance.
Welfare Impact of Virtual Trading on Wholesale Electricity Markets
NASA Astrophysics Data System (ADS)
Giraldo, Juan S.
Virtual bidding has become a standard feature of multi-settlement wholesale electricity markets in the United States. Virtual bids are financial instruments that allow market participants to take financial positions in the Day-Ahead (DA) market that are automatically reversed/closed in the Real-Time (RT) market. Most U.S. wholesale electricity markets only have two types of virtual bids: a decrement bid (DEC), which is virtual load, and an increment offer (INC), which is virtual generation. In theory, financial participants create benefits by seeking out profitable bidding opportunities through arbitrage or speculation. Benefits have been argued to take the form of increased competition, price convergence, increased market liquidity, and a more efficient dispatch of generation resources. Studies have found that price convergence between the DA and RT markets improved following the introduction of virtual bidding into wholesale electricity markets. The improvement in price convergence was taken as evidence that market efficiency had increased and many of the theoretical benefits realized. Persistent price differences between the DA and RT markets have led to calls to further expand virtual bidding as a means to address remaining market inefficiencies. However, the argument that price convergence is beneficial is extrapolated from the study of commodity and financial markets and the role of futures for increasing market efficiency in that context. This viewpoint largely ignores details that differentiate wholesale electricity markets from other commodity markets. This dissertation advances the understanding of virtual bidding by evaluating the impact of virtual bidding based on the standard definition of economic efficiency which is social welfare. In addition, an examination of the impacts of another type of virtual bid, up-to-congestion (UTC) transactions is presented. 
This virtual product has significantly increased virtual bidding activity in the PJM Interconnection market since it became available to financial traders in September 2010. Stylized models are used to determine the optimal bidding strategy for the different virtual bids under different scenarios. The welfare analysis shows that the main impact of virtual bidding is surplus reallocation and that the impact on market efficiency is small by comparison. The market structure is such that surplus transfers from consumers to producers are the more likely outcome. The results also show that outcomes with greater price convergence as a result of virtual bidding activity were not necessarily more efficient, nor did they always correct surplus distribution distortions that result from bias in the DA expectation of RT load. Compared to INCs and DECs, the UTC analysis showed that UTCs do not have the same self-corrective incentives towards price convergence and are less likely to lead to nodal price convergence or to correct surplus distribution distortions caused by uncertainty and bias in the DA expectation of RT load. Additionally, the analysis showed that UTCs allow financial traders to engage in low-risk, high-volume trading strategies that, while profitable, may have little to no impact on price convergence or market efficiency.
WRF simulation of a severe hailstorm over Baramati: a study into the space-time evolution
NASA Astrophysics Data System (ADS)
Murthy, B. S.; Latha, R.; Madhuparna, H.
2018-04-01
The space-time evolution of a severe hailstorm that occurred over western India, as revealed by WRF-ARW simulations, is presented. We simulated a specific event centered over Baramati (18.15°N, 74.58°E, 537 m AMSL) on March 9, 2014. A physical mechanism, proposed as a conceptual model, signifies the role of multiple convective cells organizing through outflows into a cold-frontal-type flow which, in the presence of a low over the northern Arabian Sea, propagates from NW to SE, triggering deep convection and precipitation. A `U'-shaped cold pool encircled by a converging boundary forms to the north of Baramati due to precipitation behind the moisture convergence line, with strong updrafts (~15 m s⁻¹) leading to convective clouds extending up to 8 km in a narrow region about 30 km wide. The outflows from the convective clouds merge with the opposing southerly or southwesterly winds from the Arabian Sea and southerly or southeasterly winds from the Bay of Bengal, resulting in moisture convergence (maximum 80 × 10⁻³ g kg⁻¹ s⁻¹). The vertical profile of the area-averaged moisture convergence over the cold pool shows strong convergence above 850 hPa and divergence near the surface, indicating elevated convection. Radar reflectivity (50-60 dBZ) and a maximum of the vertical component of vorticity (~0.01-0.14 s⁻¹) are observed along the convergence zone. Stratiform clouds ahead of the squall line, together with flow parallel to the squall line at 850 hPa and nearly perpendicular flow at higher levels, as evidenced by relatively low and widespread reflectivity, suggest that the organizational mode of the squall line may be categorized as `Mixed Mode': the northern part can be a parallel stratiform while the southern part resembles a leading stratiform. Simulated rainfall (grid scale 27 km) leads the observed rainfall by 1 h, while its magnitude is about twice the observed rainfall (grid scale 100 km) derived from Kalpana-1.
Thus, this study indicates that under synoptically favorable conditions, WRF-ARW can simulate thunderstorm evolution reasonably well, although there is some space-time error, which might explain the lower CAPE (observed by upper-air sounding) on the simulation day.
Method for thermal and structural evaluation of shallow intense-beam deposition in matter
NASA Astrophysics Data System (ADS)
Pilan Zanoni, André
2018-05-01
The projected range of high-intensity proton and heavy-ion beams at energies below a few tens of MeV/A in matter can be as short as a few micrometers. For the evaluation of temperature and stresses from a shallow beam energy deposition in matter, conventional numerical 3D models require minuscule element sizes for an acceptable element aspect ratio as well as extremely short time steps for numerical convergence. In order to simulate energy deposition using a manageable number of elements, this article presents a method using layered elements. This method is applied to beam stoppers and to accidental intense-beam impact onto UHV sector valves. In those cases the thermal results from the new method are congruent with those from conventional solid-element and adiabatic models.
NASA Technical Reports Server (NTRS)
Benton, E. R.
1986-01-01
A spherical harmonic representation of the geomagnetic field and its secular variation for epoch 1980, designated GSFC(9/84), is derived and evaluated. At three epochs (1977.5, 1980.0, 1982.5) this model incorporates conservation of magnetic flux through five selected patches of area on the core/mantle boundary bounded by the zero contours of the vertical magnetic field. These fifteen nonlinear constraints are included, like data, in an iterative least-squares parameter estimation procedure that starts with the recently derived unconstrained field model GSFC(12/83). Convergence is approached within three iterations. The constrained model is evaluated by comparing its predictive capability outside the time span of its data, in terms of residuals at magnetic observatories, with that of the unconstrained model.
Design of infrasound-detection system via adaptive LMSTDE algorithm
NASA Technical Reports Server (NTRS)
Khalaf, C. S.; Stoughton, J. W.
1984-01-01
A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasound. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. An LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
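The adaptive LMSTDE idea can be sketched compactly: an LMS-adapted FIR filter learns to map one sensor's output onto the other's, and after convergence the index of the dominant filter weight estimates the inter-sensor delay. The signal model, filter length, and step size below are illustrative, not the thesis's design values:

```python
import random

random.seed(1)
N, true_delay, taps, mu = 5000, 7, 16, 0.01

x = [random.gauss(0.0, 1.0) for _ in range(N)]          # sensor 1
y = [x[n - true_delay] if n >= true_delay else 0.0      # sensor 2: delayed copy
     for n in range(N)]

w = [0.0] * taps
for n in range(taps, N):
    frame = x[n - taps + 1 : n + 1][::-1]     # most recent sample first
    est = sum(wi * xi for wi, xi in zip(w, frame))
    err = y[n] - est
    for k in range(taps):                     # LMS weight update
        w[k] += mu * err * frame[k]

delay_hat = max(range(taps), key=lambda k: abs(w[k]))
print(delay_hat)  # → 7
```

With two sensors and a known spacing, this delay estimate converts directly into a bearing; the thesis's lead-lag model refines the same principle to resolve sub-sample delays.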
Average-atom treatment of relaxation time in x-ray Thomson scattering from warm dense matter.
Johnson, W R; Nilsen, J
2016-03-01
The influence of finite relaxation times on Thomson scattering from warm dense plasmas is examined within the framework of the average-atom approximation. Presently most calculations use the collision-free Lindhard dielectric function to evaluate the free-electron contribution to the Thomson cross section. In this work, we use the Mermin dielectric function, which includes relaxation time explicitly. The relaxation time is evaluated by treating the average atom as an impurity in a uniform electron gas and depends critically on the transport cross section. The calculated relaxation rates agree well with values inferred from the Ziman formula for the static conductivity and also with rates inferred from a fit to the frequency-dependent conductivity. Transport cross sections determined by the phase-shift analysis in the average-atom potential are compared with those evaluated in the commonly used Born approximation. The Born approximation converges to the exact cross sections at high energies; however, differences that occur at low energies lead to corresponding differences in relaxation rates. The relative importance of including relaxation time when modeling x-ray Thomson scattering spectra is examined by comparing calculations of the free-electron dynamic structure function for Thomson scattering using Lindhard and Mermin dielectric functions. Applications are given to warm dense Be plasmas, with temperatures ranging from 2 to 32 eV and densities ranging from 2 to 64 g/cc.
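For reference, the Mermin dielectric function used here is the particle-number-conserving extension of the Lindhard function ε_L, obtained by evaluating ε_L at the complex frequency ω + i/τ. A standard form (quoted from the general literature, not from this abstract) is:

```latex
\epsilon_M(q,\omega) \;=\; 1 \;+\;
\frac{\left(1 + \dfrac{i}{\omega\tau}\right)\,
      \bigl[\epsilon_L(q,\,\omega + i/\tau) - 1\bigr]}
     {1 + \dfrac{i}{\omega\tau}\,
      \dfrac{\epsilon_L(q,\,\omega + i/\tau) - 1}
            {\epsilon_L(q,\,0) - 1}}
```

In the collision-free limit τ → ∞ this reduces to the Lindhard dielectric function, recovering the conventional treatment that the abstract contrasts against.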
NASA Astrophysics Data System (ADS)
Qu, Xiaolei; Azuma, Takashi; Lin, Hongxiang; Takeuchi, Hideki; Itani, Kazunori; Tamano, Satoshi; Takagi, Shu; Sakuma, Ichiro
2017-03-01
Sarcopenia is the degenerative loss of skeletal muscle ability associated with aging. One cause is an increasing adipose fraction within muscle, which can be estimated from the speed of sound (SOS), since the SOSs of muscle and adipose tissue differ by about 7%. For SOS imaging, the conventional bent-ray method iteratively finds ray paths and corrects SOS along them by travel-time. However, the iteration converges poorly for soft tissue with bone inside, because of the large speed variation. In this study, the bent-ray method is modified to produce SOS images of limb muscle with bone inside. The modified method includes three steps. First, travel-time is picked by a proposed Akaike Information Criterion with energy term (AICE) method. The energy term is employed to detect and discard the transmissive wave through bone (a low-energy wave). This results in failed reconstruction for bone, but makes the iteration converge and gives correct SOS for skeletal muscle. Second, ray paths are traced using Fermat's principle. Finally, the simultaneous algebraic reconstruction technique (SART) is employed to correct SOS along ray paths, excluding paths with low-energy waves that may have passed through bone. The simulation evaluation was implemented with the k-Wave toolbox using a model of the upper arm. As a result, the SOS of muscle was 1572.0 ± 7.3 m/s, close to the model value of 1567.0 m/s. For in vivo evaluation, a ring-transducer prototype was employed to scan cross sections of the lower arm and leg of a healthy volunteer, where the skeletal muscle SOSs were 1564.0 ± 14.8 m/s and 1564.1 ± 18.0 m/s, respectively.
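The SART step in the final stage can be illustrated on a toy travel-time system. In the sketch below (the geometry, ray paths, and values are invented for illustration), rows of A hold ray-path lengths through image pixels, b holds measured travel-times, and SART iteratively corrects the per-pixel slowness t (the reciprocal of SOS):

```python
# Each row: lengths of one ray inside each of 4 pixels (2x2 "image").
A = [[1.0, 1.0, 0.0, 0.0],   # ray through top row
     [0.0, 0.0, 1.0, 1.0],   # ray through bottom row
     [1.0, 0.0, 1.0, 0.0],   # ray through left column
     [0.0, 1.0, 0.0, 1.0]]   # ray through right column
t_true = [1.0, 2.0, 3.0, 4.0]            # true slowness per pixel
b = [sum(a * t for a, t in zip(row, t_true)) for row in A]

t = [0.0] * 4
lam = 0.5                                # relaxation factor
col_sum = [sum(A[i][j] for i in range(4)) for j in range(4)]
for _ in range(200):                     # SART iterations
    corr = [0.0] * 4
    for i, row in enumerate(A):
        row_sum = sum(row)               # ray length, normalizes the residual
        resid = (b[i] - sum(a * x for a, x in zip(row, t))) / row_sum
        for j in range(4):
            corr[j] += row[j] * resid    # back-project along the ray
    t = [t[j] + lam * corr[j] / col_sum[j] for j in range(4)]

print(t)  # converges to t_true for this toy system
```

Excluding rays flagged by the AICE energy term amounts to dropping the corresponding rows of A before running these updates, which is how the method avoids contaminating the muscle SOS with waves that crossed bone.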
Zong, Qun; Shao, Shikai
2016-11-01
This paper investigates decentralized finite-time attitude synchronization for a group of rigid spacecraft using quaternions, with consideration of environmental disturbances, inertia uncertainties and actuator saturation. Nonsingular terminal sliding mode (TSM) is used for controller design. Firstly, a theorem is proven that there always exists a kind of TSM that converges faster than fast terminal sliding mode (FTSM) for a quaternion-described attitude control system. A controller with this kind of TSM achieves faster convergence and reduced computation compared with an FTSM controller. Then, combined with an adaptive parameter estimation strategy, a novel terminal sliding mode disturbance observer is proposed. The proposed disturbance observer needs no upper-bound information on the lumped uncertainties or their derivatives. On the basis of an undirected topology and the disturbance observer, decentralized attitude synchronization control laws are designed and all attitude errors are ensured to converge to small regions in finite time. As for the actuator saturation problem, an auxiliary variable is introduced and accommodated by the disturbance observer. Finally, simulation results are given and the effectiveness of the proposed control scheme is demonstrated. Copyright © 2016. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Lv, Yongfeng; Na, Jing; Yang, Qinmin; Wu, Xing; Guo, Yu
2016-01-01
An online adaptive optimal control is proposed for continuous-time nonlinear systems with completely unknown dynamics, which is achieved by developing a novel identifier-critic-based approximate dynamic programming algorithm with a dual neural network (NN) approximation structure. First, an adaptive NN identifier is designed to obviate the requirement of complete knowledge of system dynamics, and a critic NN is employed to approximate the optimal value function. Then, the optimal control law is computed based on the information from the identifier NN and the critic NN, so that the actor NN is not needed. In particular, a novel adaptive law design method with the parameter estimation error is proposed to online update the weights of both identifier NN and critic NN simultaneously, which converge to small neighbourhoods around their ideal values. The closed-loop system stability and the convergence to small vicinity around the optimal solution are all proved by means of the Lyapunov theory. The proposed adaptation algorithm is also improved to achieve finite-time convergence of the NN weights. Finally, simulation results are provided to exemplify the efficacy of the proposed methods.
Beyond Reasonable Doubt: Evolution from DNA Sequences
Penny, David
2013-01-01
We demonstrate quantitatively that, as predicted by evolutionary theory, sequences of homologous proteins from different species converge as we go further and further back in time. The converse, a non-evolutionary model, can be expressed as probabilities, and the test works for chloroplast, nuclear and mitochondrial sequences, as well as for sequences that diverged at different time depths. Even on our conservative test, the probability that chance could produce the observed levels of ancestral convergence for just one of the eight datasets of 51 proteins is ≈1×10⁻¹⁹, and combined over 8 datasets it is ≈1×10⁻¹³². By comparison, there are about 10⁸⁰ protons in the universe, hence the probability that the sequences could have been produced by a process involving unrelated ancestral sequences is about 10⁵⁰ times lower than picking, among all protons, the same proton at random twice in a row. A non-evolutionary control model shows no convergence, and only a small number of parameters are required to account for the observations. It is time that researchers insisted that doubters put up testable alternatives to evolution. PMID:23950906
Evaluation of Convergent Spray Technology(TM) Spray Process for Roof Coating Application
NASA Technical Reports Server (NTRS)
Scarpa, J.; Creighton, B.; Hall, T.; Hamlin, K.; Howard, T.
1998-01-01
The overall goal of this project was to demonstrate the feasibility of Convergent Spray Technology (Trademark) (CST) for the roofing industry. This was accomplished by producing an environmentally compliant coating utilizing recycled materials, a CST(Trademark) spray process portable application cart, and a hand-held applicator with a CST(Trademark) spray process nozzle. The project culminated with application of this coating to a nine-hundred-sixty-square-foot metal roof for NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama.
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter update and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data. © 2011, The International Biometric Society.
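The role of the decreasing step size in guaranteeing convergence can be seen in a minimal Robbins-Monro-style stochastic approximation, a drastically simplified stand-in for the paper's SAA with an invented one-dimensional objective:

```python
# Sketch: solve E[f(theta) + noise] = 0 for f(theta) = theta - 2 using
# noisy evaluations only.  The decreasing step size a_n = 1/n averages
# out the noise, so theta converges to the root theta* = 2; a constant
# step size would instead leave theta fluctuating forever.
import random

random.seed(42)
theta = 0.0
for n in range(1, 200_001):
    noisy_f = (theta - 2.0) + random.gauss(0.0, 1.0)  # noisy evaluation
    theta -= (1.0 / n) * noisy_f                      # decreasing step size
print(theta)  # ≈ 2
```

In the paper's setting the noisy evaluation comes from an MCMC estimate of an intractable expectation, so the same guarantee can alternatively be obtained by growing the MCMC sample size along iterations, as the abstract notes.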
Utilizing the Convergence of Data for Expedited Evaluations: Guidelines for School Psychologists
ERIC Educational Resources Information Center
Clopton, Kerri L.; Etscheidt, Susan
2009-01-01
The purpose of this article is to propose that a combined response to intervention (RTI)-psychoeducational assessment model be used for expedited evaluations required during disciplinary proceedings [Individuals with Disabilities Education Improvement Act, 20 U.S.C. Section 1415(k)(5)(D)(ii)]. An expedited evaluation would determine if the child…
Simulations of Converging Shock Collisions for Shock Ignition
NASA Astrophysics Data System (ADS)
Sauppe, Joshua; Dodd, Evan; Loomis, Eric
2016-10-01
Shock ignition (SI) has been proposed as an alternative route to achieving high gain in inertial confinement fusion (ICF) targets. A central hot spot below the ignition threshold is created by an initial compression pulse, and a second laser pulse drives a strong converging shock into the fuel. The collision between the rebounding shock from the compression pulse and the converging shock amplifies the converging shock and raises the hot spot pressure above the ignition threshold. We investigate shock collision in SI drive schemes for cylindrical targets with a polystyrene foam interior using radiation-hydrodynamics simulations with the RAGE code. The configuration is similar to previous targets fielded on the Omega laser. The CH interior results in a lower convergence ratio, and the cylindrical geometry facilitates visualization of the shock transit using an axial X-ray backlighter, both of which are important for comparison to potential experimental measurements. One-dimensional simulations are used to determine shock timing, and the effects of low-mode asymmetries in 2D computations are also quantified. LA-UR-16-24773.
The Effects of Dissipation and Coarse Grid Resolution for Multigrid in Flow Problems
NASA Technical Reports Server (NTRS)
Eliasson, Peter; Engquist, Bjoern
1996-01-01
The objective of this paper is to investigate the effects of the numerical dissipation and the resolution of the solution on coarser grids for multigrid with the Euler equation approximations. Convergence is accomplished by multi-stage explicit time-stepping to steady state, accelerated by FAS multigrid. A theoretical investigation is carried out for linear hyperbolic equations in one and two dimensions. The spectra reveal that, for stability and hence robustness of spatial discretizations with a small amount of numerical dissipation, the grid transfer operators must be sufficiently accurate and the smoother must have low temporal accuracy. Numerical results give grid-independent convergence in one dimension. For two-dimensional problems with a small amount of numerical dissipation, however, only a few grid levels contribute to an increased speed of convergence. This is explained by the small numerical dissipation leading to dispersion. Increasing the mesh density, and hence making the problem over-resolved, increases the number of mesh levels contributing to an increased speed of convergence. If the steady state equations are elliptic, all grid levels contribute to the convergence regardless of the mesh density.
NASA Astrophysics Data System (ADS)
Doi, Akihiro; Hada, Kazuhiro; Kino, Motoki; Wajima, Kiyoaki; Nakahara, Satomi
2018-04-01
We report the discovery of a local convergence of the jet cross section in the quasi-stationary jet feature of the γ-ray-emitting narrow-line Seyfert 1 galaxy (NLS1) 1H 0323+342. The convergence site is located at ∼7 mas (corresponding to the order of 100 pc in deprojection) from the central engine. We also found limb-brightened jet structures both upstream and downstream of the convergence site. We propose that the quasi-stationary feature showing the jet convergence and limb-brightening occurs as a consequence of a recollimation shock in the relativistic jets. The quasi-stationary feature is one of the possible γ-ray-emitting sites in this NLS1, in analogy with the HST-1 complex in the M87 jet. Monitoring observations have revealed that superluminal components passed through the convergence site, and the peak intensity of the quasi-stationary feature showed apparent coincidences with the timing of observed γ-ray activity.
Void Growth and Coalescence Simulations
2013-08-01
distortion and damage, minimum time step, and appropriate material model parameters. Further, a temporal and spatial convergence study was used to estimate errors; thus, this study helps to provide guidelines for modeling of materials with voids. Finally, we use a Gurson model with Johnson-Cook...
ERIC Educational Resources Information Center
Sánchez-Rosas, Javier; Furlan, Luis Alberto
2017-01-01
Based on the control-value theory of achievement emotions and theory of achievement goals, this research provides evidence of convergent, divergent, and criterion validity of the Spanish Cognitive Test Anxiety Scale (S-CTAS). A sample of Argentinean undergraduates responded to several scales administered at three points. At time 1 and 3, the…
NASA Astrophysics Data System (ADS)
Phillips, David A.
The southwest Pacific is one of the most tectonically dynamic regions on Earth. This research focused on crustal motion studies in three regions of active Pacific-Australia plate convergence in the southwest Pacific: Tonga, the New Hebrides (Vanuatu) and the Solomon Islands. In Tonga, new and refined velocity estimates based on more than a decade of Global Positioning System (GPS) measurements and advanced analysis techniques are much more accurate than previously reported values. Convergence rates of 80 to 165 mm/yr at the Tonga trench represent the fastest plate motions observed on Earth. For the first time, rotation of the Fiji platform relative to the Australian plate is observed, and anomalous deformation of the Tonga ridge was also detected. In the New Hebrides, a combined GPS dataset with a total time series of more than ten years led to new and refined velocity estimates throughout the island arc. Impingement of large bathymetric features has led to arc fragmentation, and four distinct tectonic segments are identified. The central New Hebrides arc segment is being shoved eastward relative to the rest of the arc as convergence is partitioned between the forearc (Australian plate) and the backarc (North Fiji Basin) boundaries due to impingement of the d'Entrecasteaux Ridge and associated Bougainville seamount. The southern New Hebrides arc converges with the Australian plate more rapidly than predicted due to backarc extension. The first measurements of convergence in the northern and southernmost arc segments were also made. In the Solomon Islands, a four-year GPS time series was used to generate the first geodetic estimates of crustal velocity in the New Georgia Group, with 57--84 mm/yr of Australia-Solomon motion and 19--39 mm/yr of Pacific-Solomon motion being observed. These velocities are 20--40% lower than predicted Australia-Pacific velocities.
Two-dimensional dislocation models suggest that most of this discrepancy can be attributed to locking of the San Cristobal trench and elastic strain accumulation in the forearc. Anomalous motion at Simbo Island is also observed.
NASA Astrophysics Data System (ADS)
Zhang, Qun; Yang, Yanfu; Xiang, Qian; Zhou, Zhongqing; Yao, Yong
2018-02-01
A joint compensation scheme based on a cascaded Kalman filter is proposed, which can implement polarization tracking, channel equalization, frequency offset compensation, and phase noise compensation simultaneously. The experimental results show that the proposed algorithm can not only compensate multiple channel impairments simultaneously but also improve the polarization tracking capability and accelerate the convergence speed. The scheme has up to eight times faster convergence compared with radius-directed equalizer (RDE) + Max-FFT (maximum fast Fourier transform) + BPS (blind phase search), and can track polarization rotation 60 times and 15 times faster than RDE + Max-FFT + BPS and CMMA (cascaded multimodulus algorithm) + Max-FFT + BPS, respectively.
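The phase-noise stage of a cascade like the one described can be illustrated with a scalar Kalman filter tracking a random-walk phase from noisy measurements. This is a generic textbook sketch, not the authors' cascaded design; the process variance q and measurement variance r are illustrative assumptions.

```python
import math
import random

def kalman_phase_track(observations, q=1e-3, r=0.1):
    """Minimal scalar Kalman filter: track a random-walk phase from
    noisy measurements. One stage of the kind of cascade described
    above; q (process variance) and r (measurement variance) are
    illustrative choices, not values from the paper."""
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in observations:
        p += q                   # predict: phase performs a random walk
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # correct with the measurement residual
        p *= 1.0 - k
        estimates.append(x)
    return estimates

random.seed(0)
true_phase, obs = 0.0, []
for _ in range(500):
    true_phase += random.gauss(0.0, math.sqrt(1e-3))
    obs.append(true_phase + random.gauss(0.0, math.sqrt(0.1)))
est = kalman_phase_track(obs)
print(abs(est[-1] - true_phase) < 0.5)
```

In a real receiver each cascade stage would run a filter of this shape over its own impairment (polarization angle, frequency offset, phase), feeding its corrected signal to the next stage.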
A far-field non-reflecting boundary condition for two-dimensional wake flows
NASA Technical Reports Server (NTRS)
Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli
1995-01-01
Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of linearized flow equations about a steady state far-field solution. The boundary condition improves convergence to steady state in single-grid temporal integration schemes using both regular time-stepping and local time-stepping. The far-field boundary may be placed near the trailing edge of the body, which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition, the solution produced is smoother in the far field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.
Dominant takeover regimes for genetic algorithms
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
The genetic algorithm (GA) is a machine-based optimization routine which connects evolutionary learning to natural genetic laws. The present work addresses the problem of obtaining the dominant takeover regimes in the GA dynamics. Estimated GA run times are computed for slow and fast convergence in the limits of high and low fitness ratios. Using Euler's device for obtaining partial sums in closed forms, the result relaxes the previously held requirements for long time limits. Analytical solutions reveal that appropriately accelerated regimes can mark the ascendancy of the most fit solution. In virtually all cases, the weak (logarithmic) dependence of convergence time on problem size demonstrates the potential for the GA to solve large NP-complete problems.
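The logarithmic dependence of takeover time on population size can be seen in a toy proportionate-selection recursion (an illustration of the general phenomenon, not the paper's closed-form analysis): starting from a single copy of the best individual, the expected proportion p evolves as p ← p·f/(p·f + (1 − p)) for fitness ratio f, and the generations needed for p to reach 1 − 1/N grow roughly like log N.

```python
def takeover_time(pop_size, fitness_ratio=2.0):
    """Generations until the best individual dominates under
    proportionate selection, starting from a single copy. Illustrates
    the abstract's point that takeover time grows only logarithmically
    with population size N; the fitness ratio is an assumed value."""
    p, t = 1.0 / pop_size, 0
    while p < 1.0 - 1.0 / pop_size:
        # Expected next-generation share under proportionate selection
        p = p * fitness_ratio / (p * fitness_ratio + (1.0 - p))
        t += 1
    return t

for n in (100, 10_000, 1_000_000):
    print(n, takeover_time(n))
```

Each factor-of-100 increase in N adds a nearly constant number of generations, which is the logarithmic scaling the abstract refers to.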
Evaluation of a new parallel numerical parameter optimization algorithm for a dynamical system
NASA Astrophysics Data System (ADS)
Duran, Ahmet; Tuncel, Mehmet
2016-10-01
It is important to have a scalable parallel numerical parameter optimization algorithm for a dynamical system used in financial applications where time limitation is crucial. We use Message Passing Interface parallel programming and present such a new parallel algorithm for parameter estimation. For example, we apply the algorithm to the asset flow differential equations that have been developed and analyzed since 1989 (see [3-6] and references contained therein). We achieved speed-up for some time series on up to 512 cores (see [10]). Unlike [10], in this work we consider more extensive financial market situations, for example in the presence of low volatility, high volatility, and a stock market price at a discount/premium to its net asset value of varying magnitude. Moreover, we evaluated the convergence of the model parameter vector, the nonlinear least squares error, and the maximum improvement factor to quantify the success of the optimization process depending on the number of initial parameter vectors.
NASA Astrophysics Data System (ADS)
Bell, Andrew F.; Naylor, Mark; Heap, Michael J.; Main, Ian G.
2011-08-01
Power-law accelerations in the mean rate of strain, earthquakes and other precursors have been widely reported prior to material failure phenomena, including volcanic eruptions, landslides and laboratory deformation experiments, as predicted by several theoretical models. The Failure Forecast Method (FFM), which linearizes the power-law trend, has been routinely used to forecast the failure time in retrospective analyses; however, its performance has never been formally evaluated. Here we use synthetic and real data, recorded in laboratory brittle creep experiments and at volcanoes, to show that the assumptions of the FFM are inconsistent with the error structure of the data, leading to biased and imprecise forecasts. We show that a Generalized Linear Model method provides higher-quality forecasts that converge more accurately to the eventual failure time, accounting for the appropriate error distributions. This approach should be employed in place of the FFM to provide reliable quantitative forecasts and estimate their associated uncertainties.
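For reference, the FFM linearization evaluated in the abstract works as follows: if a precursor rate accelerates as rate = A/(tf − t), the inverse rate falls linearly to zero at the failure time tf, so fitting a straight line to inverse rate against time and reading off its x-intercept gives the forecast. A minimal noise-free sketch with ordinary least squares on synthetic data (the GLM refinement the authors propose is not shown):

```python
def ffm_forecast(times, rates):
    """Classical Failure Forecast Method linearization: for a rate
    accelerating as rate = A/(tf - t), the inverse rate falls linearly
    to zero at the failure time tf, so the x-intercept of a
    least-squares line through (t, 1/rate) is the forecast."""
    inv = [1.0 / r for r in rates]
    n = float(len(times))
    mt = sum(times) / n
    mi = sum(inv) / n
    slope = (sum((t - mt) * (y - mi) for t, y in zip(times, inv))
             / sum((t - mt) ** 2 for t in times))
    intercept = mi - slope * mt
    return -intercept / slope        # where the fitted line hits zero

# Noise-free synthetic precursor with true failure time tf = 100.
tf = 100.0
ts = [float(t) for t in range(50, 95)]
rs = [5.0 / (tf - t) for t in ts]
print(round(ffm_forecast(ts, rs), 2))  # -> 100.0
```

The abstract's criticism is precisely that with realistic (e.g. Poisson) noise this inverse transform distorts the error structure, biasing the intercept; the noise-free case above is the idealized best case.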
Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams
NASA Astrophysics Data System (ADS)
Willow, Soohaeng Yoo; Hirata, So
2014-01-01
A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
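The Metropolis importance-sampling device mentioned above can be shown in one dimension. This is a toy estimate of E[x²] = 1 under an unnormalized Gaussian weight, not the 20-dimensional MP3 integrand; the step size, seed, and sample count are illustrative choices.

```python
import math
import random

def metropolis_expectation(f, logw, x0=0.0, steps=200_000, delta=1.5):
    """Estimate E_w[f] by Metropolis sampling from an unnormalized
    weight w(x) = exp(logw(x)): the importance-sampling device the
    abstract couples with the Metropolis algorithm, shown here in 1-D."""
    x, lw = x0, logw(x0)
    total = 0.0
    for _ in range(steps):
        xp = x + random.uniform(-delta, delta)      # symmetric proposal
        lwp = logw(xp)
        # Metropolis accept/reject preserves the target weight
        if random.random() < math.exp(min(0.0, lwp - lw)):
            x, lw = xp, lwp
        total += f(x)
    return total / steps

random.seed(7)
# Weight w = exp(-x^2/2) (unnormalized standard normal); E[x^2] = 1.
est = metropolis_expectation(lambda x: x * x, lambda x: -0.5 * x * x)
print(round(est, 2))
```

Because only ratios of the weight enter the accept test, the normalization constant of w never needs to be known, which is what makes the device attractive for high-dimensional integrands.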
Reply to "Comment on `Route from discreteness to the continuum for the Tsallis q -entropy' "
NASA Astrophysics Data System (ADS)
Oikonomou, Thomas; Bagci, G. Baris
2018-06-01
It has been known for some time that the usual q-entropy S_q(n) cannot be shown to converge to the continuous case. In Phys. Rev. E 97, 012104 (2018), 10.1103/PhysRevE.97.012104, we have shown that the discrete q-entropy S̃_q(n) converges to the continuous case when the total number of states is properly taken into account through a convergence factor. Ou and Abe [previous Comment, Phys. Rev. E 97, 066101 (2018), 10.1103/PhysRevE.97.066101] noted that this form of the discrete q-entropy does not conform to the Shannon-Khinchin expandability axiom. In reply, we note that whether the discrete q-entropy fulfills the expandability property depends strongly on the origin of the convergence factor, and we present an example in which S̃_q(n) is expandable.
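For background, the usual discrete q-entropy of n equiprobable states equals the q-logarithm ln_q(n) = (n^(1−q) − 1)/(1 − q), and for q > 1 it saturates at 1/(q − 1) instead of diverging like ln(n). A quick numeric check of that standard fact (this illustrates the ordinary S_q, not the modified S̃_q of the Reply):

```python
import math

def q_log(n, q):
    """Tsallis q-logarithm; for n equiprobable states the usual
    discrete q-entropy S_q(n) equals ln_q(n)."""
    if q == 1.0:
        return math.log(n)          # ordinary Shannon limit
    return (n ** (1.0 - q) - 1.0) / (1.0 - q)

# For q = 2 the values approach 1/(q - 1) = 1 from below, instead of
# growing without bound the way the Shannon entropy ln(n) does.
for n in (10, 1000, 100000):
    print(n, round(q_log(n, 2.0), 5))
```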
Spitzer, Ada; Camus, Didier; Desaulles, Cécile; Kuhne, Nicolas
2006-10-01
Many countries reorganizing their health services are drawn toward similar reform programs and tend to experience what seem to be similar problems relating to implementation outcomes. One such problem is the major crisis within the nursing profession relating to the labor market, working conditions and level of autonomy. This research examines the thesis that the profile of nursing problems is global (the 'convergence' thesis) by comparing the changing hospital contexts nursing has been confronting in 20 Western European countries between 1990 and 2001. The analysis indicates that in spite of growing convergence, the divergence in patient care processes, workforce composition and resources allocated for care is still rather remarkable and that similarity or divergence between countries changes over time. This contextual variability highlights why problems such as the crisis of the nursing profession must be analysed from a divergent rather than a convergent perspective.
Efficient Multi-Stage Time Marching for Viscous Flows via Local Preconditioning
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.; van Leer, Bram
1999-01-01
A new method has been developed to accelerate the convergence of explicit time-marching, laminar, Navier-Stokes codes through the combination of local preconditioning and multi-stage time marching optimization. Local preconditioning is a technique to modify the time-dependent equations so that all information moves or decays at nearly the same rate, thus relieving the stiffness for a system of equations. Multi-stage time marching can be optimized by modifying its coefficients to account for the presence of viscous terms, allowing larger time steps. We show it is possible to optimize the time marching scheme for a wide range of cell Reynolds numbers for the scalar advection-diffusion equation, and local preconditioning allows this optimization to be applied to the Navier-Stokes equations. Convergence acceleration of the new method is demonstrated through numerical experiments with circular advection and laminar boundary-layer flow over a flat plate.
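The kind of explicit multi-stage marching being optimized can be sketched on the scalar advection-diffusion model problem the abstract mentions. The stage coefficients, grid size, and tolerances below are illustrative assumptions, not the paper's optimized values:

```python
import math

def march_to_steady(n=41, a=1.0, nu=0.1, stages=(0.15, 0.4, 1.0),
                    tol=1e-10, max_iter=200_000):
    """Explicit multi-stage pseudo-time marching of 1-D
    advection-diffusion, u_t + a u_x = nu u_xx on [0,1] with
    u(0)=0, u(1)=1, to steady state. The stage coefficients are
    illustrative, not an optimized set from the paper."""
    h = 1.0 / (n - 1)
    dt = 0.4 * min(h / a, h * h / (2.0 * nu))    # conservative stable step

    def residual(v):
        r = [0.0] * n                            # zero at Dirichlet ends
        for i in range(1, n - 1):
            r[i] = (-a * (v[i + 1] - v[i - 1]) / (2.0 * h)
                    + nu * (v[i + 1] - 2.0 * v[i] + v[i - 1]) / (h * h))
        return r

    u = [i * h for i in range(n)]                # linear initial guess
    for _ in range(max_iter):
        u0 = u[:]
        for c in stages:                         # multi-stage update
            r = residual(u)
            u = [u0[i] + c * dt * r[i] for i in range(n)]
        if max(abs(x) for x in residual(u)) < tol:
            break
    return u

u = march_to_steady()
exact = lambda x: (math.exp(x / 0.1) - 1.0) / (math.exp(1.0 / 0.1) - 1.0)
err = max(abs(u[i] - exact(i / 40.0)) for i in range(41))
print(err < 1e-2)
```

Optimizing the stage coefficients for the viscous eigenvalue footprint, as the paper does, permits a larger dt than the conservative choice used here.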
Maidment, Susannah C R; Barrett, Paul M
2012-09-22
Convergent morphologies are thought to indicate functional similarity, arising because of a limited number of evolutionary or developmental pathways. Extant taxa displaying convergent morphologies are used as analogues to assess function in extinct taxa with similar characteristics. However, functional studies of extant taxa have shown that functional similarity can arise from differing morphologies, calling into question the paradigm that form and function are closely related. We test the hypothesis that convergent skeletal morphology indicates functional similarity in the fossil record using ornithischian dinosaurs. The rare transition from bipedality to quadrupedality occurred at least three times independently in this clade, resulting in a suite of convergent osteological characteristics. We use homology rather than analogy to provide an independent line of evidence about function, reconstructing soft tissues using the extant phylogenetic bracket and applying biomechanical concepts to produce qualitative assessments of muscle leverage. We also optimize character changes to investigate the sequence of character acquisition. Different lineages of quadrupedal ornithischian dinosaur stood and walked differently from each other, falsifying the hypothesis that osteological convergence indicates functional similarity. The acquisition of features correlated with quadrupedalism generally occurs in the same order in each clade, suggesting underlying developmental mechanisms that act as evolutionary constraints.
Simulating Roll Clouds associated with Low-Level Convergence.
NASA Astrophysics Data System (ADS)
Prasad, A. A.; Sherwood, S. C.
2015-12-01
Convective initiation often takes place when features such as fronts and/or rolls collide, merge, or otherwise meet. Rolls indicate boundary layer convergence and may initiate thunderstorms. These are often seen in satellite and radar imagery prior to the onset of deep convection. However, links between convergence-driven rolls and convection are poorly represented in global models. The poor representation of convection is the source of many model biases, especially over the Maritime Continent in the Tropics. We simulate low-level convergence lines over north-eastern Australia using the Weather Research and Forecasting (WRF) Model (version 3.7). The simulated events, from September-October 2002, are driven by sea-breeze circulations. Cloud lines associated with bore waves that form along the low-level convergence lines are thoroughly investigated in this study, with comparisons from satellite and surface observations. Initial simulations for a series of cloud lines observed on 4 October 2002 over the Gulf of Carpentaria showed good agreement in the timing and propagation of the disturbance and the low-level convergence; however, the cloud lines, or streets of roll clouds, were not properly captured by the model. Results from a number of WRF simulations with different microphysics, cumulus and planetary boundary layer schemes, resolution, and boundary conditions will also be discussed.
Effective field theory in the harmonic oscillator basis
Binder, S.; Ekström, Jan A.; Hagen, Gaute; ...
2016-04-25
In this paper, we develop interactions from chiral effective field theory (EFT) that are tailored to the harmonic oscillator basis. As a consequence, ultraviolet convergence with respect to the model space is implemented by construction and infrared convergence can be achieved by enlarging the model space for the kinetic energy. In oscillator EFT, matrix elements of EFTs formulated for continuous momenta are evaluated at the discrete momenta that stem from the diagonalization of the kinetic energy in the finite oscillator space. By fitting to realistic phase shifts and deuteron data we construct an effective interaction from chiral EFT at next-to-leading order. Finally, many-body coupled-cluster calculations of nuclei up to 132Sn converge fast for the ground-state energies and radii in feasible model spaces.
Evaluating the links between climate, disease spread, and amphibian declines.
Rohr, Jason R; Raffel, Thomas R; Romansic, John M; McCallum, Hamish; Hudson, Peter J
2008-11-11
Human alteration of the environment has arguably propelled the Earth into its sixth mass extinction event and amphibians, the most threatened of all vertebrate taxa, are at the forefront. Many of the worldwide amphibian declines have been caused by the chytrid fungus, Batrachochytrium dendrobatidis (Bd), and two contrasting hypotheses have been proposed to explain these declines. Positive correlations between global warming and Bd-related declines sparked the chytrid-thermal-optimum hypothesis, which proposes that global warming increased cloud cover in warm years that drove the convergence of daytime and nighttime temperatures toward the thermal optimum for Bd growth. In contrast, the spatiotemporal-spread hypothesis states that Bd-related declines are caused by the introduction and spread of Bd, independent of climate change. We provide a rigorous test of these hypotheses by evaluating (i) whether cloud cover, temperature convergence, and predicted temperature-dependent Bd growth are significant positive predictors of amphibian extinctions in the genus Atelopus and (ii) whether spatial structure in the timing of these extinctions can be detected without making assumptions about the location, timing, or number of Bd emergences. We show that there is spatial structure to the timing of Atelopus spp. extinctions but that the cause of this structure remains equivocal, emphasizing the need for further molecular characterization of Bd. We also show that the reported positive multi-decade correlation between Atelopus spp. extinctions and mean tropical air temperature in the previous year is indeed robust, but the evidence that it is causal is weak because numerous other variables, including regional banana and beer production, were better predictors of these extinctions. Finally, almost all of our findings were opposite to the predictions of the chytrid-thermal-optimum hypothesis. 
Although climate change is likely to play an important role in worldwide amphibian declines, more convincing evidence is needed of a causal link.
Upwind relaxation methods for the Navier-Stokes equations using inner iterations
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Ng, Wing-Fai; Walters, Robert W.
1992-01-01
A subsonic and a supersonic problem are respectively treated by an upwind line-relaxation algorithm for the Navier-Stokes equations using inner iterations to accelerate steady-state solution convergence and thereby minimize CPU time. While the ability of the inner iterative procedure to mimic the quadratic convergence of the direct solver method is attested to in both test problems, some of the nonquadratic inner iterative results are noted to have been more efficient than the quadratic. In the more successful, supersonic test case, inner iteration required only about 65 percent of the line-relaxation method-entailed CPU time.
NASA Astrophysics Data System (ADS)
Dinar, Ariel; Aillery, Marcel P.; Moore, Michael R.
1993-06-01
This paper presents a dynamic model of irrigated agriculture that accounts for drainage generation and salinity accumulation. Critical model relationships involving crop production, soil salinity, and irrigation drainage are based on newly estimated functions derived from lysimeter field tests. The model allocates land and water inputs over time based on an intertemporal profit maximization objective function and a soil salinity accumulation process. The model is applied to conditions in the San Joaquin Valley of California, where environmental degradation from irrigation drainage has become a policy issue. Findings indicate that in the absence of regulation, drainage volumes increase over time before reaching a steady state as increased quantities of water are allocated to leaching soil salts. The model is used to evaluate alternative drainage abatement scenarios involving drainage quotas and taxes, water supply quotas and taxes, and irrigation technology subsidies. In our example, direct drainage policies are more cost-effective in reducing drainage than policies operating indirectly through surface water use, although differences in cost efficiency are relatively small. In some cases, efforts to control drainage may result in increased soil salinity accumulation, with implications for long-term cropland productivity. While policy adjustments may alter the direction and duration of convergence to a steady state, findings suggest that a dynamic model specification may not be necessary due to rapid convergence to a common steady state under selected scenarios.
Cui, T.J.; Chew, W.C.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.
2001-01-01
In this paper, numerical simulations of a new enhanced very early time electromagnetic (VETEM) prototype system are presented, in which a horizontal transmitting loop and two horizontal receiving loops are used to detect buried targets; the three loops share the same axis, with the transmitter located at the center between the receivers. In the new VETEM system, the difference of the signals from the two receivers is taken to eliminate strong direct signals from the transmitter and background clutter, and thereby to obtain a better SNR for buried targets. Because strong coupling exists between the transmitter and receivers, accurate analysis of the three-loop antenna system is required, for which a loop-tree basis function method has been utilized to overcome the low-frequency breakdown problem. In the analysis of the scattering problem from buried targets, a conjugate gradient (CG) method with fast Fourier transform (FFT) is applied to solve the electric field integral equation. However, the convergence of such a CG-FFT algorithm is extremely slow at very low frequencies. In order to increase the convergence rate, a frequency-hopping approach has been used. Finally, the primary, coupling, reflected, and scattered magnetic fields are evaluated at the receiving loops to calculate the output electric current. Numerous simulation results are given to interpret the new VETEM system. Compared with other single-transmitter-receiver systems, the new VETEM has a better SNR and a greater ability to reduce clutter.
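The CG iteration at the heart of a CG-FFT solver is standard; in the paper the matrix-vector product is a convolution evaluated with FFTs, while the sketch below uses a plain 1-D Laplacian product on a small symmetric positive-definite test system:

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate gradient. In a CG-FFT method, matvec would be
    a convolution evaluated via FFT; here it is a plain stencil product
    on a small SPD test system."""
    x = [0.0] * len(b)
    r = b[:]                          # residual b - A x0 with x0 = 0
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# SPD test system: 1-D Laplacian stencil [-1, 2, -1]
def lap(v):
    n = len(v)
    return [2 * v[i] - (v[i - 1] if i else 0) - (v[i + 1] if i < n - 1 else 0)
            for i in range(n)]

x = conjugate_gradient(lap, [1.0] * 8)
print([round(v, 6) for v in x])  # -> [4.0, 7.0, 9.0, 10.0, 10.0, 9.0, 7.0, 4.0]
```

The slow low-frequency convergence the abstract describes corresponds to a badly conditioned operator; frequency hopping warm-starts the iteration from the solution at a nearby frequency instead of from zero.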
Bonaccorsi, Ivana; Cacciola, Francesco; Utczas, Margita; Inferrera, Veronica; Giuffrida, Daniele; Donato, Paola; Dugo, Paola; Mondello, Luigi
2016-09-01
Offline multidimensional supercritical fluid chromatography combined with reversed-phase liquid chromatography was employed for carotenoid and chlorophyll characterization in different sweet bell peppers (Capsicum annuum L.) for the first time. The first dimension consisted of an Acquity HSS C18 SB (100 × 3 mm id, 1.8 μm particles) column operated with a supercritical mobile phase in an ultra-performance convergence chromatography system, whereas the second dimension was performed in reversed-phase mode with a C30 (250 × 4.6 mm id, 3.0 μm particles) stationary phase combined with photodiode array and mass spectrometry detection. This approach allowed the determination of 115 different compounds belonging to chlorophylls, free xanthophylls, free carotenes, xanthophyll monoesters, and xanthophyll diesters, and proved to be a significant improvement in pigment determination compared to the conventional one-dimensional liquid chromatography approaches so far applied to carotenoid analysis in the studied species. Moreover, the present study also aimed, for the first time, to investigate and compare carotenoid stability and composition in overripe yellow and red bell peppers collected directly from the plant, thus also evaluating whether biochemical changes are linked to carotenoid degradation in the investigated nonclimacteric fruits. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Clinical test responses to different orthoptic exercise regimes in typical young adults
Horwood, Anna; Toor, Sonia
2014-01-01
Purpose The relative efficiency of different eye exercise regimes is unclear, and in particular the influences of practice, placebo and the amount of effort required are rarely considered. This study measured conventional clinical measures following different regimes in typical young adults. Methods A total of 156 asymptomatic young adults were directed to carry out eye exercises three times daily for 2 weeks. Exercises were directed at improving blur responses (accommodation), disparity responses (convergence), both in a naturalistic relationship, convergence in excess of accommodation, accommodation in excess of convergence, and a placebo regime. They were compared to two control groups, neither of which were given exercises, but the second of which were asked to make maximum effort during the second testing. Results Instruction set and participant effort were more effective than many exercises. Convergence exercises independent of accommodation were the most effective treatment, followed by accommodation exercises, and both regimes resulted in changes in both vergence and accommodation test responses. Exercises targeting convergence and accommodation working together were less effective than those where they were separated. Accommodation measures were prone to large instruction/effort effects and monocular accommodation facility was subject to large practice effects. Conclusions Separating convergence and accommodation exercises seemed more effective than exercising both systems concurrently and suggests that stimulation of accommodation and convergence may act in an additive fashion to aid responses. Instruction/effort effects are large and should be carefully controlled if claims for the efficacy of any exercise regime are to be made. PMID:24471739
Preszler, Jonathan; Burns, G. Leonard; Litson, Kaylee; Geiser, Christian; Servera, Mateu
2016-01-01
The objective was to determine and compare the trait and state components of oppositional defiant disorder (ODD) symptom reports across multiple informants. Mothers, fathers, primary teachers, and secondary teachers rated the occurrence of the ODD symptoms in 810 Spanish children (55% boys) on two occasions (end first and second grades). Single source latent state-trait (LST) analyses revealed that ODD symptom ratings from all four sources showed more trait (M = 63%) than state residual (M = 37%) variance. A multiple source LST analysis revealed substantial convergent validity of mothers’ and fathers’ trait variance components (M = 68%) and modest convergent validity of state residual variance components (M = 35%). In contrast, primary and secondary teachers showed low convergent validity relative to mothers for trait variance (Ms = 31%, 32%, respectively) and essentially zero convergent validity relative to mothers for state residual variance (Ms = 1%, 3%, respectively). Although ODD symptom ratings reflected slightly more trait- than state-like constructs within each of the four sources separately across occasions, strong convergent validity for the trait variance only occurred within settings (i.e., mothers with fathers; primary with secondary teachers) with the convergent validity of the trait and state residual variance components being low to non-existent across settings. These results suggest that ODD symptom reports are trait-like across time for individual sources with this trait variance, however, only having convergent validity within settings. Implications for assessment of ODD are discussed. PMID:27148784
[Variables affecting casting accuracy of quick-heating casting investments].
Takahashi, H; Nakamura, H; Iwasaki, N; Morita, N; Habu, N; Nishimura, F
1994-06-01
Recently, several new investment products for "quick heating" have been put on the Japanese market. The total casting procedure with this quick heating method takes only one hour: a 30-minute wait after the start of mixing before placing the mold directly into the 700 degrees C furnace, followed by 30 minutes of heating in the furnace. The purpose of this study was to evaluate two variables affecting casting accuracy with these new investments: the thickness of the casting liner inside the casting ring, and the waiting time before placing the mold into the 700 degrees C furnace. A stainless-steel die with a convergence angle of 8 degrees was employed, and marginal discrepancies of the crown between the wax patterns and castings were measured. The cast crowns became larger when the ring liner was thicker and when the waiting time before placing the mold into the furnace was longer. These results suggest that the new investments offer sound castings with a short casting procedure; however, careful attention to the casting conditions is necessary to obtain reproducible castings.
Hybrid Upwinding for Two-Phase Flow in Heterogeneous Porous Media with Buoyancy and Capillarity
NASA Astrophysics Data System (ADS)
Hamon, F. P.; Mallison, B.; Tchelepi, H.
2016-12-01
In subsurface flow simulation, efficient discretization schemes for the partial differential equations governing multiphase flow and transport are critical. For highly heterogeneous porous media, the temporal discretization of choice is often the unconditionally stable fully implicit (backward-Euler) method. In this scheme, the simultaneous update of all the degrees of freedom requires solving large algebraic nonlinear systems at each time step using Newton's method. This is computationally expensive, especially in the presence of strong capillary effects driven by abrupt changes in porosity and permeability between different rock types. Therefore, discretization schemes that reduce the simulation cost by improving the nonlinear convergence rate are highly desirable. To speed up nonlinear convergence, we present an efficient fully implicit finite-volume scheme for immiscible two-phase flow in the presence of strong capillary forces. In this scheme, the discrete viscous, buoyancy, and capillary spatial terms are evaluated separately based on physical considerations. We build on previous work on Implicit Hybrid Upwinding (IHU) by using the upstream saturations with respect to the total velocity to compute the relative permeabilities in the viscous term, and by determining the directionality of the buoyancy term based on the phase density differences. The capillary numerical flux is decomposed into a rock- and geometry-dependent transmissibility factor, a nonlinear capillary diffusion coefficient, and an approximation of the saturation gradient. Combining the viscous, buoyancy, and capillary terms, we obtain a numerical flux that is consistent, bounded, differentiable, and monotone for homogeneous one-dimensional flow. The proposed scheme also accounts for spatially discontinuous capillary pressure functions. 
Specifically, at the interface between two rock types, the numerical scheme accurately honors the entry pressure condition by solving a local nonlinear problem to compute the numerical flux. Heterogeneous numerical tests demonstrate that this extended IHU scheme is non-oscillatory and convergent upon refinement. They also illustrate the superior accuracy and nonlinear convergence rate of the IHU scheme compared with the standard phase-based upstream weighting approach.
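The core viscous-term idea of Implicit Hybrid Upwinding, choosing the upstream saturation by the sign of the total velocity, can be illustrated with a minimal sketch. This is not the authors' implementation: the quadratic relative-permeability curves, the viscosities, and the function name are illustrative assumptions; only the upwinding rule follows the text.

```python
def viscous_flux_ihu(s_left, s_right, u_total, mu_w=1.0, mu_n=5.0):
    """Viscous (total-velocity driven) wetting-phase flux at a cell face.
    The upstream saturation is chosen by the sign of the total velocity,
    as in Implicit Hybrid Upwinding. Quadratic relative permeabilities
    are an illustrative assumption, not the paper's model."""
    s_up = s_left if u_total >= 0.0 else s_right   # upwind w.r.t. total velocity
    lam_w = s_up ** 2 / mu_w                       # wetting-phase mobility
    lam_n = (1.0 - s_up) ** 2 / mu_n               # non-wetting-phase mobility
    return lam_w / (lam_w + lam_n) * u_total       # fractional flow times u_T
```

Because the fractional flow is evaluated at a single upwind state per face, the resulting flux is differentiable and monotone in the homogeneous one-dimensional setting, which is what helps Newton convergence.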
Student-Centered Classrooms: Past Initiatives, Future Practices
ERIC Educational Resources Information Center
Hansen, Dee; Imse, Leslie A.
2016-01-01
Music teacher evaluations traditionally examine how teachers develop student music-learning objectives, assess cognitive and performance skills, and direct classroom learning experiences and behavior. A convergence of past and current educational ideas and directives is changing how teachers are evaluated on their use of student-centered…
Development and evaluation of the INSPIRE measure of staff support for personal recovery.
Williams, Julie; Leamy, Mary; Bird, Victoria; Le Boutillier, Clair; Norton, Sam; Pesola, Francesca; Slade, Mike
2015-05-01
No individualised standardised measure of staff support for mental health recovery exists. The aim was to develop and evaluate such a measure. An initial draft of the measure was based on a systematic review of recovery processes, followed by consultation (n = 61) and piloting (n = 20). Psychometric evaluation comprised three rounds of data collection from mental health service users (n = 92). INSPIRE has two sub-scales. The 20-item Support sub-scale has convergent validity (0.60) and adequate sensitivity to change. Exploratory factor analysis (variance 71.4-85.1 %, Kaiser-Meyer-Olkin 0.65-0.78) and internal consistency (range 0.82-0.85) indicate each recovery domain is adequately assessed. The 7-item Relationship sub-scale has convergent validity 0.69, test-retest reliability 0.75, internal consistency 0.89, a one-factor solution (variance 70.5 %, KMO 0.84) and adequate sensitivity to change. A 5-item Brief INSPIRE was also evaluated. INSPIRE and Brief INSPIRE demonstrate adequate psychometric properties and can be recommended for research and clinical use.
NASA Astrophysics Data System (ADS)
Pervez, M. S.; McNally, A.; Arsenault, K. R.
2017-12-01
Convergence of evidence from different agro-hydrologic sources is particularly important for drought monitoring in data-sparse regions. In Africa, a combination of remote sensing and land surface modeling experiments is used to evaluate past, present and future drought conditions. The Famine Early Warning Systems Network (FEWS NET) Land Data Assimilation System (FLDAS) routinely simulates daily soil moisture, evapotranspiration (ET) and other variables over Africa using multiple models and inputs. We found that Noah 3.3, Variable Infiltration Capacity (VIC) 4.1.2, and Catchment Land Surface Model based FLDAS simulations of monthly soil moisture percentile maps captured concurrent drought and water surplus episodes effectively over East Africa. However, the results are sensitive to the selection of land surface model and hydrometeorological forcings. We seek to identify sources of uncertainty (input, model, parameter) to eventually improve the accuracy of FLDAS outputs. In the absence of in situ data, previous work used European Space Agency Climate Change Initiative Soil Moisture (CCI-SM) data derived from merged active-passive microwave remote sensing to evaluate FLDAS soil moisture, and found that during the high rainfall months of April-May and November-December Noah-based soil moisture correlates well with CCI-SM over the Greater Horn of Africa region. We have found good correlations (r>0.6) for FLDAS Noah 3.3 ET anomalies and Operational Simplified Surface Energy Balance (SSEBop) ET over East Africa. Recently, SSEBop ET estimates (version 4) were improved by implementing a land surface temperature correction factor. We re-evaluate the correlations between FLDAS ET and version 4 SSEBop ET. To further investigate the reasons for differences between models, we evaluate FLDAS soil moisture with Advanced Scatterometer and SMAP soil moisture and FLDAS outputs with MODIS and AVHRR normalized difference vegetation index. 
By exploring longer historic time series and near-real-time products, we will be aiding convergence of evidence for better understanding of historic drought, improved monitoring and forecasting, and better understanding of uncertainties in water availability estimation over Africa.
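The monthly soil moisture percentile maps mentioned above rank a month's value against its historical distribution for the same calendar month. A minimal sketch of such an empirical percentile (the function name and any drought threshold are illustrative assumptions, not FLDAS conventions):

```python
def soil_moisture_percentile(value, climatology):
    """Empirical percentile of a current monthly value against a historical
    distribution for the same calendar month, a common drought index.
    Low percentiles (e.g. at or below ~20) are often flagged as drought,
    though operational thresholds vary."""
    clim = sorted(climatology)
    below = sum(1 for c in clim if c <= value)
    return 100.0 * below / len(clim)
```

For a ten-value climatology of 1 through 10, a current value of 5 sits at the 50th percentile.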
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan S; Krishnamurthy, Dheepak; Top, Philip
This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
NASA Astrophysics Data System (ADS)
Beyhaghi, Pooriya
2016-11-01
This work considers the problem of the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first algorithm of this type. Our algorithm remarkably reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, rigorous proof of convergence is established to the global minimum of the problem considered.
Rainfall Morphology in Semi-Tropical Convergence Zones
NASA Technical Reports Server (NTRS)
Shepherd, J. Marshall; Ferrier, Brad S.; Ray, Peter S.
2000-01-01
Central Florida is the ideal test laboratory for studying convergence zone-induced convection. The region regularly experiences sea breeze fronts and rainfall-induced outflow boundaries. The focus of this study is the common yet poorly studied convergence zone established by the interaction of the sea breeze front and an outflow boundary. Previous studies have investigated mechanisms primarily affecting storm initiation by such convergence zones. Few have focused on rainfall morphology, yet these storms contribute a significant amount of precipitation to the annual rainfall budget. Low-level convergence and mid-tropospheric moisture have both been shown to correlate with rainfall amounts in Florida. Using 2D and 3D numerical simulations, the roles of low-level convergence and mid-tropospheric moisture in rainfall evolution are examined. The results indicate that time-averaged, vertical moisture flux (VMF) at the sea breeze front/outflow convergence zone is directly and linearly proportional to initial condensation rates. This proportionality establishes a similar relationship between VMF and initial rainfall. Vertical moisture flux, which encompasses depth and magnitude of convergence, is better correlated to initial rainfall production than surface moisture convergence. This extends early observational studies which linked rainfall in Florida to surface moisture convergence. The amount and distribution of mid-tropospheric moisture determines how rainfall associated with secondary cells develops. Rainfall amount and efficiency varied significantly over an observable range of relative humidities in the 850-500 mb layer even though rainfall evolution was similar during the initial or "first-cell" period. Rainfall variability was attributed to drier mid-tropospheric environments inhibiting secondary cell development through entrainment effects. 
Observationally, 850-500 mb moisture structure exhibits wider variability than lower level moisture, which is virtually always present in Florida. A likely consequence of the variability in 850-500 mb moisture is a stronger statistical correlation to rainfall, which observational studies have noted. The study indicates that vertical moisture flux forcing at convergence zones is critical in determining rainfall in the initial stage of development but plays a decreasing role in rainfall evolution as the system matures. The mid-tropospheric moisture (e.g. environment) plays an increasing role in rainfall evolution as the system matures. This suggests the need to improve measurements of magnitude/depth of convergence and mid-tropospheric moisture distribution. It also highlights the need for better parameterization of entrainment and vertical moisture distribution in larger-scale models.
Thepsoonthorn, C.; Yokozuka, T.; Miura, S.; Ogawa, K.; Miyake, Y.
2016-01-01
As prior knowledge is claimed to be an essential key to achieve effective education, we are interested in exploring whether prior knowledge enhances communication effectiveness. To demonstrate the effects of prior knowledge, mutual gaze convergence and head nodding synchrony are observed as indicators of communication effectiveness. We conducted an experiment on lecture task between lecturer and student under 2 conditions: prior knowledge and non-prior knowledge. The students in prior knowledge condition were provided the basic information about the lecture content and were assessed their understanding by the experimenter before starting the lecture while the students in non-prior knowledge had none. The result shows that the interaction in prior knowledge condition establishes significantly higher mutual gaze convergence (t(15.03) = 6.72, p < 0.0001; α = 0.05, n = 20) and head nodding synchrony (t(16.67) = 1.83, p = 0.04; α = 0.05, n = 19) compared to non-prior knowledge condition. This study reveals that prior knowledge facilitates mutual gaze convergence and head nodding synchrony. Furthermore, the interaction with and without prior knowledge can be evaluated by measuring or observing mutual gaze convergence and head nodding synchrony. PMID:27910902
Cole, Jason C; Ito, Diane; Chen, Yaozhu J; Cheng, Rebecca; Bolognese, Jennifer; Li-McLeod, Josephine
2014-09-04
There is a lack of validated instruments to measure the burden of Alzheimer's disease (AD) on caregivers. The Impact of Alzheimer's Disease on Caregiver Questionnaire (IADCQ) is a 12-item instrument with a seven-day recall period that measures AD caregiver burden across emotional, physical, social, financial, sleep, and time aspects. The primary objectives of this study were to evaluate the psychometric properties of the IADCQ administered on the Web and to determine the most appropriate scoring algorithm. A national sample of 200 unpaid AD caregivers participated in this study by completing the Web-based version of the IADCQ and the Short Form-12 Health Survey Version 2 (SF-12v2™). The SF-12v2 was used to measure convergent validity of IADCQ scores and to provide an understanding of the overall health-related quality of life of sampled AD caregivers. The IADCQ survey was also completed four weeks later by a randomly selected subgroup of 50 participants to assess test-retest reliability. Confirmatory factor analysis (CFA) was implemented to test the dimensionality of the IADCQ items. Classical item-level and scale-level psychometric analyses were conducted to estimate psychometric characteristics of the instrument. Test-retest reliability was performed to evaluate the instrument's stability and consistency over time. Virtually none (2%) of the respondents had either floor or ceiling effects, indicating the IADCQ covers an ideal range of burden. A single-factor model obtained appropriate goodness of fit and provided evidence that a simple sum score of the 12 IADCQ items can be used to measure AD caregiver burden. Scale-level reliability was supported with a coefficient alpha of 0.93 and an intra-class correlation coefficient (for test-retest reliability) of 0.68 (95% CI: 0.50-0.80). Low-moderate negative correlations were observed between the IADCQ and scales of the SF-12v2. 
The study findings suggest the IADCQ has appropriate psychometric characteristics as a unidimensional, Web-based measure of AD caregiver burden and is supported by strong model fit statistics from CFA, high degree of item-level reliability, good internal consistency, moderate test-retest reliability, and moderate convergent validity. Additional validation of the IADCQ is warranted to ensure invariance between the paper-based and Web-based administration and to determine an appropriate responder definition.
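The coefficient alpha of 0.93 reported above is Cronbach's alpha, which can be computed directly from an item-score matrix. A minimal sketch, not the study's code; the toy matrix in the test is invented:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

Perfectly parallel items (identical columns) yield alpha = 1, and alpha falls as the items become less consistent with one another.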
Spurious Solutions Of Nonlinear Differential Equations
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.; Griffiths, D. F.
1992-01-01
Report utilizes a nonlinear-dynamics approach to investigate possible sources of errors, slow convergence, and non-convergence of steady-state numerical solutions when using a time-dependent approach for problems containing nonlinear source terms. Emphasizes implications for development of algorithms in CFD and the computational sciences in general. The main conclusion of the study is that the qualitative features of nonlinear differential equations cannot be adequately represented by finite-difference methods, and vice versa.
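The phenomenon this report studies can be reproduced with a textbook example (not one of the report's own cases): explicit Euler applied to the logistic ODE u' = u(1 - u). For time steps below the linearized stability limit the iteration converges to the true steady state u = 1; beyond it, the discrete map settles on a spurious periodic orbit that the continuous equation does not possess.

```python
def euler_logistic(u0, dt, steps):
    """Explicit Euler applied to u' = u(1 - u); the true steady state is u = 1.
    For 0 < dt < 2 the fixed point u = 1 is stable; for dt = 2.5 the map is
    conjugate to the logistic map with r = 3.5 and converges to a spurious
    period-4 orbit instead."""
    u = u0
    for _ in range(steps):
        u = u + dt * u * (1.0 - u)
    return u
```

With dt = 0.5 the iterates converge monotonically to 1; with dt = 2.5 they oscillate permanently, so a steady-state residual would never converge even though the underlying ODE has a simple attracting equilibrium.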
Artificial dissipation and central difference schemes for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1987-01-01
An artificial dissipation model, including boundary treatment, that is employed in many central difference schemes for solving the Euler and Navier-Stokes equations is discussed. Modifications of this model such as the eigenvalue scaling suggested by upwind differencing are examined. Multistage time stepping schemes with and without a multigrid method are used to investigate the effects of changes in the dissipation model on accuracy and convergence. Improved accuracy for inviscid and viscous airfoil flow is obtained with the modified eigenvalue scaling. Slower convergence rates are experienced with the multigrid method using such scaling. The rate of convergence is improved by applying a dissipation scaling function that depends on mesh cell aspect ratio.
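The blended second/fourth-difference dissipation model discussed above is often written, for a scalar 1D periodic field, with a pressure-based switch. The sketch below is a generic JST-type model with typical coefficients; the spectral-radius scaling and the paper's specific eigenvalue and aspect-ratio scalings are omitted, and the coefficient values are illustrative assumptions.

```python
import numpy as np

def jst_dissipation(p, u, k2=0.5, k4=1.0 / 32.0):
    """Scalar JST-type artificial dissipation on a 1D periodic grid.
    A pressure-based sensor activates second differences near shocks and
    fourth differences in smooth regions; the flux form makes the operator
    conservative (contributions telescope to zero)."""
    # pressure sensor nu_i = |p_{i+1} - 2 p_i + p_{i-1}| / (p_{i+1} + 2 p_i + p_{i-1})
    nu = np.abs(np.roll(p, -1) - 2.0 * p + np.roll(p, 1)) / (
        np.roll(p, -1) + 2.0 * p + np.roll(p, 1))
    eps2 = k2 * np.maximum(nu, np.roll(nu, -1))     # at face i+1/2
    eps4 = np.maximum(0.0, k4 - eps2)               # switched off near shocks
    d1 = np.roll(u, -1) - u                         # first difference at face
    d3 = np.roll(u, -2) - 3.0 * np.roll(u, -1) + 3.0 * u - np.roll(u, 1)
    flux = eps2 * d1 - eps4 * d3                    # dissipative face flux
    return flux - np.roll(flux, 1)                  # D_i = f_{i+1/2} - f_{i-1/2}
```

A constant field receives zero dissipation, and the total dissipation sums to zero over the periodic domain, reflecting conservation.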
A Weak Galerkin Method for the Reissner–Mindlin Plate in Primary Form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
We developed a new finite element method for the Reissner–Mindlin equations in its primary form by using the weak Galerkin approach. Like other weak Galerkin finite element methods, this one is highly flexible and robust by allowing the use of discontinuous approximating functions on arbitrary shapes of polygons and, at the same time, is parameter independent with respect to its stability and convergence. Furthermore, error estimates of optimal order in mesh size h are established for the corresponding weak Galerkin approximations. Numerical experiments are conducted for verifying the convergence theory, as well as suggesting some superconvergence and a uniform convergence of the method with respect to the plate thickness.
NASA Astrophysics Data System (ADS)
Sun, Shu-Ting; Li, Xiao-Dong; Zhong, Ren-Xin
2017-10-01
For nonlinear switched discrete-time systems with input constraints, this paper presents an open-closed-loop iterative learning control (ILC) approach, which includes a feedforward ILC part and a feedback control part. Under a given switching rule, the mathematical induction is used to prove the convergence of ILC tracking error in each subsystem. It is demonstrated that the convergence of ILC tracking error is dependent on the feedforward control gain, but the feedback control can speed up the convergence process of ILC by a suitable selection of feedback control gain. A switched freeway traffic system is used to illustrate the effectiveness of the proposed ILC law.
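The open-closed-loop structure described above, a feedforward learning update plus a current-iteration feedback term, can be sketched for a single scalar (non-switched) subsystem. The plant, gains, and reference trajectory below are illustrative assumptions, not the paper's switched freeway traffic model.

```python
import numpy as np

def ilc_run(iterations=20, T=50, a=0.5, b=1.0, L_ff=0.8, K_fb=0.3):
    """Open-closed-loop P-type ILC on the scalar plant x(t+1) = a x(t) + b u(t),
    y = x. Feedforward update: u_{k+1}(t) = u_k(t) + L_ff * e_k(t+1).
    Feedback (current iteration): applied input is u_k(t) + K_fb * e_k(t).
    Returns the sup-norm tracking error of each iteration."""
    yd = np.sin(np.linspace(0.0, 2.0 * np.pi, T + 1))  # desired trajectory
    u = np.zeros(T)                                    # feedforward input
    errors = []
    for _ in range(iterations):
        x = np.zeros(T + 1)
        e = np.zeros(T + 1)
        for t in range(T):
            e[t] = yd[t] - x[t]
            x[t + 1] = a * x[t] + b * (u[t] + K_fb * e[t])  # feedback term
        e[T] = yd[T] - x[T]
        errors.append(float(np.max(np.abs(e))))
        u = u + L_ff * e[1:]                               # learning update
    return errors
```

With |1 - b*L_ff| < 1 the tracking error contracts across iterations, and the feedback gain speeds up the transient, matching the paper's qualitative conclusion that feedforward sets convergence while feedback accelerates it.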
A Weak Galerkin Method for the Reissner–Mindlin Plate in Primary Form
Mu, Lin; Wang, Junping; Ye, Xiu
2017-10-04
We developed a new finite element method for the Reissner–Mindlin equations in its primary form by using the weak Galerkin approach. Like other weak Galerkin finite element methods, this one is highly flexible and robust by allowing the use of discontinuous approximating functions on arbitrary shapes of polygons and, at the same time, is parameter independent with respect to its stability and convergence. Furthermore, error estimates of optimal order in mesh size h are established for the corresponding weak Galerkin approximations. Numerical experiments are conducted for verifying the convergence theory, as well as suggesting some superconvergence and a uniform convergence of the method with respect to the plate thickness.
A Pseudo-Temporal Multi-Grid Relaxation Scheme for Solving the Parabolized Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
White, J. A.; Morrison, J. H.
1999-01-01
A multi-grid, flux-difference-split, finite-volume code, VULCAN, is presented for solving the elliptic and parabolized forms of the equations governing three-dimensional, turbulent, calorically perfect and non-equilibrium chemically reacting flows. The space-marching algorithms developed to improve convergence rate and/or reduce computational cost are emphasized. The algorithms presented are extensions to the class of implicit pseudo-time iterative, upwind space-marching schemes. A full approximation storage, full multi-grid scheme is also described, which is used to accelerate the convergence of a Gauss-Seidel relaxation method. The multi-grid algorithm is shown to significantly improve convergence on high-aspect-ratio grids.
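The idea of accelerating Gauss-Seidel relaxation with a multigrid scheme can be sketched with a two-grid correction cycle for the 1D Poisson problem. This is a generic stand-in, not VULCAN's scheme: grid sizes, sweep counts, and transfer operators are illustrative choices.

```python
import numpy as np

def apply_A(u, h):
    """Apply the second-order 1D Poisson operator -u'' (zero Dirichlet BCs)."""
    up = np.concatenate(([0.0], u, [0.0]))
    return (-up[:-2] + 2.0 * up[1:-1] - up[2:]) / (h * h)

def gauss_seidel(u, f, h, sweeps):
    """Lexicographic Gauss-Seidel relaxation sweeps for -u'' = f."""
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = 0.5 * (left + right + h * h * f[i])
    return u

def two_grid_cycle(u, f, h):
    """One two-grid correction cycle: pre-smooth, restrict the residual
    (full weighting), solve the coarse problem (here by many sweeps),
    prolong the correction (linear interpolation), correct, post-smooth."""
    u = gauss_seidel(u, f, h, 3)                              # pre-smoothing
    r = f - apply_A(u, h)                                     # fine residual
    rc = 0.25 * r[0:-2:2] + 0.5 * r[1::2] + 0.25 * r[2::2]    # restriction
    ec = gauss_seidel(np.zeros_like(rc), rc, 2.0 * h, 200)    # coarse solve
    e = np.zeros_like(u)
    e[1::2] = ec                                              # coincident nodes
    ecp = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])                      # interpolation
    return gauss_seidel(u + e, f, h, 3)                       # post-smoothing
```

For -u'' = 1 on (0, 1) the discrete solution matches x(1-x)/2 at the nodes, and a few cycles reduce the residual by orders of magnitude, whereas plain Gauss-Seidel alone stalls on the smooth error components.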
An improved VSS NLMS algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan
2017-08-01
In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared to other traditional adaptive filtering algorithms, but there is a trade-off between convergence speed and steady-state error that limits its performance. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily tuned parameters, and effectively resolves this trade-off. Simulation results show that the proposed algorithm achieves good tracking ability, a fast convergence rate, and low steady-state error simultaneously.
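A variable step size NLMS of the general kind described can be sketched as follows. The paper's specific step-size rule is not reproduced here; the error-power-driven rule, filter length, and forgetting factor below are illustrative assumptions.

```python
import numpy as np

def vss_nlms(x, d, M=8, mu_max=1.0, eps=1e-8):
    """NLMS with an error-power-driven variable step size: the step stays
    large while the error power is high (fast convergence) and shrinks as
    the error power drops (low steady-state error). The specific step-size
    rule is an illustrative assumption, not the paper's formula."""
    w = np.zeros(M)
    p = 1.0                                    # running error-power estimate
    for n in range(M - 1, len(d)):
        u = x[n - M + 1:n + 1][::-1]           # regressor, newest sample first
        e = d[n] - w @ u                       # a-priori error
        p = 0.99 * p + 0.01 * e * e            # smooth the squared error
        mu = mu_max * p / (p + 1.0)            # variable step in (0, mu_max)
        w += mu * e * u / (u @ u + eps)        # normalized LMS update
    return w
```

In a noise-free system-identification setting the weights converge toward the unknown filter taps, quickly at first and with progressively smaller corrections as the error power decays.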
NASA Astrophysics Data System (ADS)
Kumar, Love; Sharma, Vishal; Singh, Amarpal
2017-12-01
Wireless Sensor Networks (WSNs) serve a variety of application areas, for instance civil, military, and video surveillance, with restricted power resources and transmission links. Accommodating the massive traffic load of large sensor networks is another key issue. Consequently, there is a need to backhaul the sensed information of such networks and extend the transmission link to reach distinct receivers. The Passive Optical Network (PON), a next-generation access technology, emerges as a suitable candidate for converging the sensed data toward the core system. Earlier demonstrated work with a single-OLT PON suffers from buffer overload in video surveillance scenarios. In this paper, to combine the bandwidth potential of PONs with the mobility capability of WSNs, the viability of converging PONs and WSNs incorporating multiple optical line terminals is demonstrated to relieve the overloaded OLTs. Standard M/M/1 queueing theory, with interleaved polling with adaptive cycle time (IPACT) as the dynamic bandwidth algorithm, is used to avoid packet collisions. Further, the proposed multi-sink WSN and multi-OLT PON converged structure is investigated in bidirectional mode, analytically and through computer simulations. The observations show that the proposed structure can accommodate heavy data traffic with lower delay.
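The M/M/1 results used in such analyses follow from closed-form steady-state formulas. A minimal sketch of standard queueing theory, not the paper's full IPACT model; the rates in the test are illustrative:

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 queue metrics: Poisson arrivals at rate lam,
    exponential service at rate mu, requiring lam < mu for stability.
    Returns (utilization, mean number in system, mean time in system)."""
    rho = lam / mu
    if rho >= 1.0:
        raise ValueError("queue is unstable: lam must be < mu")
    L = rho / (1.0 - rho)      # mean number in system
    W = 1.0 / (mu - lam)       # mean sojourn time (Little's law: L = lam * W)
    return rho, L, W
```

At half load (lam = 1, mu = 2) the queue holds one packet on average, and the mean sojourn time equals one service-rate unit; both quantities blow up as lam approaches mu, which is the buffer-overload regime the paper targets.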
Rapid divergence and convergence of life-history in experimentally evolved Drosophila melanogaster.
Burke, Molly K; Barter, Thomas T; Cabral, Larry G; Kezos, James N; Phillips, Mark A; Rutledge, Grant A; Phung, Kevin H; Chen, Richard H; Nguyen, Huy D; Mueller, Laurence D; Rose, Michael R
2016-09-01
Laboratory selection experiments are alluring in their simplicity, power, and ability to inform us about how evolution works. A longstanding challenge facing evolution experiments with metazoans is that significant generational turnover takes a long time. In this work, we present data from a unique system of experimentally evolved laboratory populations of Drosophila melanogaster that have experienced three distinct life-history selection regimes. The goal of our study was to determine how quickly populations of a certain selection regime diverge phenotypically from their ancestors, and how quickly they converge with independently derived populations that share a selection regime. Our results indicate that phenotypic divergence from an ancestral population occurs rapidly, within dozens of generations, regardless of that population's evolutionary history. Similarly, populations sharing a selection treatment converge on common phenotypes in this same time frame, regardless of selection pressures those populations may have experienced in the past. These patterns of convergence and divergence emerged much faster than expected, suggesting that intermediate evolutionary history has transient effects in this system. The results we draw from this system are applicable to other experimental evolution projects, and suggest that many relevant questions can be sufficiently tested on shorter timescales than previously thought. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Panthi, Krishna Kanta; Shrestha, Pawan Kumar
2018-06-01
Total plastic deformation in tunnels passing through weak and schistose rock mass consists of both time-independent and time-dependent deformations. The extent of this total deformation is heavily influenced by the rock mass deformability properties and the in situ stress conditions prevailing in the area. If the in situ stress is not isotropic, the deformation magnitude differs not only along the longitudinal alignment but also along the periphery of the tunnel wall. This manuscript first evaluates the long-term plastic deformation records of three tunnel projects from the Nepal Himalaya and identifies the interlink between the time-independent and time-dependent deformations using the convergence law proposed by Sulem et al. (Int J Rock Mech Min Sci Geomech 24(3):145-154, 1987a, Int J Rock Mech Min Sci Geomech 24(3):155-164, 1987b). Secondly, the manuscript attempts to establish a correlation between plastic deformations (tunnel strain) and rock mass deformability properties, support pressure and in situ stress conditions. Finally, patterns of time-independent and time-dependent plastic deformations are also evaluated and discussed. The long-term plastic deformation records of 24 tunnel sections representing four different rock types of three different headrace tunnel cases from Nepal Himalaya are extensively used in this endeavor. The authors believe that the proposed findings will be a step further in the analysis of plastic deformations in tunnels passing through weak and schistose rock mass under anisotropic stress conditions.
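The convergence law of Sulem et al. (1987a, b) referred to above is commonly cited in the following closed form; this is quoted from the tunnelling literature rather than from the paper itself, so the exponent and constants should be checked against the originals:

```latex
C(x,t) = C_{\infty x}
\left[ 1 - \left( \frac{X}{x + X} \right)^{2} \right]
\left[ 1 + m \left( 1 - \left( \frac{T}{t + T} \right)^{n} \right) \right]
```

Here \(x\) is the distance to the tunnel face, \(t\) the time since excavation, \(C_{\infty x}\) the instantaneous convergence far from the face, \(X\) and \(T\) characteristic distance and time parameters, \(m\) the ratio of time-dependent to instantaneous closure, and \(n\) an empirical exponent often taken near 0.3. The first bracket captures the time-independent, face-advance effect; the second bracket captures the time-dependent (creep) part, which is the decomposition the manuscript exploits.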
Dynamical complexity of short and noisy time series. Compression-Complexity vs. Shannon entropy
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Balasubramanian, Karthi
2017-07-01
Shannon entropy has been extensively used for characterizing complexity of time series arising from chaotic dynamical systems and stochastic processes such as Markov chains. However, for short and noisy time series, Shannon entropy performs poorly. Complexity measures which are based on lossless compression algorithms are a good substitute in such scenarios. We evaluate the performance of two such Compression-Complexity Measures, namely Lempel-Ziv complexity (LZ) and Effort-To-Compress (ETC), on short time series from chaotic dynamical systems in the presence of noise. Both LZ and ETC outperform Shannon entropy (H) in accurately characterizing the dynamical complexity of such systems. For very short binary sequences (which arise in neuroscience applications), ETC has a higher number of distinct complexity values than LZ and H, thus enabling a finer resolution. For two-state ergodic Markov chains, we empirically show that ETC converges to a steady state value faster than LZ. Compression-Complexity measures are promising for applications which involve short and noisy time series.
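Lempel-Ziv complexity as used in this setting counts the phrases in the LZ76 parsing of a symbol sequence. A minimal sketch (quadratic-time string scanning, adequate for the short sequences the paper targets; ETC is not shown, and any normalization used in the paper is omitted):

```python
def lz_complexity(s):
    """Number of phrases in the LZ76 exhaustive-history parsing of a string:
    each phrase is extended for as long as it already occurs (possibly with
    overlap) in the text preceding its final symbol."""
    n, i, phrases = len(s), 0, 0
    while i < n:
        l = 1
        # extend while s[i:i+l] occurs inside s[:i+l-1] (overlap allowed)
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases
```

A constant sequence parses into 2 phrases and a period-2 sequence into 3, while aperiodic sequences of the same length yield many more phrases, which is the resolution effect the abstract compares across LZ, ETC, and H.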
Haines, Brian M.; Aldrich, C. H.; Campbell, J. M.; ...
2017-04-24
In this study, we present the results of high-resolution simulations of the implosion of high-convergence layered indirect-drive inertial confinement fusion capsules of the type fielded on the National Ignition Facility using the xRAGE radiation-hydrodynamics code. In order to evaluate the suitability of xRAGE to model such experiments, we benchmark simulation results against available experimental data, including shock-timing, shock-velocity, and shell trajectory data, as well as hydrodynamic instability growth rates. We discuss the code improvements that were necessary in order to achieve favorable comparisons with these data. Due to its use of adaptive mesh refinement and Eulerian hydrodynamics, xRAGE is particularly well suited for high-resolution study of multi-scale engineering features such as the capsule support tent and fill tube, which are known to impact the performance of high-convergence capsule implosions. High-resolution two-dimensional (2D) simulations including accurate and well-resolved models for the capsule fill tube, support tent, drive asymmetry, and capsule surface roughness are presented. These asymmetry seeds are isolated in order to study their relative importance and the resolution of the simulations enables the observation of details that have not been previously reported. We analyze simulation results to determine how the different asymmetries affect hotspot reactivity, confinement, and confinement time and how these combine to degrade yield. Yield degradation associated with the tent occurs largely through decreased reactivity due to the escape of hot fuel mass from the hotspot. Drive asymmetries and the fill tube, however, degrade yield primarily via burn truncation, as associated instability growth accelerates the disassembly of the hotspot. 
Finally, modeling all of these asymmetries together in 2D leads to improved agreement with experiment but falls short of explaining the experimentally observed yield degradation, consistent with previous 2D simulations of such capsules.« less
The Problem of Convergence and Commitment in Multigroup Evaluation Planning.
ERIC Educational Resources Information Center
Hausken, Chester A.
This paper outlines a model for multigroup evaluation planning in a rural-education setting wherein the commitment to the structure necessary to evaluate a program is needed on the part of a research and development laboratory, the state departments of education, county supervisors, and the rural schools. To bridge the gap between basic research,…
NASA Astrophysics Data System (ADS)
Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher
2015-07-01
Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for them are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed, based on the generalized Taylor series formula and a residual error function. The approach provides solutions in the form of a rapidly convergent series with components that are easily computed using symbolic computation software. For evaluation and validation, the proposed technique was applied to three different models and compared with several well-known methods. The resulting simulations demonstrate the accuracy and effectiveness of the technique, both in preserving the structure of the constructed solutions and in predicting solitary pattern solutions of time-fractional dispersive partial differential equations.
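Schematically, residual power series techniques of this kind expand the solution in a fractional power series (shown here for the Caputo derivative \(D_t^{\alpha}\); the notation is illustrative, not necessarily the paper's):

```latex
u_N(x,t) \;=\; \sum_{n=0}^{N} f_n(x)\,\frac{t^{n\alpha}}{\Gamma(n\alpha+1)},
\qquad 0 < \alpha \le 1,
\qquad
\operatorname{Res}_N(x,t) \;=\; D_t^{\alpha} u_N + \mathcal{N}[u_N],
```

where \(\mathcal{N}\) collects the spatial terms of the equation. The coefficients \(f_n(x)\) are obtained successively by imposing \(D_t^{(n-1)\alpha} \operatorname{Res}_N(x,0) = 0\), which reduces at each order to an algebraic equation for \(f_n\); for \(\alpha = 1\) the expansion collapses to the classical Taylor series.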
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
In this work, an analysis of parameter estimation for the retention factor in a GC (gas chromatography) model was performed under two different criteria: the sum of squared errors and the maximum error in absolute value; the relevant statistics are described for each case. The main contribution is a specialized initialization scheme for the estimated parameters, which yields fast convergence (low computational time) and is based on knowledge of the error-criterion surface. In an application to a series of alkanes, the specialized initialization significantly reduced the number of objective-function evaluations, and hence the computational time, of the parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.
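The value of a knowledge-based initialization can be sketched with a toy fit (the model, parameter values, and data below are hypothetical illustrations, not the paper's GC retention model): linearizing an exponential model yields a starting point already close to the least-squares optimum, so far fewer evaluations of the error criterion are needed afterward.

```python
import math
import random

# Hypothetical two-parameter retention-style model: k(T) = a * exp(b / T).

def sse(params, data):
    """Sum-of-squares error criterion for the model against (T, k) data."""
    a, b = params
    return sum((a * math.exp(b / T) - k) ** 2 for T, k in data)

def informed_init(data):
    """Specialized initialization: linearize ln k = ln a + b/T and solve
    the ordinary least-squares line in (1/T, ln k) coordinates."""
    xs = [1.0 / T for T, _ in data]
    ys = [math.log(k) for _, k in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return (math.exp(my - b * mx), b)

# Synthetic "measurements" from known parameters plus small noise.
random.seed(0)
true_a, true_b = 0.05, 1500.0
data = [(T, true_a * math.exp(true_b / T) * (1 + random.gauss(0, 0.01)))
        for T in range(300, 401, 10)]

a0, b0 = informed_init(data)
# The informed start sits near the optimum; a naive guess does not.
print(sse((a0, b0), data), sse((1.0, 1.0), data))
```

Starting a local search from `(a0, b0)` instead of a random point is what cuts the number of error-criterion evaluations; the same idea underlies specialized initializations built from the known shape of the error surface.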