DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.
2005-08-01
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
The Effect of Iteration on the Design Performance of Primary School Children
ERIC Educational Resources Information Center
Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.
2015-01-01
Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…
Programmable Iterative Optical Image And Data Processing
NASA Technical Reports Server (NTRS)
Jackson, Deborah J.
1995-01-01
Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.
Reducing Design Cycle Time and Cost Through Process Resequencing
NASA Technical Reports Server (NTRS)
Rogers, James L.
2004-01-01
In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
NASA Astrophysics Data System (ADS)
Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.
2016-06-01
Measurement and control of the plasma in real-time are critical for advanced Tokamak operation. This requires high-speed real-time data acquisition and processing. ITER has designed the Fast Plant System Controllers (FPSC) for these purposes. At J-TEXT Tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an Industrial Personal Computer (IPC) with a real-time system and FlexRIO devices based on FPGA. With FlexRIO devices, data can be processed by the FPGA in real-time before they are passed to the CPU. The software elements are based on a real-time framework which runs under Red Hat Enterprise Linux MRG-R and uses the Experimental Physics and Industrial Control System (EPICS) for monitoring and configuring. This makes the framework conform to ITER FPSC standard technology. With this framework, any kind of data acquisition and processing FlexRIO FPGA program can be configured with a FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application is able to extract phase-shift information from the intermediate frequency signal produced by the polarimeter-interferometer diagnostic system and calculate the plasma density profile in real-time. Different algorithm implementations on the FlexRIO FPGA are compared in the paper.
Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J
2015-10-01
To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculations required for full FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structural similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which was over four times higher than with density compensation. The image sharpness index was also improved by the regularized reconstruction. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions for shortened reconstruction duration.
NASA Astrophysics Data System (ADS)
Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin
2016-09-01
Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed during each iterative process could help to improve material removal accuracy. The removal-function correction principle can effectively compensate for the removal-function deviation between the actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases with a long machining time, so a small amount of removal material in each iterative process is suggested. However, more clamping and measuring steps will be introduced in this way, which will also generate machining errors and suppress the improvement of material removal accuracy. On this account, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ 100-mm Zerodur planar is performed, which shows that, in similar figuring time, three free-measurement iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.
Single-agent parallel window search
NASA Technical Reports Server (NTRS)
Powley, Curt; Korf, Richard E.
1991-01-01
Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A(asterisk) (IDA-asterisk) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA-asterisk by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
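As a rough illustration of the idea described in this abstract, the following Python sketch runs several IDA* cost-threshold iterations as independent "windows". The toy graph, heuristic values, and the use of a thread pool to stand in for separate processes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of parallel window search: each worker runs one IDA*
# iteration with its own cost threshold on the same toy problem.
import concurrent.futures

graph = {  # node: [(neighbour, edge_cost), ...]  (assumed toy problem)
    "S": [("A", 1), ("B", 4)],
    "A": [("C", 2)],
    "B": [("C", 1)],
    "C": [("G", 3)],
    "G": [],
}
h = {"S": 4, "A": 4, "B": 3, "C": 3, "G": 0}  # admissible heuristic estimates

def ida_star_iteration(threshold):
    """One IDA* iteration: depth-first search bounded by f = g + h <= threshold."""
    best_goal_cost = None

    def dfs(node, g):
        nonlocal best_goal_cost
        if g + h[node] > threshold:
            return
        if node == "G":
            if best_goal_cost is None or g < best_goal_cost:
                best_goal_cost = g
            return
        for nxt, cost in graph[node]:
            dfs(nxt, g + cost)

    dfs("S", 0)
    return threshold, best_goal_cost

# Different "processes" (threads here, for simplicity) try different thresholds at once.
with concurrent.futures.ThreadPoolExecutor() as pool:
    for threshold, goal_cost in pool.map(ida_star_iteration, range(4, 9)):
        print(f"threshold={threshold}: goal cost {goal_cost}")
```

Larger thresholds return suboptimal solutions earlier, while the smallest successful threshold certifies optimality, mirroring the "find quickly, then guarantee" behavior described above.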
Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks
NASA Astrophysics Data System (ADS)
Xu, Shuang; Wang, Pei; Lü, Jinhu
2017-01-01
Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters, including a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines a priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is appropriate for any type of network, and includes some traditional centralities as special cases, such as degree, semi-local, and LeaderRank. The Ing process converges in strongly connected networks with a speed that depends on the two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. By comparing with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information-spreading strategies.
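A minimal Python sketch of this kind of iterative neighbour-information gathering follows. The adjacency matrix, the choice of degree as the a priori information, the normalisation, and the iteration count are assumptions for illustration; they do not reproduce the published Ing definition.

```python
# Hedged sketch: each node repeatedly gathers its neighbours' scores
# through a transformation matrix (here the raw adjacency matrix).
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency of a 4-node toy graph

prior = A.sum(axis=1)                        # a priori information: node degree
score = prior.copy()
for _ in range(10):                          # "iteration time"
    score = A @ score + prior                # combine neighbour information
    score /= np.linalg.norm(score)           # keep the scores bounded
print(np.argsort(-score))                    # nodes ranked by estimated influence
```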
Finite Volume Element (FVE) discretization and multilevel solution of the axisymmetric heat equation
NASA Astrophysics Data System (ADS)
Litaker, Eric T.
1994-12-01
The axisymmetric heat equation, resulting from a point-source of heat applied to a metal block, is solved numerically; both iterative and multilevel solutions are computed in order to compare the two processes. The continuum problem is discretized in two stages: finite differences are used to discretize the time derivatives, resulting in a fully implicit backward time-stepping scheme, and the Finite Volume Element (FVE) method is used to discretize the spatial derivatives. The application of the FVE method to a problem in cylindrical coordinates is new, and results in stencils which are analyzed extensively. Several iteration schemes are considered, including both Jacobi and Gauss-Seidel; a thorough analysis of these schemes is done, using both the spectral radii of the iteration matrices and local mode analysis. Using this discretization, a Gauss-Seidel relaxation scheme is used to solve the heat equation iteratively. A multilevel solution process is then constructed, including the development of intergrid transfer and coarse grid operators. Local mode analysis is performed on the components of the amplification matrix, resulting in the two-level convergence factors for various combinations of the operators. A multilevel solution process is implemented by using multigrid V-cycles; the iterative and multilevel results are compared and discussed in detail. The computational savings resulting from the multilevel process are then discussed.
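For reference, a minimal Gauss-Seidel relaxation sketch in Python is shown below; the 1-D tridiagonal test matrix (the kind produced by an implicit heat-equation step) and the tolerance are illustrative assumptions, not the FVE discretization of the paper.

```python
# Hedged sketch of Gauss-Seidel relaxation for Ax = b.
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated values for j < i, old values for j > i.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

# Implicit time-stepping of a diffusion problem yields a diagonally dominant system.
n = 50
A = 2.1 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = gauss_seidel(A, b)
print(iters, np.linalg.norm(A @ x - b))
```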
Multidisciplinary systems optimization by linear decomposition
NASA Technical Reports Server (NTRS)
Sobieski, J.
1984-01-01
In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.
Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan
2014-08-20
In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the time required to process an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, SG-SR processing can be completed within tens of milliseconds, which provides a real-time procedure in practical situations.
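The sketch below shows the general idea of iterative Savitzky-Golay baseline (fluorescence) removal in Python, with a relaxation factor as the hook where an SG-SR-style acceleration would enter. The window length, polynomial order, relaxation value, iteration count, and the synthetic spectrum are assumptions for illustration; this is not the RIA-SG-SR algorithm itself.

```python
# Hedged sketch of iterative SG baseline removal with a relaxation factor.
import numpy as np
from scipy.signal import savgol_filter

def iterative_sg_baseline(spectrum, window=101, order=3, relax=1.0, n_iter=100):
    """Repeatedly clip peaks toward the smoothed curve; relax > 1 over-relaxes the update."""
    baseline = spectrum.astype(float).copy()
    for _ in range(n_iter):
        smoothed = savgol_filter(baseline, window, order)
        clipped = np.minimum(baseline, smoothed)        # peaks are clipped, baseline kept
        baseline = baseline + relax * (clipped - baseline)
    return spectrum - baseline                           # fluorescence-corrected signal

# Synthetic spectrum: two narrow Raman peaks on a broad fluorescence background.
x = np.linspace(0.0, 1.0, 2000)
background = np.exp(-((x - 0.4) ** 2) / 0.5)
peaks = 0.3 * np.exp(-((x - 0.3) ** 2) / 1e-5) + 0.2 * np.exp(-((x - 0.7) ** 2) / 1e-5)
corrected = iterative_sg_baseline(background + peaks)
```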
NASA Astrophysics Data System (ADS)
Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong
2018-01-01
In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable running for the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control for ensuring the steady-state tracking error to converge rapidly. The application on an injection molding process displays the effectiveness and superiority of the proposed strategy.
Composition of web services using Markov decision processes and dynamic programming.
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
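To make the dynamic-programming side concrete, here is a minimal value-iteration sketch in Python. The state/action counts, random transition probabilities, rewards, and discount factor are invented for illustration and do not reproduce the paper's WSC model.

```python
# Hedged sketch of value iteration on a small random MDP.
import numpy as np

n_states, n_actions, gamma = 5, 3, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0, 1, size=(n_states, n_actions))                 # reward (e.g., QoS)

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * P @ V          # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)          # greedy policy: best action (service) in each state
print(policy, V)
```

Policy iteration differs only in alternating a full policy-evaluation solve with a greedy policy-improvement step, which is why it typically needs fewer (but more expensive) iterations.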
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.
Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing
2007-01-01
Fluorescence optical diffusion tomography in the near-infrared (NIR) bandwidth is considered to be one of the most promising ways for noninvasive molecular-based imaging. Many reconstructive approaches to it utilize iterative methods for data inversion. However, they are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. This method needs only one step of matrix-vector multiplication online, by pushing the iteration process to be executed offline. In the preiteration process, the second-order iterative format is employed to exponentially accelerate the convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and the reconstruction speed is remarkably increased.
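One common second-order scheme for building a generalized inverse offline is the Newton-Schulz iteration; the Python sketch below uses it as a stand-in for the paper's preiteration, with the forward matrix, sizes, and iteration count as illustrative assumptions.

```python
# Hedged sketch: offline second-order (Newton-Schulz) preiteration for a
# pseudoinverse, then a single online matrix-vector product.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((200, 300))            # assumed forward (sensitivity) matrix

# Offline: X_{k+1} = X_k (2I - W X_k) converges quadratically to the
# Moore-Penrose pseudoinverse for this standard starting guess.
X = W.T / (np.linalg.norm(W, 1) * np.linalg.norm(W, np.inf))
for _ in range(60):
    X = X @ (2.0 * np.eye(W.shape[0]) - W @ X)

# Online: reconstruction reduces to one matrix-vector multiplication.
measurement = rng.standard_normal(200)
estimate = X @ measurement
print(np.allclose(X, np.linalg.pinv(W), atol=1e-6))
```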
Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie
2017-06-01
This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.
Rater variables associated with ITER ratings.
Paget, Michael; Wu, Caren; McIlwrick, Joann; Woloschuk, Wayne; Wright, Bruce; McLaughlin, Kevin
2013-10-01
Advocates of holistic assessment consider the ITER a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of Calgary and preceptors who completed online ITERs between February 2008 and July 2009. Our outcome variable was global rating on the ITER (rated 1-5), and we used a generalized estimating equation model to identify variables associated with this rating. Students were rated "above expected level" or "outstanding" on 66.4 % of 1050 online ITERs completed during the study period. Two rater variables attenuated ITER ratings: the log transformed time taken to complete the ITER [β = -0.06, 95 % confidence interval (-0.10, -0.02), p = 0.002], and the number of ITERs that a preceptor completed over the time period of the study [β = -0.008 (-0.02, -0.001), p = 0.02]. In this study we found evidence of leniency bias that resulted in two thirds of students being rated above expected level of performance. This leniency bias appeared to be attenuated by delay in ITER completion, and was also blunted in preceptors who rated more students. As all biases threaten the internal validity of the assessment process, further research is needed to confirm these and other sources of rater bias in ITER ratings, and to explore ways of limiting their impact.
Acceleration of linear stationary iterative processes in multiprocessor computers. II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romm, Ya.E.
1982-05-01
For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982); Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = ax + b, where a = (a_ij) is a real n×n matrix and b is a real vector, with the usual Euclidean norm. The existence and uniqueness of the solution is assumed, i.e., det(E − a) ≠ 0, where E is the unit matrix. The linear iterative process converging to x is x^(k+1) = f(x^(k)), k = 0, 1, 2, ..., where the operator f maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant, and various values of that number are investigated; it is assumed, in addition, that the processors perform elementary binary arithmetic operations of addition and multiplication, and the estimates only include the time of execution of arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k+1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k+1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP, with, in the modification sought, a reduced number of steps in a time comparable to the operation time of logical elements. 6 references.
Ehret, Phillip J; Monroe, Brian M; Read, Stephen J
2015-05-01
We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory. © 2014 by the Society for Personality and Social Psychology, Inc.
Solving large mixed linear models using preconditioned conjugate gradient iteration.
Strandén, I; Lidauer, M
1999-12-01
Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
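A minimal preconditioned conjugate gradient sketch in Python is given below for orientation; the sparse test matrix and the simple Jacobi (diagonal) preconditioner are illustrative assumptions and do not show the paper's three-step iteration-on-data technique.

```python
# Hedged sketch of preconditioned conjugate gradient on a sparse SPD system.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

n = 10_000
main = 4.0 + np.arange(n) % 3            # diagonally dominant -> symmetric positive definite
A = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, 1, -1], format="csr")
b = np.ones(n)

inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)   # Jacobi preconditioner

x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 signals convergence
```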
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 2^10 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using a grating in a diverging beam produced by a coherent point source.
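The following Python sketch conveys the zoom-in idea: a coarse FFT locates the dominant peak, and the DFT is then re-evaluated on successively finer local frequency grids around it. The signal, grid size, and shrink factor are assumptions for illustration, not the published ilFT algorithm.

```python
# Hedged sketch of an iterative local-DFT frequency zoom.
import numpy as np

fs, n = 1000.0, 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 123.456789 * t)

# Coarse estimate from the ordinary FFT (bin spacing fs / n, roughly 1 Hz here).
spectrum = np.fft.rfft(x)
f_center = np.argmax(np.abs(spectrum)) * fs / n
width = fs / n

# Iteratively shrink the frequency window around the strongest component.
for _ in range(8):
    freqs = np.linspace(f_center - width, f_center + width, 41)
    local_dft = np.exp(-2j * np.pi * np.outer(freqs, t)) @ x   # direct local DFT
    f_center = freqs[np.argmax(np.abs(local_dft))]
    width /= 10.0                                              # refine the window
print(f_center)   # converges far below the FFT bin spacing
```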
Improving Drive Files for Vehicle Road Simulations
NASA Astrophysics Data System (ADS)
Cherng, John G.; Goktan, Ali; French, Mark; Gu, Yi; Jacob, Anil
2001-09-01
Shaker tables are commonly used in laboratories for automotive vehicle component testing to study durability and acoustics performance. An example is development testing of car seats. However, it is difficult to repeat the measured road data perfectly with the response of a shaker table, as there are basic differences in dynamic characteristics between a flexible vehicle and a substantially rigid shaker table. In addition, there are performance limits in the shaker table drive systems that can limit correlation. In practice, an optimal drive signal for the actuators is created iteratively. During each iteration, the error between the road data and the response data is minimised by an optimising algorithm which is generally a part of the feedback loop of the shaker table controller. This study presents a systematic investigation of the errors in the time and frequency domains as well as the joint time-frequency domain and an evaluation of different digital signal processing techniques that have been used in previous work. In addition, we present an innovative approach that integrates the dynamic characteristics of car seats and the human body into the error-minimising iteration process. We found that the iteration process can be shortened and the error reduced by using a weighting function created by normalising the frequency response function of the car seat. Two road data test sets were used in the study.
Nested Krylov methods and preserving the orthogonality
NASA Technical Reports Server (NTRS)
Desturler, Eric; Fokkema, Diederik R.
1993-01-01
Recently the GMRESR inner-outer iteration scheme for the solution of linear systems of equations was proposed by Van der Vorst and Vuik. Similar methods have been proposed by Axelsson and Vassilevski, and by Saad (FGMRES). The outer iteration is GCR, which minimizes the residual over a given set of direction vectors. The inner iteration is GMRES, which at each step computes a new direction vector by approximately solving the residual equation. However, the optimality of the approximation over the space of outer search directions is ignored in the inner GMRES iteration. This leads to suboptimal corrections to the solution in the outer iteration, as components of the outer iteration directions may re-enter the inner iteration process. Therefore we propose to preserve the orthogonality relations of GCR in the inner GMRES iteration. This gives optimal corrections; however, it involves working with a singular, non-symmetric operator. We will discuss some important properties, and we will show by experiments that, in terms of matrix-vector products, this modification (almost) always leads to better convergence. However, because we do more orthogonalizations, it does not always give an improved performance in CPU-time. Furthermore, we will discuss efficient implementations as well as the truncation possibilities of the outer GCR process. The experimental results indicate that for such methods it is advantageous to preserve the orthogonality in the inner iteration. Of course we can also use iteration schemes other than GMRES as the inner method; methods with short recurrences like BiCGSTAB are of interest.
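For orientation, the Python sketch below shows the basic inner-outer structure (GCR outer, a few GMRES steps inner) in the spirit of GMRESR; the test matrix, inner iteration count, and outer limit are illustrative assumptions, and the orthogonality-preserving modification proposed above is not shown.

```python
# Hedged sketch of a GCR outer iteration with a truncated GMRES inner solve.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

n = 2000
A = sp.diags([4.0 + np.arange(n) % 5, -np.ones(n - 1), -np.ones(n - 1)],
             [0, 1, -1], format="csr")
b = np.ones(n)

x = np.zeros(n)
r = b - A @ x
C, U = [], []                                   # orthonormal images and search directions
for k in range(50):
    u, _ = gmres(A, r, maxiter=5, restart=5)    # inner GMRES: approximately solve A u = r
    c = A @ u
    for ci, ui in zip(C, U):                    # outer GCR orthogonalisation
        alpha = ci @ c
        c, u = c - alpha * ci, u - alpha * ui
    nc = np.linalg.norm(c)
    c, u = c / nc, u / nc
    C.append(c); U.append(u)
    beta = c @ r
    x, r = x + beta * u, r - beta * c
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(b):
        break
print(k + 1, np.linalg.norm(A @ x - b))
```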
NASA Astrophysics Data System (ADS)
Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.
2016-06-01
The multiple solution of linear algebraic systems with a dense matrix by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori condition for the recomputing, based on the change of the arithmetic mean of the current solution time during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems, an acceleration of up to 1.6 times, compared to the approach without recomputing, is obtained.
NASA Astrophysics Data System (ADS)
Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo
2016-07-01
Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on Sparc VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of linear systems, we introduced a control method to eliminate the calculation of already converged vectors.
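A small Python sketch of the "solve many right-hand sides at once" idea follows, including the elimination of already converged vectors; the matrix, iteration scheme (plain Jacobi), and tolerances are illustrative assumptions rather than the authors' optimized solvers.

```python
# Hedged sketch: block Jacobi iteration over many right-hand sides,
# dropping columns from the active set as they converge.
import numpy as np

rng = np.random.default_rng(2)
n, m = 500, 32
A = np.diag(np.full(n, 4.0)) + rng.uniform(-0.5, 0.5, (n, n)) / n   # diagonally dominant
B = rng.standard_normal((n, m))                                      # m right-hand sides

D_inv = 1.0 / np.diag(A)
R = A - np.diag(np.diag(A))
X = np.zeros((n, m))
active = np.arange(m)                        # columns still being iterated
for _ in range(200):
    X_new = D_inv[:, None] * (B[:, active] - R @ X[:, active])
    residual = np.linalg.norm(A @ X_new - B[:, active], axis=0)
    X[:, active] = X_new
    active = active[residual > 1e-10]        # eliminate already converged vectors
    if active.size == 0:
        break
print(np.linalg.norm(A @ X - B))
```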
Gauss-Seidel Iterative Method as a Real-Time Pile-Up Solver of Scintillation Pulses
NASA Astrophysics Data System (ADS)
Novak, Roman; Vencelj, Matjaž
2009-12-01
The pile-up rejection in nuclear spectroscopy has been confronted recently by several pile-up correction schemes that compensate for distortions of the signal and subsequent energy spectra artifacts as the counting rate increases. We study here a real-time capability of the event-by-event correction method, which at the core translates to solving many sets of linear equations. Tight time limits and constrained front-end electronics resources make well-known direct solvers inappropriate. We propose a novel approach based on the Gauss-Seidel iterative method, which turns out to be a stable and cost-efficient solution to improve spectroscopic resolution in the front-end electronics. We show the method convergence properties for a class of matrices that emerge in calorimetric processing of scintillation detector signals and demonstrate the ability of the method to support the relevant resolutions. The sole iteration-based error component can be brought below the sliding window induced errors in a reasonable number of iteration steps, thus allowing real-time operation. An area-efficient hardware implementation is proposed that fully utilizes the method's inherent parallelism.
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
A Technique for Transient Thermal Testing of Thick Structures
NASA Technical Reports Server (NTRS)
Horn, Thomas J.; Richards, W. Lance; Gong, Leslie
1997-01-01
A new open-loop heat flux control technique has been developed to conduct transient thermal testing of thick, thermally-conductive aerospace structures. This technique uses calibration of the radiant heater system power level as a function of heat flux, predicted aerodynamic heat flux, and the properties of an instrumented test article. An iterative process was used to generate open-loop heater power profiles prior to each transient thermal test. Differences between the measured and predicted surface temperatures were used to refine the heater power level command profiles through the iteration process. This iteration process has reduced the effects of environmental and test system design factors, which are normally compensated for by closed-loop temperature control, to acceptable levels. The final revised heater power profiles resulted in measured temperature time histories which deviated less than 25 F from the predicted surface temperatures.
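The Python sketch below illustrates the general flavour of refining an open-loop command profile between runs using the temperature error of the previous run. The toy first-order plant model, gains, and iteration count are invented for illustration and are not the calibration procedure described above.

```python
# Hedged sketch: iteratively revise an open-loop heater power profile
# from the error between target and simulated surface temperature.
import numpy as np

dt, n_steps = 1.0, 300
t = np.arange(n_steps) * dt
target = 70.0 + 400.0 * (1.0 - np.exp(-t / 120.0))       # desired surface temperature

def run_test(power):
    """Toy first-order thermal response of a test article to heater power."""
    temp = np.empty(n_steps)
    temp[0] = 70.0
    for k in range(1, n_steps):
        temp[k] = temp[k - 1] + dt * (0.05 * power[k - 1] - 0.05 * (temp[k - 1] - 70.0))
    return temp

power = np.zeros(n_steps)                                  # open-loop command profile
for iteration in range(10):                                # pre-test refinement loop
    measured = run_test(power)
    error = target - measured
    power = power + 0.5 * error                            # revise profile from the error
    print(iteration, round(float(np.max(np.abs(error))), 1))
```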
Rescheduling with iterative repair
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael
1992-01-01
This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.
Iteration of ultrasound aberration correction methods
NASA Astrophysics Data System (ADS)
Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond
2004-05-01
Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. A weak and a strong human-body wall model generated the aberration; both emulated the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
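The correlation-based estimate mentioned above can be sketched in a few lines of Python; the synthetic signals, sample-shift delay model, and element count below are assumptions for illustration only.

```python
# Hedged sketch: estimate per-element time delays by correlating each
# element signal with a common reference signal.
import numpy as np

rng = np.random.default_rng(3)
n = 512
true_delays = rng.integers(-5, 6, size=8)                  # delays in samples, per element

# Band-limited random reference (stand-in for a beam-summed receive signal).
reference = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")
elements = np.stack([np.roll(reference, d) for d in true_delays])

estimated = []
for sig in elements:
    xcorr = np.correlate(sig, reference, mode="full")      # cross-correlation
    estimated.append(np.argmax(xcorr) - (n - 1))           # lag of the correlation peak
print(true_delays, np.array(estimated))
```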
NASA Astrophysics Data System (ADS)
Zerr, Robert Joseph
2011-12-01
The integral transport matrix method (ITMM) has been used as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iterations (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells and between the cells and boundary surfaces. The main goals of this work were to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and performance of the developed methods for an increasing number of processes. This project compares the effectiveness of the ITMM with the SI scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. The primary parallel solution method involves a decomposition of the domain into smaller spatial sub-domains, each with its own transport matrices, coupled together via interface boundary angular fluxes. Each sub-domain has its own set of ITMM operators and represents an independent transport problem. Multiple iterative parallel solution methods have been investigated, including parallel block Jacobi (PBJ), parallel red/black Gauss-Seidel (PGS), and parallel GMRES (PGMRES). The fastest observed parallel solution method, PGS, was used in a weak scaling comparison with the PARTISN code. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method without acceleration/preconditioning is not competitive for any problem parameters considered. The best comparisons occur for problems that are difficult for SI DSA, namely highly scattering and optically thick. SI DSA execution time curves are generally steeper than the PGS ones. However, until further testing is performed it cannot be concluded that SI DSA does not outperform the ITMM with PGS even on several thousand or tens of thousands of processors. The PGS method does outperform SI DSA for the periodic heterogeneous layers (PHL) configuration problems. Although this demonstrates a relative strength/weakness between the two methods, the practicality of these problems is much less, further limiting instances where it would be beneficial to select ITMM over SI DSA. The results strongly indicate a need for a robust, stable, and efficient acceleration method (or preconditioner for PGMRES). The spatial multigrid (SMG) method is currently incomplete in that it does not work for all cases considered and does not effectively improve the convergence rate for all values of scattering ratio c or cell dimension h. Nevertheless, it does display the desired trend for highly scattering, optically thin problems. That is, it tends to lower the rate of growth of number of iterations with increasing number of processes, P, while not increasing the number of additional operations per iteration to the extent that the total execution time of the rapidly converging accelerated iterations exceeds that of the slower unaccelerated iterations. A predictive parallel performance model has been developed for the PBJ method. Timing tests were performed such that trend lines could be fitted to the data for the different components and used to estimate the execution times.
Applied to the weak scaling results, the model notably underestimates construction time, but combined with a slight overestimation in iterative solution time, the model predicts total execution time very well for large P. It also does a decent job with the strong scaling results, closely predicting the construction time and time per iteration, especially as P increases. Although not shown to be competitive up to 1,024 processing elements with the current state of the art, the parallelized ITMM exhibits promising scaling trends. Ultimately, compared to the KBA method, the parallelized ITMM may be found to be a very attractive option for transport calculations spatially decomposed over several tens of thousands of processes. Acceleration/preconditioning of the parallelized ITMM once developed will improve the convergence rate and improve its competitiveness. (Abstract shortened by UMI.)
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
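The process-centric idea of unrolling a proximal descent into a fixed number of layers can be illustrated with the following Python sketch of ISTA; the dictionary, sparsity level, step size, and the fixed matrices (which would be learned in a trained pursuit network) are assumptions for illustration.

```python
# Hedged sketch: ISTA written in the "unrolled" form whose per-layer
# matrices (W_e, S) could be replaced by learned parameters.
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 64, 128, 5
D = rng.standard_normal((m, n)) / np.sqrt(m)         # dictionary
z_true = np.zeros(n)
z_true[rng.choice(n, k, replace=False)] = 1.0
x = D @ z_true                                        # observed signal

def soft(v, thr):
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

L = np.linalg.norm(D, 2) ** 2                         # Lipschitz constant of the gradient
lam = 0.05
W_e, S = D.T / L, np.eye(n) - D.T @ D / L             # fixed here; learnable in a pursuit network

z = np.zeros(n)
for _ in range(10):                                   # a fixed, small number of "layers"
    z = soft(W_e @ x + S @ z, lam / L)
print(np.linalg.norm(D @ z - x))
```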
Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method
NASA Astrophysics Data System (ADS)
Mehl, S.
2012-12-01
Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made using conventional iteratively coupled methods and those based on Picard iteration to those formulated with JFNK to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
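To show what "only a residual vector is needed" looks like in practice, the Python sketch below drives SciPy's Jacobian-free Newton-Krylov solver with a toy nonlinear residual; the residual function, conductivity law, and boundary conditions are stand-in assumptions, not a Modflow coupling.

```python
# Hedged sketch: JFNK solve of a toy nonlinear "flow" residual.
import numpy as np
from scipy.optimize import newton_krylov

n = 100

def residual(h):
    """Toy 1-D diffusion residual with head-dependent conductivity (assumed model)."""
    r = np.empty_like(h)
    k = 1.0 + 0.1 * h ** 2                       # nonlinear conductivity
    r[0] = h[0] - 1.0                            # fixed-head boundaries
    r[-1] = h[-1] - 0.0
    r[1:-1] = k[2:] * (h[2:] - h[1:-1]) - k[:-2] * (h[1:-1] - h[:-2])
    return r

h0 = np.linspace(1.0, 0.0, n)                    # initial guess
solution = newton_krylov(residual, h0, method="lgmres", f_tol=1e-10)
print(np.linalg.norm(residual(solution)))
```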
Defense Advanced Research Projects Agency (DARPA) Network Archive (DNA)
2008-12-01
therefore decided for an iterative development process even within such a small project. The first iteration consisted of conducting specific...
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations are also presented.
Comparisons of Observed Process Quality in German and American Infant/Toddler Programs
ERIC Educational Resources Information Center
Tietze, Wolfgang; Cryer, Debby
2004-01-01
Observed process quality in infant/toddler classrooms was compared in Germany (n = 75) and the USA (n = 219). Process quality was assessed with the Infant/Toddler Environment Rating Scale (ITERS) and parent attitudes about ITERS content with the ITERS Parent Questionnaire (ITERSPQ). The ITERS had comparable reliabilities in the two countries and…
Accelerated iterative beam angle selection in IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan
2016-03-15
Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
Hielscher, Andreas H; Bartel, Sebastian
2004-02-01
Optical tomography (OT) is a fast-developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computationally intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
Feature Based Retention Time Alignment for Improved HDX MS Analysis
NASA Astrophysics Data System (ADS)
Venable, John D.; Scuba, William; Brock, Ansgar
2013-04-01
An algorithm for retention time alignment of mass shifted hydrogen-deuterium exchange (HDX) data based on an iterative distance minimization procedure is described. The algorithm performs pairwise comparisons in an iterative fashion between a list of features from a reference file and a file to be time aligned to calculate a retention time mapping function. Features are characterized by their charge, retention time and mass of the monoisotopic peak. The algorithm is able to align datasets with mass shifted features, which is a prerequisite for aligning hydrogen-deuterium exchange mass spectrometry datasets. Confidence assignments from the fully automated processing of a commercial HDX software package are shown to benefit significantly from retention time alignment prior to extraction of deuterium incorporation values.
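The sketch below illustrates the general idea under stated assumptions: pair features by charge and monoisotopic mass, fit a retention time mapping, and repeat with a tightening RT tolerance. The tolerances, the linear form of the mapping, and the synthetic feature lists are assumptions, not details from the paper.

```python
# Hedged sketch of iterative retention-time alignment between two feature lists.
# Each feature is (charge, monoisotopic mass, retention time).
import numpy as np

def align_retention_times(ref, tgt, mass_tol=0.01, rt_windows=(5.0, 2.0, 0.5)):
    """Return (slope, intercept) mapping target RT onto the reference RT axis."""
    slope, intercept = 1.0, 0.0
    for window in rt_windows:                          # iterative refinement
        pairs = []
        for z, m, rt in tgt:
            mapped = slope * rt + intercept
            cand = ref[(ref[:, 0] == z) & (np.abs(ref[:, 1] - m) < mass_tol)]
            if len(cand) == 0:
                continue
            j = np.argmin(np.abs(cand[:, 2] - mapped))  # closest candidate in RT
            if abs(cand[j, 2] - mapped) < window:
                pairs.append((rt, cand[j, 2]))
        if len(pairs) < 2:
            break
        x, y = np.array(pairs).T
        slope, intercept = np.polyfit(x, y, 1)          # refit the mapping
    return slope, intercept

# Synthetic example: target RTs are a stretched/shifted copy of the reference
rng = np.random.default_rng(2)
ref = np.column_stack([rng.integers(1, 4, 100), rng.uniform(300, 2000, 100),
                       rng.uniform(0, 60, 100)])
tgt = ref.copy()
tgt[:, 2] = 1.05 * ref[:, 2] + 1.5 + rng.normal(0, 0.05, 100)
print(align_retention_times(ref, tgt))   # roughly the inverse map (~0.95, ~-1.4)
```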
Pseudo-time methods for constrained optimization problems governed by PDE
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problems just a few times, independent of the number of design parameters. The method can be applied using single-grid iterations as well as with multigrid solvers.
NASA Astrophysics Data System (ADS)
Boski, Marcin; Paszke, Wojciech
2015-11-01
This paper deals with the problem of designing an iterative learning control algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to limited frequency range design specifications. The new design procedure is introduced in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
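For readers unfamiliar with ILC, the minimal sketch below shows the generic trial-to-trial update u_{k+1}(t) = u_k(t) + L e_k(t+1) on an arbitrary discrete-time plant. It is not the LMI-based output-feedback/feedforward synthesis of the paper; the plant matrices and learning gain are invented examples.

```python
# Generic P-type iterative learning control (ILC) on a toy discrete LTI plant,
# illustrating trial-to-trial error decay only (not the paper's LMI design).
import numpy as np

A = np.array([[0.8, 0.1], [0.0, 0.9]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
N = 50                                            # trial length
r = np.sin(np.linspace(0, 2 * np.pi, N + 1))      # reference output
L = 0.8                                           # learning gain (assumed)

def run_trial(u):
    """Simulate one trial from zero initial state; return output y[0..N]."""
    x = np.zeros((2, 1))
    y = [(C @ x).item()]
    for t in range(N):
        x = A @ x + B * u[t]
        y.append((C @ x).item())
    return np.array(y)

u = np.zeros(N)
for k in range(30):                               # trial-to-trial iteration
    y = run_trial(u)
    e = r - y
    u = u + L * e[1:]                             # P-type ILC update
    if k % 10 == 0:
        print(f"trial {k:2d}: tracking error norm {np.linalg.norm(e):.4f}")
```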
Rescheduling with iterative repair
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael
1992-01-01
This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.
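A toy version of the repair loop, under invented assumptions (a single capacity-limited resource and a shift-right repair move), looks roughly as follows; the real system handles many constraint types and optimization criteria.

```python
# Toy constraint-based iterative repair: start from a flawed schedule and
# repeatedly move one task involved in a capacity violation until conflict-free,
# preferring small moves to limit perturbation.  Data and heuristic are invented.
durations = {"A": 3, "B": 4, "C": 2, "D": 5, "E": 3}
capacity = 2                                    # at most 2 tasks in parallel
start = {t: 0 for t in durations}               # initial, flawed schedule
original = dict(start)

def violations(start):
    """Time points where more than `capacity` tasks are running at once."""
    horizon = max(s + durations[t] for t, s in start.items())
    bad = []
    for time in range(horizon):
        active = [t for t, s in start.items() if s <= time < s + durations[t]]
        if len(active) > capacity:
            bad.append((time, active))
    return bad

for it in range(100):                           # iterative repair loop
    bad = violations(start)
    if not bad:
        break
    time, active = bad[0]
    mover = max(active, key=lambda t: durations[t])   # heuristic task choice
    start[mover] = time + 1                           # minimal shift to the right

perturbation = sum(abs(start[t] - original[t]) for t in start)
print(f"repaired in {it} iterations, perturbation {perturbation}: {start}")
```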
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated every time step. However, we cannot apply the Do-all or Do-across techniques for parallel processing of the simulation since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required every sampling period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block which consists of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors by using optimal static scheduling at compile time in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to extract advantageous features of static scheduling algorithms to the maximum extent.
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
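A hedged sketch of the comparison using scikit-image (assumed available in a recent version; not the authors' optimized implementation): reconstruct a sparsely sampled sinogram with filtered back-projection and with a few SART iterations.

```python
# Compare conventional filtered back-projection with an iterative SART
# reconstruction when only a few projections are acquired (shortened scan).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, iradon_sart, rescale

image = rescale(shepp_logan_phantom(), 0.5)           # smaller test object
theta = np.linspace(0.0, 180.0, 40, endpoint=False)   # sparse set of projections
sino = radon(image, theta=theta)

fbp = iradon(sino, theta=theta, filter_name="ramp")   # conventional FBP

sart = iradon_sart(sino, theta=theta)                 # first SART iteration
for _ in range(3):                                    # a few extra iterations
    sart = iradon_sart(sino, theta=theta, image=sart)

for name, rec in [("FBP", fbp), ("SART x4", sart)]:
    err = np.sqrt(np.mean((rec - image) ** 2))
    print(f"{name:8s} RMS error vs phantom: {err:.4f}")
```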
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imazawa, R., E-mail: imazawa.ryota@jaea.go.jp; Kawano, Y.; Ono, T.
The rotating waveplate Stokes polarimeter was developed for the ITER (International Thermonuclear Experimental Reactor) poloidal polarimeter. The generalized model of the rotating waveplate Stokes polarimeter and the algorithm suitable for real-time field-programmable gate array (FPGA) processing were proposed. Since the generalized model takes into account each component associated with the rotation of the waveplate, the Stokes parameters can be accurately measured even in non-ideal conditions such as non-uniformity of the waveplate retardation. Experiments using a He-Ne laser showed that the maximum error and the precision of the Stokes parameter were 3.5% and 1.2%, respectively. The rotation speed of the waveplate was 20 000 rpm and the time resolution of measuring the Stokes parameters was 3.3 ms. Software emulation showed that the real-time measurement of the Stokes parameters with a time resolution of less than 10 ms is possible by using several FPGA boards. Evaluation of the measurement capability using a far-infrared laser, which the ITER poloidal polarimeter will use, concluded that the measurement error will be reduced by a factor of nine.
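As a rough illustration of fitting a generalized polarimeter model rather than ideal analytic formulas, the sketch below builds Mueller-matrix design rows for a rotating quarter-wave plate followed by a fixed polarizer and recovers the Stokes vector by linear least squares. The component models, angles, and noise level are assumptions, not the instrument's calibration.

```python
# Recover Stokes parameters from a rotating-waveplate intensity modulation by
# least squares against an explicit Mueller-matrix model (illustrative only).
import numpy as np

def rot(theta):
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]])

def retarder(theta, delta=np.pi / 2):
    """Mueller matrix of a retarder with retardance delta, fast axis at theta."""
    m = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                  [0, 0, np.cos(delta), np.sin(delta)],
                  [0, 0, -np.sin(delta), np.cos(delta)]])
    return rot(-theta) @ m @ rot(theta)

POL_H = 0.5 * np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])

def design_row(theta):
    """Detected intensity = first row of (polarizer @ waveplate) times Stokes vector."""
    return (POL_H @ retarder(theta))[0]

angles = np.linspace(0, np.pi, 64, endpoint=False)     # half a plate rotation suffices
A = np.vstack([design_row(t) for t in angles])

s_true = np.array([1.0, 0.3, -0.2, 0.5])               # example Stokes vector
rng = np.random.default_rng(3)
intensity = A @ s_true + 1e-3 * rng.standard_normal(len(angles))

s_est, *_ = np.linalg.lstsq(A, intensity, rcond=None)
print("estimated Stokes parameters:", np.round(s_est, 3))
```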
Improved Real-Time Scan Matching Using Corner Features
NASA Astrophysics Data System (ADS)
Mohamed, H. A.; Moussa, A. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, Abu B.
2016-06-01
The automation of unmanned vehicle operation has gained a lot of research attention in the last few years because of its numerous applications. Vehicle localization is more challenging in indoor environments where absolute positioning measurements (e.g. GPS) are typically unavailable. Laser range finders are among the most widely used sensors that help unmanned vehicles to localize themselves in indoor environments. Typically, automatic real-time matching of the successive scans is performed either explicitly or implicitly by any localization approach that utilizes laser range finders. Many accustomed approaches such as Iterative Closest Point (ICP), Iterative Matching Range Point (IMRP), Iterative Dual Correspondence (IDC), and Polar Scan Matching (PSM) handle the scan matching problem in an iterative fashion, which significantly affects the time consumption. Furthermore, the solution convergence is not guaranteed, especially in cases of sharp maneuvers or fast movement. This paper proposes an automated real-time scan matching algorithm where the matching process is initialized using the detected corners. This initialization step aims to increase the convergence probability and to limit the number of iterations needed to reach convergence. The corner detection is preceded by line extraction from the laser scans. To evaluate the probability of line availability in indoor environments, various data sets, offered by different research groups, have been tested, and the mean numbers of extracted lines per scan for these data sets range from 4.10 to 8.86 lines of more than 7 points. All intersections between extracted lines are detected as corners regardless of the physical intersection of these line segments in the scan. To account for the uncertainties of the detected corners, the covariance of the corners is estimated using the extracted lines' variances. The detected corners are used to estimate the transformation parameters between the successive scans using least squares. These estimated transformation parameters are used to calculate an adjusted initialization for the scan matching process. The presented method can be employed solely to match the successive scans and can also be used to aid other accustomed iterative methods to achieve more effective and faster convergence. The performance and time consumption of the proposed approach are compared with the ICP algorithm alone, without initialization, in different scenarios such as static periods, fast straight movement, and sharp maneuvers.
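The closed-form least-squares step mentioned above (estimating a 2D rigid transform from paired corners) can be sketched as follows; the corner pairs are synthetic, and the covariance weighting described in the abstract is omitted.

```python
# Estimate a 2D rigid transform (rotation + translation) from matched corner
# points via SVD (Kabsch/Procrustes).  Corner coordinates are synthetic.
import numpy as np

def rigid_transform_2d(src, dst):
    """Return R (2x2) and t (2,) minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(4)
corners_prev = rng.uniform(-5, 5, size=(6, 2))       # corners in scan k-1
angle, shift = np.deg2rad(12.0), np.array([0.4, -0.2])
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
corners_curr = corners_prev @ R_true.T + shift + rng.normal(0, 0.01, (6, 2))

R, t = rigid_transform_2d(corners_prev, corners_curr)
print("recovered rotation (deg):", np.degrees(np.arctan2(R[1, 0], R[0, 0])))
print("recovered translation:", np.round(t, 3))
```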
Layout compliance for triple patterning lithography: an iterative approach
NASA Astrophysics Data System (ADS)
Yu, Bei; Garreton, Gilda; Pan, David Z.
2014-10-01
As the semiconductor process further scales down, the industry encounters many lithography-related issues. In the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for Metal1 layer and possibly Via0 layer. As one of the most challenging problems in TPL, recently layout decomposition efforts have received more attention from both industry and academia. Ideally the decomposer should point out locations in the layout that are not triple patterning decomposable and therefore manual intervention by designers is required. A traditional decomposition flow would be an iterative process, where each iteration consists of an automatic layout decomposition step and manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full chip level layout decomposition requires long computational time and therefore design closure issues continue to linger around in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts, but also provides the suggestions to fix them. After the layout modification, instead of solving the full chip problem from scratch, our decomposer can provide a quick solution for a selected portion of layout. We believe this framework is efficient, in terms of performance and designer friendly.
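At its core, checking triple-patterning decomposability amounts to 3-coloring a conflict graph. The toy sketch below (invented graph, no stitch insertion or density rules) shows the flavor of the check and of flagging a non-decomposable pattern.

```python
# Toy triple-patterning decomposition: 3-color a conflict graph by backtracking
# and report when no legal mask assignment exists.  The graph is an invented example.
def three_color(nodes, edges):
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    colors = {}

    def assign(i):
        if i == len(nodes):
            return True
        n = nodes[i]
        for c in (0, 1, 2):                       # three masks
            if all(colors.get(m) != c for m in adj[n]):
                colors[n] = c
                if assign(i + 1):
                    return True
                del colors[n]
        return False

    return colors if assign(0) else None

nodes = ["f1", "f2", "f3", "f4", "f5"]
edges = [("f1", "f2"), ("f2", "f3"), ("f1", "f3"),   # triangle: still 3-colorable
         ("f3", "f4"), ("f4", "f5")]
print("mask assignment:", three_color(nodes, edges))

# Making f1..f4 mutually conflicting (a K4) renders the pattern not
# triple-patterning decomposable; the decomposer should flag it.
edges += [("f1", "f4"), ("f2", "f4")]
print("decomposable after edit:", three_color(nodes, edges) is not None)
```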
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
Chen, Tinggui; Xiao, Renbin
2014-01-01
Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time as well, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach as well as the inner iteration method are used to complement the classic WTM model. In addition, the ABC algorithm is also introduced to find the optimal decoupling schemes. In this paper, firstly, the tearing approach and the inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonableness and effectiveness.
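For context, a minimal numerical illustration of the work-transformation idea that WTM-style (work transformation matrix) models build on is given below; the rework matrix and work vector are invented, and the closed-form geometric sum stands in for the inner iteration discussed in the paper.

```python
# Work transformation illustration: with rework matrix A (a_ij = fraction of
# task j's output that forces rework in task i) and initial work u0, the total
# work over all iterations is sum_k A^k u0 = (I - A)^{-1} u0 when the spectral
# radius of A is below 1.  "Tearing" a coupling (zeroing an entry) reduces rework.
import numpy as np

A = np.array([[0.0, 0.3, 0.2],
              [0.2, 0.0, 0.4],
              [0.1, 0.2, 0.0]])
u0 = np.ones(3)                      # one unit of initial work per task

def total_work(A, u0):
    assert np.max(np.abs(np.linalg.eigvals(A))) < 1.0, "iteration must converge"
    return np.linalg.solve(np.eye(len(u0)) - A, u0)

print("total work, coupled:      ", np.round(total_work(A, u0), 3))

A_torn = A.copy()
A_torn[1, 2] = 0.0                   # tear the strongest coupling (task 3 -> task 2)
print("total work after tearing: ", np.round(total_work(A_torn, u0), 3))
```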
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
Choosing order of operations to accelerate strip structure analysis in parameter range
NASA Astrophysics Data System (ADS)
Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.
2018-05-01
The paper considers the issue of using iteration methods in solving the sequence of linear algebraic systems obtained in quasistatic analysis of strip structures with the method of moments. Using the analysis of 4 strip structures, the authors have proved that additional acceleration (up to 2.21 times) of the iterative process can be obtained during the process of solving linear systems repeatedly by means of choosing a proper order of operations and a preconditioner. The obtained results can be used to accelerate the process of computer-aided design of various strip structures. The choice of the order of operations to accelerate the process is quite simple, universal and could be used not only for strip structure analysis but also for a wide range of computational problems.
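One common way to exploit such repeated solves, sketched here with SciPy under the assumption of a simple sparse test matrix, is to factor an ILU preconditioner once and reuse it across the parameter sweep; the specific operation ordering and preconditioning choices studied in the paper are not reproduced.

```python
# Reuse one ILU preconditioner across a sequence of related linear systems and
# count GMRES iterations with and without it.  Matrix and perturbations are toy.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 500
rng = np.random.default_rng(5)
base = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

ilu = spla.spilu(base, drop_tol=1e-4)                  # factor once, reuse below
M = spla.LinearOperator((n, n), matvec=ilu.solve)

def count_iterations(A, b, **kwargs):
    counter = {"n": 0}
    def cb(_):
        counter["n"] += 1
    spla.gmres(A, b, callback=cb, **kwargs)
    return counter["n"]

for k in range(4):                                     # sweep over a parameter
    A = (base + 0.05 * k * sp.eye(n)).tocsc()          # slightly modified system
    b = rng.standard_normal(n)
    plain = count_iterations(A, b)
    prec = count_iterations(A, b, M=M)
    print(f"step {k}: GMRES iterations without / with reused ILU: {plain} / {prec}")
```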
Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.
Wei, Qinglai; Liu, Derong; Lin, Hanquan
2016-03-01
In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
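For orientation, the classical tabular value iteration that the ADP algorithm generalizes looks as follows on a tiny invented MDP (discounted here for simplicity, whereas the paper treats the undiscounted case with neural-network function approximation).

```python
# Tabular value iteration on a small deterministic MDP with costs to minimize.
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.95
rng = np.random.default_rng(6)
next_state = rng.integers(0, n_states, size=(n_states, n_actions))
cost = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
cost[0, :] = 0.0                      # state 0 is an absorbing, cost-free goal
next_state[0, :] = 0

V = np.zeros(n_states)                # arbitrary initial value function
for k in range(200):
    Q = cost + gamma * V[next_state]              # Q[s, a]
    V_new = Q.min(axis=1)                         # greedy (minimum-cost) backup
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmin(axis=1)
print(f"converged after {k} iterations")
print("optimal value function:", np.round(V, 3))
print("optimal policy (action per state):", policy)
```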
Modern Workflow Full Waveform Inversion Applied to North America and the Northern Atlantic
NASA Astrophysics Data System (ADS)
Krischer, Lion; Fichtner, Andreas; Igel, Heiner
2015-04-01
We present the current state of a new seismic tomography model obtained using full waveform inversion of the crustal and upper mantle structure beneath North America and the Northern Atlantic, including the westernmost part of Europe. Parts of the eastern portion of the initial model consist of previous models by Fichtner et al. (2013) and Rickers et al. (2013). The final results of this study will contribute to the 'Comprehensive Earth Model' being developed by the Computational Seismology group at ETH Zurich. Significant challenges include the size of the domain, the uneven event and station coverage, and the strong east-west alignment of seismic ray paths across the North Atlantic. We use as much data as feasible, resulting in several thousand recordings per event depending on the receivers deployed at the earthquakes' origin times. To manage such projects in a reproducible and collaborative manner, we, as tomographers, should abandon ad-hoc scripts and one-time programs, and adopt sustainable and reusable solutions. Therefore we developed the LArge-scale Seismic Inversion Framework (LASIF - http://lasif.net), an open-source toolbox for managing seismic data in the context of non-linear iterative inversions that greatly reduces the time to research. Information on the applied processing, modelling, iterative model updating, what happened during each iteration, and so on is systematically archived. This results in a provenance record of the final model, which in the end significantly enhances the reproducibility of iterative inversions. Additionally, tools for automated data download across different data centers, window selection, misfit measurements, parallel data processing, and input file generation for various forward solvers are provided.
Process control strategy for ITER central solenoid operation
NASA Astrophysics Data System (ADS)
Maekawa, R.; Takami, S.; Iwamoto, A.; Chang, H.-S.; Forgeas, A.; Chalifour, M.
2016-12-01
ITER Central Solenoid (CS) pulse operation induces significant flow disturbance in the forced-flow Supercritical Helium (SHe) cooling circuit, which could primarily impact the operation of the cold circulator (SHe centrifugal pump) in the Auxiliary Cold Box (ACB). Numerical studies using Venecia®, SUPERMAGNET and 4C have identified reverse flow at the CS module inlet due to the substantial thermal energy deposition at the inner-most winding. To assess the reliable operation of ACB-CS (the dedicated ACB for the CS), process analyses have been conducted with a dynamic process simulation model developed with the Cryogenic Process REal-time SimulaTor (C-PREST). To implement process control of the hydrodynamic instability, several strategies have been applied and their feasibility evaluated. The paper discusses the control strategy to protect the centrifugal-type cold circulator/compressor operation and its impact on the CS cooling.
NASA Astrophysics Data System (ADS)
Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho
2017-10-01
This study addresses a variant of job-shop scheduling in which jobs are grouped into job families, but they are processed individually. The problem can be found in various industrial systems, especially in reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has the component-matching requirements, it can be regarded as a job shop with job families since the components of a product constitute a job family. In particular, sequence-dependent set-ups in which set-up time depends on the job just completed and the next job to be processed are also considered. The objective is to minimize the total family flow time, i.e. the maximum among the completion times of the jobs within a job family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
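The destruction-construction skeleton of an iterated greedy algorithm can be sketched on a simplified single-machine stand-in with sequence-dependent set-ups; the data, objective (total completion time rather than the paper's family flow time), and move sizes below are illustrative assumptions.

```python
# Iterated greedy sketch: destruction (remove a few jobs) + greedy best-insertion
# reconstruction, accepting non-worsening sequences.  Single machine, toy data.
import random

random.seed(7)
n = 8
proc = [random.randint(2, 9) for _ in range(n)]
setup = [[0 if i == j else random.randint(1, 5) for j in range(n)] for i in range(n)]

def total_completion_time(seq):
    t, total, prev = 0, 0, None
    for j in seq:
        t += (setup[prev][j] if prev is not None else 0) + proc[j]
        total += t
        prev = j
    return total

def greedy_insert(seq, job):
    """Insert `job` at the position giving the best objective value."""
    return min((seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)),
               key=total_completion_time)

current = list(range(n))                 # initial (arbitrary) sequence
best = current[:]
for _ in range(200):                     # iterated greedy loop
    partial = current[:]
    removed = random.sample(partial, 2)  # destruction phase
    for j in removed:
        partial.remove(j)
    for j in removed:                    # construction phase
        partial = greedy_insert(partial, j)
    if total_completion_time(partial) <= total_completion_time(current):
        current = partial
        if total_completion_time(current) < total_completion_time(best):
            best = current[:]
print("best total completion time:", total_completion_time(best))
print("best sequence:", best)
```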
Zhang, Huaguang; Song, Ruizhuo; Wei, Qinglai; Zhang, Tieyan
2011-12-01
In this paper, a novel heuristic dynamic programming (HDP) iteration algorithm is proposed to solve the optimal tracking control problem for a class of nonlinear discrete-time systems with time delays. The novel algorithm contains state updating, control policy iteration, and performance index iteration. To get the optimal states, the states are also updated. Furthermore, the "backward iteration" is applied to state updating. Two neural networks are used to approximate the performance index function and compute the optimal control policy for facilitating the implementation of the HDP iteration algorithm. Finally, we present two examples to demonstrate the effectiveness of the proposed HDP iteration algorithm.
Photonic Breast Tomography and Tumor Aggressiveness Assessment
2009-07-01
[Abstract not recovered: only disjoint fragments survive in the source, referencing the time-reversal matrix used in array processing for acoustic and radar time-reversal imaging, a convergence analysis of the iterative time-reversal process, and an affiliation with The Institute for Ultrafast Spectroscopy and Lasers, The City College.]
ITER Central Solenoid Module Fabrication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, John
The fabrication of the modules for the ITER Central Solenoid (CS) has started in a dedicated production facility located in Poway, California, USA. The necessary tools have been designed, built, installed, and tested in the facility to enable the start of production. The current schedule has first module fabrication completed in 2017, followed by testing and subsequent shipment to ITER. The Central Solenoid is a key component of the ITER tokamak providing the inductive voltage to initiate and sustain the plasma current and to position and shape the plasma. The design of the CS has been a collaborative effort between the US ITER Project Office (US ITER), the international ITER Organization (IO) and General Atomics (GA). GA's responsibility includes: completing the fabrication design, developing and qualifying the fabrication processes and tools, and then completing the fabrication of the seven 110 tonne CS modules. The modules will be shipped separately to the ITER site, and then stacked and aligned in the Assembly Hall prior to insertion in the core of the ITER tokamak. A dedicated facility in Poway, California, USA has been established by GA to complete the fabrication of the seven modules. Infrastructure improvements included thick reinforced concrete floors, a diesel generator for backup power, along with cranes for moving the tooling within the facility. The fabrication process for a single module requires approximately 22 months, followed by five months of testing, which includes preliminary electrical testing followed by high current (48.5 kA) tests at 4.7 K. The production of the seven modules is completed in a parallel fashion through ten process stations. The process stations have been designed and built, with most stations having completed testing and qualification for carrying out the required fabrication processes. The final qualification step for each process station is achieved by the successful production of a prototype coil. Fabrication of the first ITER module is in progress. The seven modules will be individually shipped to Cadarache, France upon their completion. This paper describes the processes and status of the fabrication of the CS Modules for ITER.
NASA Astrophysics Data System (ADS)
Furuichi, Mikito; Nishiura, Daisuke
2017-10-01
We developed dynamic load-balancing algorithms for Particle Simulation Methods (PSM) involving short-range interactions, such as Smoothed Particle Hydrodynamics (SPH), Moving Particle Semi-implicit method (MPS), and Discrete Element method (DEM). These are needed to handle billions of particles modeled in large distributed-memory computer systems. Our method utilizes flexible orthogonal domain decomposition, allowing the sub-domain boundaries in the column to be different for each row. The imbalances in the execution time between parallel logical processes are treated as a nonlinear residual. Load-balancing is achieved by minimizing the residual within the framework of an iterative nonlinear solver, combined with a multigrid technique in the local smoother. Our iterative method is suitable for adjusting the sub-domain frequently by monitoring the performance of each computational process because it is computationally cheaper in terms of communication and memory costs than non-iterative methods. Numerical tests demonstrated the ability of our approach to handle workload imbalances arising from a non-uniform particle distribution, differences in particle types, or heterogeneous computer architecture which was difficult with previously proposed methods. We analyzed the parallel efficiency and scalability of our method using Earth simulator and K-computer supercomputer systems.
NASA Astrophysics Data System (ADS)
Soni, Jigensh; Yadav, R. K.; Patel, A.; Gahlaut, A.; Mistry, H.; Parmar, K. G.; Mahesh, V.; Parmar, D.; Prajapati, B.; Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Pandya, K.; Chakraborty, A.
2013-02-01
Twin Source, an inductively coupled, two-RF-driver-based 180 kW, 1 MHz negative ion source experimental setup, has been initiated at IPR, Gandhinagar, under the Indian program, with the objective of understanding the physics and technology of multi-driver coupling. Twin Source [1] (TS) also provides an intermediate platform between the operational ROBIN [2] [5] and the eight-RF-driver-based Indian test facility, INTF [3]. A twin source experiment requires a central system to provide control, data acquisition and communication interfaces, referred to as TS-CODAC, for which a software architecture similar to the ITER CODAC Core System has been selected for implementation. The Core System is a software suite for ITER plant system manufacturers to use as a template for the development of their interface with CODAC. The ITER approach, in terms of technology, has been adopted for the TS-CODAC so as to develop the necessary expertise for developing and operating a control system based on the ITER guidelines, as a similar configuration needs to be implemented for the INTF. This cost-effective approach will provide an opportunity to evaluate and learn ITER CODAC technology, documentation, information technology and control system processes on an operational machine. The conceptual design of the TS-CODAC system has been completed. For complete control of the system, approximately 200 control signals and 152 acquisition signals are needed. In TS-CODAC, the required control loop time is in the range of 5 ms to 10 ms; therefore, for the control system, a PLC (Siemens S7-400) has been chosen as suggested in the ITER slow controller catalog. For the data acquisition, the maximum sampling interval required is 100 microseconds, and therefore a National Instruments (NI) PXIe system and NI 6259 digitizer cards have been selected as suggested in the ITER fast controller catalog. This paper presents the conceptual design of the TS-CODAC system based on the ITER CODAC Core software and applicable plant system integration processes.
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
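A minimal MLEM loop with a placeholder stopping rule (relative change of the Poisson log-likelihood, not the paper's statistical criterion) is sketched below on a toy 1-D system to show where such a rule enters the iteration.

```python
# MLEM with a simple stopping heuristic on a toy 1-D emission problem.
import numpy as np

rng = np.random.default_rng(8)
n_pix, n_det = 64, 96
H = rng.uniform(0.0, 1.0, size=(n_det, n_pix))           # system matrix (toy)
H /= H.sum(axis=0, keepdims=True)
x_true = np.zeros(n_pix)
x_true[20:30] = 50.0                                       # simple 1-D "object"
y = rng.poisson(H @ x_true)                                # noisy projection data

x = np.ones(n_pix)                                         # MLEM initial estimate
sens = H.sum(axis=0)
prev_ll = -np.inf
for k in range(1, 501):
    proj = H @ x
    x *= (H.T @ (y / np.maximum(proj, 1e-12))) / sens      # MLEM update
    ll = np.sum(y * np.log(np.maximum(proj, 1e-12)) - proj) # Poisson log-likelihood
    if k > 1 and abs(ll - prev_ll) < 1e-4 * abs(prev_ll):   # heuristic stopping rule
        break
    prev_ll = ll
print(f"stopped after {k} iterations, log-likelihood {ll:.2f}")
```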
NASA Astrophysics Data System (ADS)
Santoli, Salvatore
1994-01-01
The mechanistic interpretation of the communication process between cognitive hierarchical systems as an iterated pair of convolutions between the incoming discrete time series signals and the chaotic dynamics (CD) at the nm-scale of the perception (energy) wetware level, with the consequent feeding of the resulting collective properties to the CD software (symbolic) level, shows that the category of quality, largely present in Galilean quantitative-minded science, is to be increasingly made into quantity for finding optimum common codes for communication between different intelligent beings. The problem is similar to that solved by biological evolution, of communication between the conscious logic brain and the underlying unfelt ultimate extra-logical processes, as well as to the problem of the mind-body or the structure-function dichotomies. Perspective cybernated nanotechnological and/or nanobiological interfaces, and time evolution of the 'contact language' (the iterated dialogic process) as a self-organising system might improve human-alien understanding.
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor missions' environments would be unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by a simultaneous localization and mapping approach using either local or global methods. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier association processes. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprised of feature-to-feature and point-to-point approaches. This algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm falls back on the iterative closest point algorithm when linear features are lacking, as is typically the case in unstructured environments. The algorithm switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational time, which indicates the potential use of the new algorithm in real-time systems.
Planning as an Iterative Process
NASA Technical Reports Server (NTRS)
Smith, David E.
2012-01-01
Activity planning for missions such as the Mars Exploration Rover mission presents many technical challenges, including oversubscription, consideration of time, concurrency, resources, preferences, and uncertainty. These challenges have all been addressed by the research community to varying degrees, but significant technical hurdles still remain. In addition, the integration of these capabilities into a single planning engine remains largely unaddressed. However, I argue that there is a deeper set of issues that needs to be considered, namely the integration of planning into an iterative process that begins before the goals, objectives, and preferences are fully defined. This introduces a number of technical challenges for planning, including the ability to more naturally specify and utilize constraints on the planning process, the ability to generate multiple qualitatively different plans, and the ability to provide deep explanation of plans.
Loads specification and embedded plate definition for the ITER cryoline system
NASA Astrophysics Data System (ADS)
Badgujar, S.; Benkheira, L.; Chalifour, M.; Forgeas, A.; Shah, N.; Vaghela, H.; Sarkar, B.
2015-12-01
ITER cryolines (CLs) are a complex network of vacuum-insulated multi- and single-process pipe lines, distributed in three different areas at the ITER site. The CLs will support different operating loads during the machine lifetime, considered as either nominal, occasional or exceptional. The major loads, which form the design basis, are inertial, pressure, temperature, assembly, magnetic, snow, wind and enforced relative displacement, and are put together in the loads specification. Based on the defined load combinations, a conceptual estimation of reaction loads has been carried out for the lines located inside the Tokamak building. Adequate numbers of embedded plates (EPs) per line have been defined and integrated in the building design. The finalization of building EPs to support the lines, before the detailed design, is one of the major design challenges, as the usual logic of the design may be altered. At the ITER project level, it was important to finalize EPs to allow adequate design and timely availability of the Tokamak building. The paper describes the single loads and load combinations considered in the loads specification, and the approach for conceptual load estimation and selection of EPs for the Toroidal Field (TF) cryoline as an example, by converting the load combinations into two main load categories: pressure and seismic.
Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.
Xie, Xianming
2016-08-22
A fresh phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. This method is the result of combining an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model, and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, which is so far one of the most robust methods in the Bayesian framework for non-linear signal processing, is applied to perform noise suppression and phase unwrapping of interferometric fringes simultaneously for the first time, which can reduce the complexity and difficulty of the usual pre-filtering procedure followed by a phase unwrapping procedure, and can even remove the pre-filtering step. The robust phase gradient estimator is used to efficiently and accurately obtain phase gradient information from interferometric fringes, which is needed for the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method rapidly unwraps wrapped pixels along the path from the high-quality area to the low-quality area of wrapped phase images, which can greatly improve the efficiency of phase unwrapping. Results obtained from synthetic data and real data show that the proposed method can obtain better solutions with an acceptable time consumption, with respect to some of the most used algorithms.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1992-01-01
The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach based on the classical conjugate gradient method, known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arise in unsteady Navier-Stokes solvers at each time step.
Iterative reactions of transient boronic acids enable sequential C-C bond formation
NASA Astrophysics Data System (ADS)
Battilocchio, Claudio; Feist, Florian; Hafner, Andreas; Simon, Meike; Tran, Duc N.; Allwood, Daniel M.; Blakemore, David C.; Ley, Steven V.
2016-04-01
The ability to form multiple carbon-carbon bonds in a controlled sequence and thus rapidly build molecular complexity in an iterative fashion is an important goal in modern chemical synthesis. In recent times, transition-metal-catalysed coupling reactions have dominated in the development of C-C bond forming processes. A desire to reduce the reliance on precious metals and a need to obtain products with very low levels of metal impurities has brought a renewed focus on metal-free coupling processes. Here, we report the in situ preparation of reactive allylic and benzylic boronic acids, obtained by reacting flow-generated diazo compounds with boronic acids, and their application in controlled iterative C-C bond forming reactions is described. Thus far we have shown the formation of up to three C-C bonds in a sequence including the final trapping of a reactive boronic acid species with an aldehyde to generate a range of new chemical structures.
Model for Simulating a Spiral Software-Development Process
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Curley, Charles; Nayak, Umanath
2010-01-01
A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
Chhatbar, Pratik Y.; Kara, Prakash
2013-01-01
Neural activity leads to hemodynamic changes which can be detected by functional magnetic resonance imaging (fMRI). The determination of blood flow changes in individual vessels is an important aspect of understanding these hemodynamic signals. Blood flow can be calculated from the measurements of vessel diameter and blood velocity. When using line-scan imaging, the movement of blood in the vessel leads to streaks in space-time images, where streak angle is a function of the blood velocity. A variety of methods have been proposed to determine blood velocity from such space-time image sequences. Of these, the Radon transform is relatively easy to implement and has fast data processing. However, the precision of the velocity measurements is dependent on the number of Radon transforms performed, which creates a trade-off between the processing speed and measurement precision. In addition, factors like image contrast, imaging depth, image acquisition speed, and movement artifacts especially in large mammals, can potentially lead to data acquisition that results in erroneous velocity measurements. Here we show that pre-processing the data with a Sobel filter and iterative application of Radon transforms address these issues and provide more accurate blood velocity measurements. Improved signal quality of the image as a result of Sobel filtering increases the accuracy and the iterative Radon transform offers both increased precision and an order of magnitude faster implementation of velocity measurements. This algorithm does not use a priori knowledge of angle information and therefore is sensitive to sudden changes in blood flow. It can be applied on any set of space-time images with red blood cell (RBC) streaks, commonly acquired through line-scan imaging or reconstructed from full-frame, time-lapse images of the vasculature. PMID:23807877
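A rough sketch of the two ingredients, Sobel pre-filtering and coarse-to-fine (iterative) Radon angle search, is given below on a synthetic space-time image; the streak model and parameters are invented, and the mapping from the Radon peak angle to a physical velocity depends on the axis convention and scan calibration, which are not addressed here.

```python
# Estimate the streak orientation in a synthetic line-scan (space-time) image:
# Sobel pre-filter, coarse Radon angle search, then a refined search around the
# coarse peak, using projection variance as the alignment criterion.
import numpy as np
from scipy.ndimage import sobel
from skimage.transform import radon

def streak_image(angle_deg, size=128, n_streaks=40, seed=9):
    """Synthetic space-time image with streaks at the given orientation."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    y, x = np.mgrid[0:size, 0:size]
    slope = np.tan(np.deg2rad(angle_deg))
    for c in rng.uniform(-size, size, n_streaks):
        img += np.exp(-((x - slope * y - c) ** 2) / 2.0)
    return img + 0.2 * rng.standard_normal(img.shape)

def best_angle(img, angles):
    """Angle whose Radon projection has the largest variance (streak-aligned)."""
    sino = radon(img, theta=angles, circle=False)
    return angles[np.argmax(sino.var(axis=0))]

img = sobel(streak_image(25.0), axis=1)                  # Sobel pre-filter
coarse = best_angle(img, np.arange(0.0, 180.0, 2.0))     # coarse pass
fine = best_angle(img, np.arange(coarse - 2.0, coarse + 2.0, 0.05))  # refinement
print(f"coarse Radon peak {coarse:.1f} deg, refined peak {fine:.2f} deg")
```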
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.
2006-03-01
Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively. The optimization of the radial basis function network for this system was performed by a genetic algorithm, an artificial-intelligence optimization method with a high probability of obtaining the global optimum. From preliminary results, the required iterations were reduced from 5 to 2 for a simple dumbbell-shaped layout.
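A tiny Gaussian RBF regression illustrating a mapping from segment characteristics to edge shift is sketched below; the two features, the kernel width, and the "true" relationship are invented stand-ins for the segment descriptors used in the paper.

```python
# Gaussian radial-basis-function regression: learn segment features -> edge
# shift, then use the prediction as an OPC initial guess for new segments.
import numpy as np

rng = np.random.default_rng(10)
X = rng.uniform(0.0, 1.0, size=(200, 2))                    # segment features (toy)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)

centers = X[rng.choice(len(X), 20, replace=False)]          # RBF centres
width = 0.3                                                  # kernel width (assumed)

def phi(X):
    """Gaussian RBF feature matrix."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

w, *_ = np.linalg.lstsq(phi(X), y, rcond=None)              # train output weights

new_segments = np.array([[0.2, 0.8], [0.7, 0.1]])
initial_guess = phi(new_segments) @ w
print("predicted edge shifts (initial guesses):", np.round(initial_guess, 3))
```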
Hardware architecture design of image restoration based on time-frequency domain computation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Jing; Jiao, Zipeng
2013-10-01
Image restoration algorithms based on time-frequency domain computation (TFDC) are mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing and numerical calculations. Then, to improve commonality, the iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and the complex-number calculations. Eventually, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The result proves that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm commonality, hardware realizability and high efficiency.
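As an example of the kind of time/frequency-domain computation such an architecture accelerates, the sketch below implements a Wiener deconvolution entirely with 2-D FFT/IFFT and element-wise complex arithmetic; the blur kernel and noise-to-signal ratio are assumed values, not parameters from the paper.

```python
# Frequency-domain Wiener deconvolution built from 2-D FFT/IFFT and
# element-wise complex arithmetic (the operations singled out as costly).
import numpy as np

rng = np.random.default_rng(11)
img = rng.random((256, 256))

# Small Gaussian blur kernel, padded and centred at (0, 0) for FFT use.
ky, kx = np.mgrid[-4:5, -4:5]
kernel = np.exp(-(kx ** 2 + ky ** 2) / 4.0)
kernel /= kernel.sum()
pad = np.zeros_like(img)
pad[:9, :9] = kernel
pad = np.roll(pad, (-4, -4), axis=(0, 1))

H = np.fft.fft2(pad)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
blurred += 0.01 * rng.standard_normal(img.shape)

nsr = 1e-2                                       # noise-to-signal ratio (assumed)
G = np.conj(H) / (np.abs(H) ** 2 + nsr)          # Wiener restoration filter
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
print(f"RMS error: blurred {np.sqrt(np.mean((blurred - img) ** 2)):.4f}, "
      f"restored {np.sqrt(np.mean((restored - img) ** 2)):.4f}")
```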
The Iterative Design Process in Research and Development: A Work Experience Paper
NASA Technical Reports Server (NTRS)
Sullivan, George F. III
2013-01-01
The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.
Combined dry plasma etching and online metrology for manufacturing highly focusing x-ray mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berujon, S., E-mail: berujon@esrf.eu; Ziegler, E., E-mail: ziegler@esrf.eu; Cunha, S. da
A new figuring station was designed and installed at the ESRF beamline BM05. It allows the figuring of mirrors within an iterative process combining the advantages of online metrology with dry etching. The complete process takes place under a vacuum environment to minimize surface contamination, while non-contact surfacing tools open up the possibility of performing at-wavelength metrology and eliminating placement errors. The aim is to produce mirrors whose slopes do not deviate from the stigmatic profile by more than 0.1 µrad rms while keeping surface roughness within the acceptable limit of 0.1-0.2 nm rms. The desired elliptical mirror surface shape can be achieved in a few iterations in about a one-day time span. This paper describes some of the important aspects of the process regarding both the online metrology and the etching process.
The Use of Computer-Assisted Identification of ARIMA Time-Series.
ERIC Educational Resources Information Center
Brown, Roger L.
This study was conducted to determine the effects of using various levels of tutorial statistical software for the tentative identification of nonseasonal ARIMA models, a statistical technique proposed by Box and Jenkins for the interpretation of time-series data. The Box-Jenkins approach is an iterative process encompassing several stages of…
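As a concrete illustration of the iterative Box-Jenkins identification loop discussed here, the sketch below tentatively identifies a nonseasonal ARIMA order by fitting candidate models and comparing an information criterion. The grid search is an illustrative stand-in for the tutorial software described in the study, not a description of it.

```python
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def identify_arima(y, max_p=3, max_d=2, max_q=3):
    best = None
    for p, d, q in itertools.product(range(max_p + 1), range(max_d + 1), range(max_q + 1)):
        try:
            fit = ARIMA(y, order=(p, d, q)).fit()
        except Exception:
            continue  # some orders fail to estimate; skip them and keep iterating
        if best is None or fit.aic < best[1]:
            best = ((p, d, q), fit.aic, fit)
    return best

y = np.cumsum(np.random.randn(200))        # synthetic nonseasonal series
order, aic, fit = identify_arima(y)
print(order, aic)
```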
NASA Astrophysics Data System (ADS)
Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.
2017-12-01
The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations), and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature), and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.
Iterative near-term ecological forecasting: Needs, opportunities, and challenges
Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.
2018-01-01
Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.
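The iterative forecast cycle described above can be reduced to a forecast-observe-update loop. The sketch below is purely illustrative (a scalar random-walk state with a Kalman-style analysis step and made-up numbers); the paper does not prescribe a particular algorithm.

```python
import numpy as np

def forecast_cycle(observations, obs_var=0.5, proc_var=0.1, x0=0.0, p0=1.0):
    x, p = x0, p0
    records = []
    for y in observations:
        # Forecast step: project state and uncertainty forward one time step.
        x_f, p_f = x, p + proc_var
        # Analysis step: update the forecast in light of the new observation.
        k = p_f / (p_f + obs_var)
        x, p = x_f + k * (y - x_f), (1.0 - k) * p_f
        records.append((x_f, x, p))
    return records

obs = np.sin(np.linspace(0, 3, 30)) + 0.3 * np.random.randn(30)
for x_forecast, x_analysis, p in forecast_cycle(obs)[:5]:
    print(f"forecast={x_forecast:+.2f}  analysis={x_analysis:+.2f}  var={p:.2f}")
```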
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong
2013-11-01
An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in the subsequent iteration process and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the novel HRBP decoding algorithm can greatly reduce the amount of computation while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is considerable at a threshold value of 15, but in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is further increased, the HRBP algorithm gradually degenerates into the layered-BP algorithm, but at a BER of 10^-7 and a maximum of 30 iterations, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. Therefore, the novel HRBP decoding algorithm is more suitable for optical communication systems.
NASA Astrophysics Data System (ADS)
Li, Zhifu; Hu, Yueming; Li, Di
2016-08-01
For a class of linear discrete-time uncertain systems, a feedback feed-forward iterative learning control (ILC) scheme is proposed, which comprises an iterative learning controller and two current-iteration feedback controllers. The iterative learning controller is used to improve the performance along the iteration direction, and the feedback controllers are used to improve the performance along the time direction. First, the uncertain feedback feed-forward ILC system is represented by an uncertain two-dimensional Roesser model. Then, two robust control schemes are proposed: one ensures that the feedback feed-forward ILC system is bounded-input bounded-output stable along the time direction, and the other ensures that it is asymptotically stable along the time direction. Both schemes guarantee that the system is robustly monotonically convergent along the iteration direction. Sufficient conditions for robust convergence are then given in the form of a linear matrix inequality (LMI), which can also be used to determine the gain matrix of the feedback feed-forward iterative learning controller. Finally, simulation results are presented to demonstrate the effectiveness of the proposed schemes.
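A minimal sketch of the feedback feed-forward ILC idea on a first-order discrete-time plant (not the paper's LMI-based design): the learning gain L refines a feed-forward input across iterations while a feedback gain Kp acts within each trial. The plant, reference, and gains below are hand-picked for illustration only.

```python
import numpy as np

a, b = 0.8, 0.5                 # plant: y[t+1] = a*y[t] + b*u[t]
T = 50
r = np.sin(np.linspace(0, 2 * np.pi, T + 1))   # reference trajectory
L, Kp = 1.2, 0.4                # ILC learning gain, current-iteration feedback gain

u_ff = np.zeros(T)              # feed-forward input, refined across iterations
for iteration in range(20):
    y = np.zeros(T + 1)
    e = np.zeros(T + 1)
    u = np.zeros(T)
    for t in range(T):
        e[t] = r[t] - y[t]
        u[t] = u_ff[t] + Kp * e[t]          # feedback acts along the time axis
        y[t + 1] = a * y[t] + b * u[t]
    e[T] = r[T] - y[T]
    u_ff = u_ff + L * e[1:]                 # learning acts along the iteration axis
    print(f"iter {iteration:2d}  max |error| = {np.abs(e).max():.4f}")
```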
NASA Astrophysics Data System (ADS)
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process, and an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in this paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
NASA Astrophysics Data System (ADS)
Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.
2013-09-01
Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike other step-by-step differential equation solvers, the Runge-Kutta family of numerical integrators for example, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev Polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least square approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of discrete sampling and weighting adopted for the inner product definition, Runge phenomena errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be simultaneously computed in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of out-performing the state-of-practice in terms of computational cost and accuracy.
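To make the core idea concrete, here is a hedged sketch of the basic Picard-Chebyshev iteration for a scalar ODE (not the MCPI library itself, which handles vector states, long orbit arcs, and parallel evaluation): the integrand is fitted with Chebyshev polynomials over the whole interval, the fit is integrated analytically, and the entire path is updated until it stops changing.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def picard_chebyshev(f, x0, t0, tf, deg=40, n_iter=30, tol=1e-12):
    tau = C.chebpts1(deg + 1)                  # Chebyshev sample points on [-1, 1]
    t = 0.5 * (tf + t0) + 0.5 * (tf - t0) * tau
    scale = 0.5 * (tf - t0)                    # dt/dtau for the interval mapping
    x = np.full_like(tau, x0, dtype=float)     # initial path guess: constant
    for _ in range(n_iter):
        c = C.chebfit(tau, scale * f(t, x), deg)   # fit the integrand in the Chebyshev basis
        ci = C.chebint(c, lbnd=-1)                 # analytic integral, zero at tau = -1
        x_new = x0 + C.chebval(tau, ci)            # Picard update of the whole path
        if np.max(np.abs(x_new - x)) < tol:
            x = x_new
            break
        x = x_new
    return t, x

# Example: dx/dt = -2x, x(0) = 1; compare against the exact solution exp(-2t).
t, x = picard_chebyshev(lambda t, x: -2.0 * x, 1.0, 0.0, 1.0)
print(np.max(np.abs(x - np.exp(-2.0 * t))))
```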
A Novel Real-Time Reference Key Frame Scan Matching Method
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-01-01
Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach using either local or global methods. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan-matching method. Moreover, point-to-point scan matching is prone to outliers in the association process. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan-matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. The algorithm depends on the iterative closest point algorithm during the lack of linear features, which is typically exhibited in unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational time, indicating the potential use of the new algorithm with real-time systems. PMID:28481285
Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.
Bedggood, Phillip; Metha, Andrew
2010-01-01
Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
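A schematic example of the iterative framework contrasted above, using a SIRT/Landweber-type update with a toy system matrix standing in for the real imaging model; analytic methods invert the model in one pass, while iterative methods repeatedly refine an image estimate from the projection residual.

```python
import numpy as np

def sirt(A, b, n_iter=200):
    # Row/column sums normalize the backprojected residual (SIRT-style weighting).
    row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1.0
    col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sum
        x = x + (A.T @ residual) / col_sum
    return x

rng = np.random.default_rng(0)
A = rng.random((60, 30))         # toy projection matrix (nonnegative entries)
x_true = rng.random(30)
b = A @ x_true                   # noiseless "projections"
x_rec = sirt(A, b)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```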
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
Iterated function systems for DNA replication
NASA Astrophysics Data System (ADS)
Gaspard, Pierre
2017-10-01
The kinetic equations of DNA replication are shown to be exactly solved in terms of iterated function systems, running along the template sequence and giving the statistical properties of the copy sequences, as well as the kinetic and thermodynamic properties of the replication process. With this method, different effects due to sequence heterogeneity can be studied, in particular, a transition between linear and sublinear growths in time of the copies, and a transition between continuous and fractal distributions of the local velocities of the DNA polymerase along the template. The method is applied to the human mitochondrial DNA polymerase γ without and with exonuclease proofreading.
Integrated Collaborative Model in Research and Education with Emphasis on Small Satellite Technology
1996-01-01
feedback; the number of iterations in a complete iteration is referred to as loop depth or iteration depth, g (i). A data packet or packet is data...loop depth, g (i)) is either a finite (constant or variable) or an infinite value. 1) Finite loop depth, variable number of iterations Some problems...design time. The time needed for the first packet to leave and a new initial data to be introduced to the iteration is min(R * ( g (k) * (N+I) + k-1
Method and apparatus for iterative lysis and extraction of algae
Chew, Geoffrey; Boggs, Tabitha; Dykes, Jr., H. Waite H.; Doherty, Stephen J.
2015-12-01
A method and system for processing algae involves the use of an ionic liquid-containing clarified cell lysate to lyse algae cells. The resulting crude cell lysate may be clarified and subsequently used to lyse algae cells. The process may be repeated a number of times before a clarified lysate is separated into lipid and aqueous phases for further processing and/or purification of desired products.
Method for distinguishing multiple targets using time-reversal acoustics
Berryman, James G.
2004-06-29
A method for distinguishing multiple targets using time-reversal acoustics. Time-reversal acoustics uses an iterative process to determine the optimum signal for locating a strongly reflecting target in a cluttered environment. An acoustic array sends a signal into a medium, and then receives the returned/reflected signal. This returned/reflected signal is then time-reversed and sent back into the medium again, and again, until the signal being sent and received is no longer changing. At that point, the array has isolated the largest eigenvalue/eigenvector combination and has effectively determined the location of a single target in the medium (the one that is most strongly reflecting). After the largest eigenvalue/eigenvector combination has been determined, to determine the location of other targets, instead of sending back the same signals, the method sends back these time reversed signals, but half of them will also be reversed in sign. There are various possibilities for choosing which half to do sign reversal. The most obvious choice is to reverse every other one in a linear array, or as in a checkerboard pattern in 2D. Then, a new send/receive, send-time reversed/receive iteration can proceed. Often, the first iteration in this sequence will be close to the desired signal from a second target. In some cases, orthogonalization procedures must be implemented to assure the returned signals are in fact orthogonal to the first eigenvector found.
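Conceptually, the iterate-until-stationary process described above behaves like a power iteration on the array's round-trip operator, and the sign-reversal of half the elements steers subsequent iterations toward a second target. The sketch below illustrates that analogy with a made-up transfer matrix K standing in for the physical medium; it is not the patented apparatus.

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
RT = K.conj().T @ K                      # round trip: send, reflect, receive

def time_reversal_iterate(v, n_iter=50):
    for _ in range(n_iter):
        v = RT @ v                       # send, receive, time-reverse, re-send
        v = v / np.linalg.norm(v)        # normalize transmit power
    return v

v1 = time_reversal_iterate(rng.standard_normal(16) + 0j)   # strongest target's eigenvector

# Sign-reverse every other element of the converged signal, then iterate again,
# re-orthogonalizing against the first eigenvector as the text suggests.
v2 = v1.copy()
v2[::2] *= -1.0
for _ in range(50):
    v2 = RT @ v2
    v2 = v2 - (v1.conj() @ v2) * v1      # keep orthogonal to the first target
    v2 = v2 / np.linalg.norm(v2)
print(abs(v1.conj() @ v2))               # ~0: a distinct eigenvector, i.e. a second target
```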
Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms
NASA Astrophysics Data System (ADS)
Mohan, K. Aditya
2017-10-01
4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.
Fravolini, M L; Fabietti, P G
2014-01-01
This paper proposes a scheme for the control of the blood glucose in subjects with type-1 diabetes mellitus based on the subcutaneous (s.c.) glucose measurement and s.c. insulin administration. The tuning of the controller is based on an iterative learning strategy that exploits the repetitiveness of the daily feeding habit of a patient. The control consists of a mixed feedback and feedforward contribution whose parameters are tuned through an iterative learning process that is based on the day-by-day automated analysis of the glucose response to the infusion of exogenous insulin. The scheme does not require any a priori information on the patient insulin/glucose response, on the meal times and on the amount of ingested carbohydrates (CHOs). Thanks to the learning mechanism the scheme is able to improve its performance over time. A specific logic is also introduced for the detection and prevention of possible hypoglycaemia events. The effectiveness of the methodology has been validated using long-term simulation studies applied to a set of nine in silico patients considering realistic uncertainties on the meal times and on the quantities of ingested CHOs.
Accurate Micro-Tool Manufacturing by Iterative Pulsed-Laser Ablation
NASA Astrophysics Data System (ADS)
Warhanek, Maximilian; Mayr, Josef; Dörig, Christian; Wegener, Konrad
2017-12-01
Iterative processing solutions, including multiple cycles of material removal and measurement, are capable of achieving higher geometric accuracy by compensating for most deviations manifesting directly on the workpiece. Remaining error sources are the measurement uncertainty and the repeatability of the material-removal process including clamping errors. Due to the lack of processing forces, process fluids and wear, pulsed-laser ablation has proven high repeatability and can be realized directly on a measuring machine. This work takes advantage of this possibility by implementing an iterative, laser-based correction process for profile deviations registered directly on an optical measurement machine. This way efficient iterative processing is enabled, which is precise, applicable for all tool materials including diamond and eliminates clamping errors. The concept is proven by a prototypical implementation on an industrial tool measurement machine and a nanosecond fibre laser. A number of measurements are performed on both the machine and the processed workpieces. Results show production deviations within 2 μm diameter tolerance.
Deng, Qianwang; Gong, Guiliang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua
2017-01-01
Flexible job-shop scheduling problem (FJSP) is an NP-hard problem which inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm with T iteration times is used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, and each of them changes with the iteration times. Furthermore, numerical simulations are carried out based on some published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with the results of some well-known existing algorithms.
Combining Static Analysis and Model Checking for Software Analysis
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)
2003-01-01
We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial-order information which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial-order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial-order information is safe and the whole state space is explored.
Data Integration Tool: Permafrost Data Debugging
NASA Astrophysics Data System (ADS)
Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Pulsifer, P. L.; Strawhacker, C.; Yarmey, L.; Basak, R.
2017-12-01
We developed a Data Integration Tool (DIT) to significantly reduce the manual processing time needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the Global Terrestrial Network-Permafrost (GTN-P). The United States National Science Foundation funded this project through the National Snow and Ice Data Center (NSIDC) with the GTN-P to improve permafrost data access and discovery. We leverage this data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets (https://github.com/PermaData/DIT). Each widget does a specific operation, such as read, multiply by a constant, sort, plot, and write data. DIT allows the user to select and order the widgets as desired to meet their specific needs, incrementally interact with and evolve the widget workflows, and save those workflows for reproducibility. Taking ideas from visual programming found in the art and design domain, debugging and iterative design principles from software engineering, and the scientific data processing and analysis power of Fortran and Python, DIT was written for interactive, iterative data manipulation, quality control, processing, and analysis of inconsistent data in an easily installable application. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets ( 1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
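As a rough illustration of the widget idea (not DIT's actual API), each step can be modeled as a small named operation and a workflow as an ordered list of such operations that can be edited, re-run, and saved for reproducibility. The cleanup workflow below is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Widget:
    name: str
    op: Callable[[list], list]

def run_workflow(data, widgets: List[Widget], verbose=True):
    for w in widgets:
        data = w.op(data)               # each widget performs one specific operation
        if verbose:
            print(f"after {w.name}: {len(data)} records")
    return data

# Hypothetical cleanup workflow for a temperature record.
workflow = [
    Widget("drop_missing", lambda d: [x for x in d if x is not None]),
    Widget("to_celsius",   lambda d: [(x - 32.0) / 1.8 for x in d]),
    Widget("sort",         lambda d: sorted(d)),
]
print(run_workflow([41.0, None, 32.0, 50.0], workflow))
```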
A new iterative triclass thresholding technique in image segmentation.
Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin
2014-03-01
We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they are not processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied to the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD region. Then, the new TBD region is processed in a similar manner. The process stops when the difference between Otsu's thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
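A sketch of the iterative triclass procedure described above, using a plain NumPy Otsu implementation. The class bookkeeping and convergence test follow the text; tie handling and edge cases are simplified, and the test image is synthetic.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist).astype(float)          # class 0 (<= threshold) pixel counts
    w1 = w0[-1] - w0                            # class 1 (> threshold) pixel counts
    m0 = np.cumsum(hist * centers)
    m1 = m0[-1] - m0
    valid = (w0 > 0) & (w1 > 0)
    var_between = np.zeros_like(w0)
    var_between[valid] = (m0[valid] / w0[valid] - m1[valid] / w1[valid]) ** 2 \
                         * w0[valid] * w1[valid]
    return centers[np.argmax(var_between)]      # threshold maximizing between-class variance

def iterative_triclass(image, eps=1e-3, max_iter=50):
    foreground = np.zeros(image.shape, dtype=bool)
    tbd = np.ones(image.shape, dtype=bool)      # to-be-determined region
    t_prev = None
    for _ in range(max_iter):
        t = otsu_threshold(image[tbd])
        mu_low = image[tbd & (image <= t)].mean()
        mu_high = image[tbd & (image > t)].mean()
        foreground |= tbd & (image > mu_high)   # definite foreground; below mu_low is background
        new_tbd = tbd & (image > mu_low) & (image <= mu_high)
        converged = t_prev is not None and abs(t - t_prev) < eps
        if converged or not new_tbd.any():
            foreground |= new_tbd & (image > t) # final binary split of the last TBD region
            break
        tbd, t_prev = new_tbd, t
    return foreground

img = np.random.rand(64, 64)
img[20:40, 20:40] += 0.6                         # weak bright object
mask = iterative_triclass(img)
```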
Upwind relaxation methods for the Navier-Stokes equations using inner iterations
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Ng, Wing-Fai; Walters, Robert W.
1992-01-01
A subsonic and a supersonic problem are respectively treated by an upwind line-relaxation algorithm for the Navier-Stokes equations using inner iterations to accelerate steady-state solution convergence and thereby minimize CPU time. While the ability of the inner iterative procedure to mimic the quadratic convergence of the direct solver method is attested to in both test problems, some of the nonquadratic inner iterative results are noted to have been more efficient than the quadratic. In the more successful, supersonic test case, inner iteration required only about 65 percent of the line-relaxation method-entailed CPU time.
NASA Astrophysics Data System (ADS)
Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.
2017-09-01
Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is developed to implement the KWA in order to compensate for the insufficient hardware resources of a single FPGA, and to increase the parallel processing ability and scalability of the system.
Photoacoustic image reconstruction via deep learning
NASA Astrophysics Data System (ADS)
Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes
2018-02-01
Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow prior knowledge such as smoothness, total variation (TV) or sparsity constraints to be included. These algorithms tend to be time-consuming, as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks: for example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained before the reconstruction process on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
FBILI method for multi-level line transfer
NASA Astrophysics Data System (ADS)
Kuzmanovska, O.; Atanacković, O.; Faurobert, M.
2017-07-01
Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such non-linear and non-local problem is as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme that is characterized by a very high convergence rate without the need of complementing it with additional acceleration techniques. In this paper we make the implementation of the FBILI method to the multilevel atom line transfer in 1D more explicit. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of CaII line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well known code MULTI.
Department of Defense Costing References Web. Phase 1. Establishing the Foundation.
1997-03-01
a functional economic analysis under one set of constraints and having to repeat the entire process for the MAISRC. Recommendations for automated... MAISRC's acquisition oversight process. The cost and cycle time for each iteration can be on the order of $300,000 and 6 months, respectively... Institute resources were expected to become available at the conclusion of another BPR project. The contents list for the first Business Process
Effect of thick blanket modules on neoclassical tearing mode locking in ITER
La Haye, R. J.; Paz-Soldan, C.; Liu, Y. Q.
2016-11-03
The rotation of m/n = 2/1 tearing modes can be slowed and stopped (i.e. locked) by eddy currents induced in resistive walls in conjunction with residual error fields that provide a final 'notch' point. This is a particular issue in ITER, with its large inertia and low applied torque (m and n are the poloidal and toroidal mode numbers, respectively). Previous estimates of tolerable 2/1 island widths in ITER found that the ITER electron cyclotron current drive (ECCD) system could catch and subdue such islands before they persisted long enough and grew large enough to lock. These estimates were based on a forecast of initial island rotation using the n = 1 resistive penetration time of the inner vacuum vessel wall and benchmarked to DIII-D high-rotation plasmas. However, rotating tearing modes in ITER will also induce eddy currents in the blanket as the effective first wall that can shield the inner vessel. The closer-fitting blanket wall has a much shorter time constant and should allow several times smaller islands to lock several times faster in ITER than previously considered; this challenges the ECCD stabilization. Here, recent DIII-D ITER baseline scenario (IBS) plasmas with low rotation through small applied torque allow better modeling and scaling to ITER with the blanket as the first resistive wall.
AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.
Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S
2017-09-01
Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost in quality.
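For background, here is a hedged sketch of the dictionary-matching step that such accelerated search methods speed up (not AIR-MRF itself): each voxel's fingerprint is matched to the dictionary atom with the highest inner-product magnitude, and the atom's (T1, T2) label becomes the parameter estimate. The dictionary, labels, and signals below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_atoms, n_vox = 500, 1000, 64          # sequence length, dictionary size, voxels
dictionary = rng.standard_normal((n_atoms, n_t))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
labels = np.stack([rng.uniform(300, 2000, n_atoms),       # hypothetical T1 grid (ms)
                   rng.uniform(20, 300, n_atoms)], axis=1)  # hypothetical T2 grid (ms)

true_idx = rng.integers(0, n_atoms, n_vox)
signals = dictionary[true_idx] + 0.05 * rng.standard_normal((n_vox, n_t))

corr = signals @ dictionary.T                 # inner products with every atom
best = np.argmax(np.abs(corr), axis=1)        # best-matching atom per voxel
t1_t2_maps = labels[best]                     # parameter estimates per voxel
print((best == true_idx).mean())              # fraction matched correctly
```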
ERIC Educational Resources Information Center
Fraboni, Michael; Moller, Trisha
2008-01-01
Fractal geometry offers teachers great flexibility: It can be adapted to the level of the audience or to time constraints. Although easily explained, fractal geometry leads to rich and interesting mathematical complexities. In this article, the authors describe fractal geometry, explain the process of iteration, and provide a sample exercise.…
Directionally Solidified Eutectic Ceramics for Multifunctional Aerospace Applications
2013-01-01
eutectic materials development through a new initiative entitled the Boride Eutectic Project. These results for the first time organize and populate materials... property databases, and utilize an iterative feedback routine to constantly improve the design process of the boride eutectics LaB6-MeB2 (Me = Zr, Hf, Ti
Iteration and Prototyping in Creating Technical Specifications.
ERIC Educational Resources Information Center
Flynt, John P.
1994-01-01
Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)
Zhao, Ming; Li, Yu; Peng, Leilei
2014-01-01
We report a fast non-iterative lifetime data analysis method for the Fourier multiplexed frequency-sweeping confocal FLIM (Fm-FLIM) system [Opt. Express 22, 10221 (2014)]. The new method, named the R-method, allows fast multi-channel lifetime image analysis in the system's FPGA data processing board. Experimental tests proved that the performance of the R-method is equivalent to that of single-exponential iterative fitting, and its sensitivity is well suited for time-lapse FLIM-FRET imaging of live cells, for example cyclic adenosine monophosphate (cAMP) level imaging with GFP-Epac-mCherry sensors. With the R-method and its FPGA implementation, multi-channel lifetime images can now be generated in real time on the multi-channel frequency-sweeping FLIM system, and live readout of FRET sensors can be performed during time-lapse imaging. PMID:25321778
A novel dynamical community detection algorithm based on weighting scheme
NASA Astrophysics Data System (ADS)
Li, Ju; Yu, Kai; Hu, Ke
2015-12-01
Network dynamics plays an important role in analyzing the correlation between function properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates the iterative process of the membership vector with a weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness to improve the speed and accuracy of community structure detection. To estimate the optimal stop time of the iteration, we utilize a new stability measure defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. The algorithm naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.
Acceleration of GPU-based Krylov solvers via data transfer reduction
Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...
2015-04-08
Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels, instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a graphics processing unit specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit's computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance graphics processing unit accelerated Krylov subspace iterative methods.
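For orientation only, the following is a plain CPU reference run of the Biconjugate Gradient Stabilized solver using SciPy's generic implementation, i.e. the algorithm whose GPU formulation the study optimizes through kernel fusion and a specialized sparse matrix-vector product; the test matrix is an assumed 1-D Poisson-like stencil, not the paper's benchmark set.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

n = 10000
# Sparse, diagonally dominant test matrix (tridiagonal stencil), stored as CSR.
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = bicgstab(A, b)          # info == 0 indicates convergence
print(info, np.linalg.norm(A @ x - b))
```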
Improving absolute gravity estimates by the L p -norm approximation of the ballistic trajectory
NASA Astrophysics Data System (ADS)
Nagornyi, V. D.; Svitlov, S.; Araya, A.
2016-04-01
Iteratively re-weighted least squares (IRLS) was used to simulate the L p -norm approximation of the ballistic trajectory in absolute gravimeters. Two iterations of the IRLS delivered sufficient accuracy of the approximation without a significant bias. The simulations were performed on different samplings and perturbations of the trajectory. For platykurtic distributions of the perturbations, the L p -approximation with 3 < p < 4 was found to yield several times more precise gravity estimates compared to standard least squares. The simulation results were confirmed by processing real gravity observations performed under excessive noise conditions.
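A simulation-style sketch of the approach described above: the ballistic model z(t) = z0 + v0*t + 0.5*g*t^2 is fitted with IRLS so that the solution approximates an L_p-norm fit, using two re-weighting passes as in the abstract. The noise level, drop duration, and p value below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def irls_lp_fit(t, z, p=3.5, n_iter=2, eps=1e-12):
    A = np.column_stack([np.ones_like(t), t, 0.5 * t ** 2])   # columns for [z0, v0, g]
    w = np.ones_like(t)
    for _ in range(n_iter + 1):                   # first pass is ordinary least squares
        Aw = A * w[:, None]
        coef, *_ = np.linalg.lstsq(Aw, w * z, rcond=None)     # weighted LS via sqrt-weights
        r = z - A @ coef
        w = np.abs(r) ** ((p - 2.0) / 2.0) + eps  # sqrt of the L_p weight |r|^(p-2)
    return coef

rng = np.random.default_rng(2)
t = np.linspace(0.0, 0.2, 700)                    # drop duration ~0.2 s (illustrative)
z_true = 0.001 + 0.1 * t + 0.5 * 9.81 * t ** 2
z = z_true + 1e-9 * rng.standard_normal(t.size)   # illustrative measurement noise
z0, v0, g = irls_lp_fit(t, z)
print(g)
```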
A comparison of multiprocessor scheduling methods for iterative data flow architectures
NASA Technical Reports Server (NTRS)
Storch, Matthew
1993-01-01
A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.
Campbell, Megan M; Susser, Ezra; Mall, Sumaya; Mqulwana, Sibonile G; Mndini, Michael M; Ntola, Odwa A; Nagdee, Mohamed; Zingela, Zukiswa; Van Wyk, Stephanus; Stein, Dan J
2017-01-01
Obtaining informed consent is a great challenge in global health research. There is a need for tools that can screen for and improve potential research participants' understanding of the research study at the time of recruitment. Limited empirical research has been conducted in low and middle income countries, evaluating informed consent processes in genomics research. We sought to investigate the quality of informed consent obtained in a South African psychiatric genomics study. A Xhosa language version of the University of California, San Diego Brief Assessment of Capacity to Consent Questionnaire (UBACC) was used to screen for capacity to consent and improve understanding through iterative learning in a sample of 528 Xhosa people with schizophrenia and 528 controls. We address two questions: firstly, whether research participants' understanding of the research study improved through iterative learning; and secondly, what were predictors for better understanding of the research study at the initial screening? During screening 290 (55%) cases and 172 (33%) controls scored below the 14.5 cut-off for acceptable understanding of the research study elements, however after iterative learning only 38 (7%) cases and 13 (2.5%) controls continued to score below this cut-off. Significant variables associated with increased understanding of the consent included the psychiatric nurse recruiter conducting the consent screening, higher participant level of education, and being a control. The UBACC proved an effective tool to improve understanding of research study elements during consent, for both cases and controls. The tool holds utility for complex studies such as those involving genomics, where iterative learning can be used to make significant improvements in understanding of research study elements. The UBACC may be particularly important in groups with severe mental illness and lower education levels. Study recruiters play a significant role in managing the quality of the informed consent process.
2.5D transient electromagnetic inversion with OCCAM method
NASA Astrophysics Data System (ADS)
Li, R.; Hu, X.
2016-12-01
In the application of the time-domain electromagnetic method (TEM), multidimensional inversion schemes have been applied for imaging over the past few decades to overcome the large errors produced by 1D model inversion when the subsurface structure is complex. The current mainstream multidimensional inversion for EM data, with the finite-difference time-domain (FDTD) forward method, is mainly implemented by the nonlinear conjugate gradient (NLCG) method. But the convergence rate of NLCG depends heavily on the Lagrange multiplier and it may fail to converge. We use the OCCAM inversion method to avoid this weakness. OCCAM inversion is proven to be a more stable and reliable method to image the subsurface 2.5D electrical conductivity. Firstly, we simulate the 3D transient EM fields governed by Maxwell's equations with the FDTD method. Secondly, we use the OCCAM inversion scheme, with an appropriate objective error functional we established, to image the 2.5D structure. The data-space OCCAM inversion (DASOCC) strategy based on the OCCAM scheme is also given in this paper. The sensitivity matrix is calculated with the method of time-integrated back-propagated fields. The imaging results of the example model shown in Fig. 1 prove that the OCCAM scheme is an efficient inversion method for TEM with the FDTD method. The inversion iterations show a strong ability to converge within a few iterations. Summarizing the imaging process, we can draw the following conclusions. Firstly, the 2.5D imaging in the FDTD system with OCCAM inversion demonstrates that we can get the desired imaging results for the resistivity structure in a homogeneous half-space. Secondly, the imaging results usually do not over-depend on the initial model, but the number of iterations can be reduced markedly if the background resistivity of the initial model is close to the true model. So it is better to set the initial model based on other geologic information in applications. When the background resistivity fits the true model well, imaging the anomalous body needs only a few iteration steps. Finally, the speed of imaging vertical boundaries is slower than the speed of imaging horizontal boundaries.
Cross Sectional Study of Agile Software Development Methods and Project Performance
ERIC Educational Resources Information Center
Lambert, Tracy
2011-01-01
Agile software development methods, characterized by delivering customer value via incremental and iterative time-boxed development processes, have moved into the mainstream of the Information Technology (IT) industry. However, despite a growing body of research which suggests that a predictive manufacturing approach, with big up-front…
Diagnostics of Dielectric Materials with Several Relaxation Times
NASA Astrophysics Data System (ADS)
Karpov, A. G.; Klemeshev, V. A.
2018-04-01
A set of means for the detection and preprocessing of dielectrometric information has been suggested for studying the polarization/depolarization of dielectrics. Special attention has been paid to the processing of dielectrometric data for inhomogeneous materials using dielectric diagrams. A rapid analysis has been carried out, the results of which can be used as initial approximations in more accurate (more complicated and time-consuming) iterative algorithms for model fitting.
Baseline Architecture of ITER Control System
NASA Astrophysics Data System (ADS)
Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.
2011-08-01
The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collection, archiving, analysis and presentation of all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks defined. Special attention is given to timing and real-time communication for distributed control. Finally we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.
NASA Astrophysics Data System (ADS)
Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Strawhacker, C.; Pulsifer, P. L.; Thurmes, N.
2016-12-01
The United States National Science Foundation funded PermaData project led by the National Snow and Ice Data Center (NSIDC) with a team from the Global Terrestrial Network for Permafrost (GTN-P) aimed to improve permafrost data access and discovery. We developed a Data Integration Tool (DIT) to significantly reduce the manual processing time needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the GTN-P. We leverage this data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets. Each widget does a specific operation, such as read, multiply by a constant, sort, plot, and write data. DIT allows the user to select and order the widgets as desired to meet their specific needs. Originally it was written to capture a scientist's personal, iterative, data manipulation and quality control process of visually and programmatically iterating through inconsistent input data, examining it to find problems, adding operations to address the problems, and rerunning until the data could be translated into the GTN-P standard format. Iterative development of this tool led to a Fortran/Python hybrid and then, with consideration of users, licensing, version control, packaging, and workflow, to a publicly available, robust, usable application. Transitioning to Python allowed the use of open source frameworks for the workflow core and integration with a JavaScript graphical workflow interface. DIT is targeted to automatically handle 90% of the data processing for field scientists, modelers, and non-discipline scientists. It is available as an open source tool in GitHub packaged for a subset of Mac, Windows, and UNIX systems as a desktop application with a graphical workflow manager. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (about 1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
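To make the widget idea concrete, here is a minimal sketch of a widget-style pipeline: each function performs one operation and the user chains them in any order. The function names and fields are illustrative only and are not the actual DIT API.

```python
# Hedged sketch of a widget-based workflow: read, scale, sort, write.
import csv

def read_csv(path):
    with open(path, newline="") as f:
        return [dict(row) for row in csv.DictReader(f)]

def scale(records, field, factor):
    return [{**r, field: float(r[field]) * factor} for r in records]

def sort_by(records, field):
    return sorted(records, key=lambda r: float(r[field]))

def write_csv(records, path):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)

def run(workflow, data=None):
    # A "workflow" is an ordered list of (widget, kwargs) pairs.
    for widget, kwargs in workflow:
        data = widget(data, **kwargs) if data is not None else widget(**kwargs)
    return data

# Example (hypothetical file and column names):
# run([(read_csv, {"path": "site.csv"}),
#      (scale, {"field": "soil_temp", "factor": 1.0}),
#      (sort_by, {"field": "depth"}),
#      (write_csv, {"path": "site_gtnp.csv"})])
```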
Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU
Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.
2013-01-01
List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
NASA Astrophysics Data System (ADS)
Maheshwari, A.; Pathak, H. A.; Mehta, B. K.; Phull, G. S.; Laad, R.; Shaikh, M. S.; George, S.; Joshi, K.; Khan, Z.
2017-04-01
ITER Vacuum Vessel is a torus-shaped, double wall structure. The space between the double walls of the VV is filled with In-Wall Shielding Blocks (IWS) and water. The main purpose of IWS is to provide neutron shielding during ITER plasma operation and to reduce ripple of the Toroidal Magnetic Field (TF). Although In-Wall Shield Blocks (IWS) will be submerged in water between the walls of the ITER Vacuum Vessel (VV), the Outgassing Rate (OGR) of IWS materials plays a significant role in leak detection of the ITER Vacuum Vessel. The thermal outgassing rate of a material critically depends on the surface roughness of the material. During the leak detection process using an RGA-equipped leak detector and helium tracer gas, there will be a spill-over of mass 3 and mass 2 to mass 4, which creates a background reading. The helium background will also have a contribution from hydrogen, so it is necessary to ensure a low OGR for hydrogen. To achieve an effective leak test it is required to obtain a background below 1 × 10⁻⁸ mbar·l·s⁻¹, and hence the maximum outgassing rate of IWS materials should comply with the maximum outgassing rate required for hydrogen, i.e. 1 × 10⁻¹⁰ mbar·l·s⁻¹·cm⁻² at room temperature. As IWS materials are special materials developed for the ITER project, it is necessary to ensure compliance of the outgassing rate with this requirement. There is a possibility that gases are diffused into the material at the time of production. So, to validate the production process of the materials as well as the manufacturing of the final product from this material, three coupons of each IWS material have been manufactured with the same technique that is used in manufacturing the IWS blocks. Manufacturing records of these coupons have been approved by the ITER-IO (International Organization). Outgassing rates of these coupons have been measured at room temperature and found within the acceptable limit to obtain the required helium background. On the basis of these measurements, test reports have been generated and approved by the IO. This paper describes the preparation, characteristics and cleaning procedure of the samples, a description of the system, and the outgassing rate measurements of these samples to ensure accurate leak detection.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In addition, the extra work required by iterative schemes can be structured to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is also investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors to be a reasonably small value N (between 5 and 20) the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm such as matrix-vector multiplies, matrix additions and subtractions can all be vectorized and parallelized efficiently.
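As a toy illustration of the restarted-GMRES idea with a small Krylov subspace, the sketch below solves one linearized implicit update with SciPy's GMRES; the matrix and right-hand side are stand-ins, not a Navier-Stokes Jacobian.

```python
# Hedged sketch: one implicit pseudo time step whose linear update is obtained
# with restarted GMRES, keeping the Krylov dimension small (restart ~ 5-20).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 200
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")  # stand-in Jacobian-like operator
q = np.zeros(n)        # current solution (flow-state surrogate)
rhs = np.ones(n)       # stand-in residual / right-hand side

# GMRES only minimizes the residual norm, so a small restart length and a
# modest iteration cap keep the per-step cost low.
dq, info = gmres(A, rhs, x0=np.zeros(n), restart=20, maxiter=50)
q += dq
print("GMRES convergence flag:", info)   # 0 means the tolerance was met
```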
Strong Convergence of Iteration Processes for Infinite Family of General Extended Mappings
NASA Astrophysics Data System (ADS)
Hussein Maibed, Zena
2018-05-01
In this paper, we introduce the concept of a general extended mapping, which is independent of nonexpansive mappings, and give an iteration process for families of quasi-nonexpansive and general extended mappings. The existence of common fixed points is also studied for these processes in Hilbert spaces.
Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang
2015-05-01
Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiations in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method on the source identification and sound radiation modeling.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Hixon, Duane; Sankar, L. N.
1993-01-01
During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist such as the transonic small disturbance analyses (TSD), transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculation; from preliminary calculations, this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architecture of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A
2016-04-01
The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.
2016-01-01
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
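For reference, the sketch below shows a plain FISTA iteration for a generic l1-regularized least-squares problem (soft-thresholding proximal step). It is a minimal, generic sketch only; the accelerated CBCT variants described above replace the gradient step with an OS-SART subproblem, which is not reproduced here.

```python
# Minimal FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold prox
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)               # momentum step
        x, t = x_new, t_new
    return x
```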
NASA Astrophysics Data System (ADS)
Ialenti, Vincent Francis
This ethnography reconsiders nuclear waste risk's deep time horizons' often-sensationalized aesthetics of horror, sublimity, and awe. It does so by tracking how Finland's nuclear energy and waste experts made visions of distant future Finlands appear more intelligible through mundane corporate, regulatory, financial, and technoscientific practices. Each chapter unpacks how informants iterated and reiterated traces of the very familiar to establish shared grounds of continuity for moving forward in time. Chapter 1 explores how Finland's energy sector's "mankala" cooperative corporate form was iterated and reiterated to give shape to political and financial time horizons. Chapter 2 explores how workplace role distinctions between recruit/retiree and junior/senior were iterated and reiterated to reckon nuclear personnel successions' intergenerational horizons. Chapter 3 explores how input/output and part/whole distinctions were iterated and reiterated to help model distant future worlds in a portfolio of "Safety Case" evidence made to demonstrate the Olkiluoto repository's safety to Finnish nuclear regulator STUK. Chapter 4 explores how Safety Case experts iterated and reiterated memories of a deceased predecessor figure in everyday engagements with deep time. What emerges are three insights about how futures attain discernible features--insights about the "continuity," "thinkability," and "extensibility" of expert thought--that, I argue, can help twenty-first century experts better navigate not only deep time, but also unknown futures of nuclear technologies, planetary environment, and expertise itself.
Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing
NASA Astrophysics Data System (ADS)
Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng
2017-05-01
Self-focusing is observed in nonlinear materials owing to the interaction between laser and matter as the laser beam propagates. Numerical simulation strategies such as the beam propagation method (BPM) based on the nonlinear Schrödinger equation and ray tracing based on Fermat's principle have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is cut into many thin slices, as in the existing approaches; but instead of using the paraxial approximation and the split-step Fourier transform, a large number of sampled real rays are traced step by step through the system, with the refractive index and laser intensity updated by iteration. In this process a smoothing treatment is employed to generate a laser density distribution at each slice to decrease the error caused by under-sampling. The characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration, which solves the problem of unknown parameters in the material caused by the causal relationship between laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering applications, with lower time complexity, and is capable of numerically simulating the self-focusing process in systems that include both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and optical paths or phases, it will be possible to simulate the superposition effects of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.
Zhao, Jing; Zong, Haili
2018-01-01
In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.
Wavefront correction with Kalman filtering for the WFIRST-AFTA coronagraph instrument
NASA Astrophysics Data System (ADS)
Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler D.
2015-09-01
The only way to characterize most exoplanets spectrally is via direct imaging. For example, the Coronagraph Instrument (CGI) on the proposed Wide-Field Infrared Survey Telescope-Astrophysics Focused Telescope Assets (WFIRST-AFTA) mission plans to image and characterize several cool gas giants around nearby stars. The integration time on these faint exoplanets will be many hours to days. A crucial assumption for mission planning is that the time required to dig a dark hole (a region of high star-to-planet contrast) with deformable mirrors is small compared to science integration time. The science camera must be used as the wavefront sensor to avoid non-common path aberrations, but this approach can be quite time intensive. Several estimation images are required to build an estimate of the starlight electric field before it can be partially corrected, and this process is repeated iteratively until high contrast is reached. Here we present simulated results of batch process and recursive wavefront estimation schemes. In particular, we test a Kalman filter and an iterative extended Kalman filter (IEKF) to reduce the total exposure time and improve the robustness of wavefront correction for the WFIRST-AFTA CGI. An IEKF or other nonlinear filter also allows recursive, real-time estimation of sources incoherent with the star, such as exoplanets and disks, and may therefore reduce detection uncertainty.
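The sketch below illustrates the generic linear Kalman filter predict/update cycle behind the recursive estimation idea mentioned above. It is a toy sketch only; the actual CGI estimator works on the starlight electric field with deformable-mirror probes and an (iterated) extended Kalman filter, which this code does not reproduce.

```python
# Hedged sketch of one linear Kalman filter step (predict + measurement update).
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```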
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricapito, I.; Calderoni, P.; Poitevin, Y.
2015-03-15
Tritium processing technologies of the two European Test Blanket Systems (TBS), HCLL (Helium Cooled Lithium Lead) and HCPB (Helium Cooled Pebble Bed), play an essential role in meeting the main objectives of the TBS experimental campaign in ITER. Compliance with the ITER interface requirements, in terms of space availability, service fluids, limits on tritium release, and constraints on maintenance, is driving the design of the TBS tritium processing systems. Other requirements come from the characteristics of the relevant test blanket module and the scientific programme that has to be developed and implemented. This paper identifies the main requirements for the design of the TBS tritium systems and equipment and, at the same time, provides an updated overview of the current design status, mainly focusing on the tritium extractor from Pb-16Li and TBS tritium accountancy. Considerations are also given on the possible extrapolation to the DEMO breeding blanket. (authors)
NASA Astrophysics Data System (ADS)
Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.
2008-11-01
We present here an implementation of a least squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method appears to be a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal components determination followed by the least squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is to obtain the set of sine functions embedded in the series analyzed in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for a deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
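A minimal sketch of the iterative sine-extraction idea follows: repeatedly fit the dominant sinusoid by nonlinear least squares and subtract it, so that components come out in decreasing order of significance. This is a generic illustration in Python (the paper's implementation is in Scilab and works on principal components, not the raw series).

```python
# Hedged sketch: peel off sinusoids one at a time from a uniformly sampled series.
import numpy as np
from scipy.optimize import curve_fit

def sine(t, amp, freq, phase, offset):
    return amp * np.sin(2 * np.pi * freq * t + phase) + offset

def extract_sines(t, y, n_components=3):
    residual, components = y.astype(float), []
    for _ in range(n_components):
        spec = np.abs(np.fft.rfft(residual - residual.mean()))
        freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
        f0 = freqs[np.argmax(spec[1:]) + 1]                # seed frequency from FFT peak (skip DC)
        p0 = [residual.std() * np.sqrt(2), f0, 0.0, residual.mean()]
        params, _ = curve_fit(sine, t, residual, p0=p0)    # iterative least squares fit
        components.append(params)
        residual = residual - sine(t, *params)
    return components, residual
```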
Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D
2016-03-01
The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they must not be periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, and that, therefore, the time-invariance assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariance assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA presents an adequate performance and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed.
Language Evolution by Iterated Learning with Bayesian Agents
ERIC Educational Resources Information Center
Griffiths, Thomas L.; Kalish, Michael L.
2007-01-01
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute…
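The following toy simulation illustrates the iterated-learning dynamic analyzed in the paper for Bayesian agents who sample hypotheses from their posterior: each learner sees data produced by the previous one, and over many generations the chain's distribution over hypotheses converges to the prior. The two-hypothesis setup and all numbers are illustrative.

```python
# Hedged toy simulation of iterated learning with Bayesian "sampler" agents.
import numpy as np

rng = np.random.default_rng(0)
prior = np.array([0.7, 0.3])        # P(hypothesis 0), P(hypothesis 1)
likelihood = np.array([0.9, 0.2])   # P(utterance "A" | hypothesis)

def learn(data):
    # posterior over the two hypotheses given a sequence of "A"(0)/"B"(1) utterances
    ll = np.array([np.prod(np.where(data == 0, p, 1 - p)) for p in likelihood])
    post = prior * ll
    return post / post.sum()

h, counts = 0, np.zeros(2)
for generation in range(5000):
    data = (rng.random(5) >= likelihood[h]).astype(int)   # 0 = "A", 1 = "B"
    h = rng.choice(2, p=learn(data))                      # learner samples a hypothesis
    counts[h] += 1
print("empirical hypothesis frequencies:", counts / counts.sum())  # approaches the prior
```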
Design for disassembly and sustainability assessment to support aircraft end-of-life treatment
NASA Astrophysics Data System (ADS)
Savaria, Christian
Gas turbine engine design is a multidisciplinary and iterative process. Many design iterations are necessary to address the challenges among the disciplines. In the creation of a new engine architecture, the design time is crucial in capturing new business opportunities. At the detail design phase, it was proven very difficult to correct an unsatisfactory design. To overcome this difficulty, the concept of Multi-Disciplinary Optimization (MDO) at the preliminary design phase (Preliminary MDO or PMDO) is used, allowing more freedom to make design changes. PMDO also reduces the design time at the preliminary design phase. The concept of PMDO was used to create parametric models and new correlations for high-pressure gas turbine housing and shroud segments as part of a new design process. First, dedicated parametric models were created because of their reusability and versatility. Their ease of use compared to non-parameterized models allows more design iterations and thus reduces setup and design time. Second, geometry correlations were created to minimize the number of parameters used in turbine housing and shroud segment design. Since the turbine housing and the shroud segment geometries are required in tip clearance analyses, care was taken not to oversimplify the parametric formulation. In addition, a user interface was developed to interact with the parametric models and improve the design time. Third, the cooling flow predictions require many engine parameters (i.e. geometric and performance parameters and air properties) and a reference shroud segment. A second correlation study was conducted to minimize the number of engine parameters required in the cooling flow predictions and to facilitate the selection of a reference shroud segment. Finally, the parametric models, the geometry correlations, and the user interface resulted in a time saving of 50% and an increase in accuracy of 56% in the new design system compared to the existing design system. Also, regarding the cooling flow correlations, the number of engine parameters was reduced by a factor of 6 to create a simplified prediction model and hence a faster shroud segment selection process.
Ask-the-expert: Active Learning Based Knowledge Discovery Using the Expert
NASA Technical Reports Server (NTRS)
Das, Kamalika; Avrekh, Ilya; Matthews, Bryan; Sharma, Manali; Oza, Nikunj
2017-01-01
Often the manual review of large data sets, either for purposes of labeling unlabeled instances or for classifying meaningful results from uninteresting (but statistically significant) ones is extremely resource intensive, especially in terms of subject matter expert (SME) time. Use of active learning has been shown to diminish this review time significantly. However, since active learning is an iterative process of learning a classifier based on a small number of SME-provided labels at each iteration, the lack of an enabling tool can hinder the process of adoption of these technologies in real life, in spite of their labor-saving potential. In this demo we present ASK-the-Expert, an interactive tool that allows SMEs to review instances from a data set and provide labels within a single framework. ASK-the-Expert is powered by an active learning algorithm for training a classifier in the backend. We demonstrate this system in the context of an aviation safety application, but the tool can be adopted to work as a simple review and labeling tool as well, without the use of active learning.
Ask-the-Expert: Active Learning Based Knowledge Discovery Using the Expert
NASA Technical Reports Server (NTRS)
Das, Kamalika
2017-01-01
Often the manual review of large data sets, either for purposes of labeling unlabeled instances or for classifying meaningful results from uninteresting (but statistically significant) ones is extremely resource intensive, especially in terms of subject matter expert (SME) time. Use of active learning has been shown to diminish this review time significantly. However, since active learning is an iterative process of learning a classifier based on a small number of SME-provided labels at each iteration, the lack of an enabling tool can hinder the process of adoption of these technologies in real life, in spite of their labor-saving potential. In this demo we present ASK-the-Expert, an interactive tool that allows SMEs to review instances from a data set and provide labels within a single framework. ASK-the-Expert is powered by an active learning algorithm for training a classifier in the back end. We demonstrate this system in the context of an aviation safety application, but the tool can be adopted to work as a simple review and labeling tool as well, without the use of active learning.
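For illustration, here is a minimal uncertainty-sampling loop of the kind that typically powers such a tool: at each iteration the classifier asks the expert (the "oracle") to label the instance it is least sure about, then retrains. This is a generic scikit-learn sketch with illustrative names, not the actual ASK-the-Expert code.

```python
# Hedged sketch of pool-based active learning with uncertainty sampling.
# Assumes a binary problem and that both classes appear among the initial labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning(X, oracle, n_initial=5, n_queries=20, seed=0):
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X), size=n_initial, replace=False))
    y = {i: oracle(i) for i in labeled}              # SME-provided labels
    clf = LogisticRegression()
    for _ in range(n_queries):
        clf.fit(X[labeled], [y[i] for i in labeled])
        unlabeled = [i for i in range(len(X)) if i not in y]
        proba = clf.predict_proba(X[unlabeled])
        margins = np.abs(proba[:, 1] - 0.5)          # closeness to the decision boundary
        query = unlabeled[int(np.argmin(margins))]   # most uncertain instance
        y[query] = oracle(query)                     # ask the expert for a label
        labeled.append(query)
    return clf
```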
Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography
NASA Technical Reports Server (NTRS)
Xu, Feng; Deshpande, Manohar
2012-01-01
Low frequency electromagnetic tomography such as electrical capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivities of the two phases on those pixels that fall outside the reasonable permittivity range. This strategy not only stabilizes the convergence process but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
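The sketch below illustrates the general pattern described above: a Tikhonov-regularized linearized update followed by clipping out-of-range pixels back to the known phase permittivities. It is a generic sketch with placeholder forward/Jacobian callables, not the actual INTAC/FEM implementation.

```python
# Hedged sketch of a constrained, Tikhonov-regularized iterative update.
import numpy as np

def intac_like(eps0, c_meas, forward, jacobian, eps_lo, eps_hi, alpha=1e-2, n_iter=20):
    eps = eps0.copy()
    for _ in range(n_iter):
        J = jacobian(eps)                       # sensitivity of the capacitances
        r = c_meas - forward(eps)               # capacitance mismatch
        A = J.T @ J + alpha * np.eye(eps.size)  # Tikhonov-regularized normal equations
        eps = eps + np.linalg.solve(A, J.T @ r)
        # enforce the known two-phase permittivities on out-of-range pixels
        eps = np.where(eps < eps_lo, eps_lo, eps)
        eps = np.where(eps > eps_hi, eps_hi, eps)
    return eps
```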
Jini service to reconstruct tomographic data
NASA Astrophysics Data System (ADS)
Knoll, Peter; Mirzaei, S.; Koriska, K.; Koehn, H.
2002-06-01
A number of imaging systems rely on the reconstruction of a 3-dimensional model from its projections through the process of computed tomography (CT). In medical imaging, for example, magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) acquire two-dimensional projections of a three-dimensional object. In order to calculate the 3-dimensional representation of the object, i.e. its voxel distribution, several reconstruction algorithms have been developed. Currently, two classes of reconstruction methods are mainly in use: filtered back projection (FBP) and iterative methods. Although the quality of iteratively reconstructed SPECT slices is better than that of FBP slices, such iterative algorithms are rarely used for clinical routine studies because of their low availability and increased reconstruction time. We used Jini and a self-developed iterative reconstruction algorithm to design and implement a Jini reconstruction service. With this service, the physician selects the patient study from a database and a Jini client automatically discovers the registered Jini reconstruction services in the department's intranet. After downloading the proxy object of this Jini service, the SPECT acquisition data are reconstructed. The resulting transaxial slices are visualized using a Jini slice viewer, which can be used for various imaging modalities.
Development and Evaluation of an Intuitive Operations Planning Process
2006-03-01
designed to be iterative and also prescribes the way in which iterations should occur. On the other hand, participants' perceived level of trust and... [report outline fragment: 4. Design and Method of the Experimental Evaluation of the Intuitive Planning Process; 4.1.3 Design]
NASA Astrophysics Data System (ADS)
Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.
2018-02-01
In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior image quality to that of conventional filtered-backprojection (FBP)-based methods. However, they incur an enormous computational cost in the iterative process, which remains an obstacle to their practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme in an attempt to overcome these difficulties, in which the voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2 while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method seems to be effective for considerably reducing the computational cost of iterative DTS reconstruction while keeping the image quality inside the ROI largely preserved. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time compared to the case with no binning. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal-quality index (UQI).
NASA Astrophysics Data System (ADS)
Trujillo Bueno, Javier; Manso Sainz, Rafael
1999-05-01
This paper shows how to generalize to non-LTE polarization transfer some operator splitting methods that were originally developed for solving unpolarized transfer problems. These are the Jacobi-based accelerated Λ-iteration (ALI) method of Olson, Auer, & Buchler and the iterative schemes based on Gauss-Seidel and successive overrelaxation (SOR) iteration of Trujillo Bueno and Fabiani Bendicho. The theoretical framework chosen for the formulation of polarization transfer problems is the quantum electrodynamics (QED) theory of Landi Degl'Innocenti, which specifies the excitation state of the atoms in terms of the irreducible tensor components of the atomic density matrix. This first paper establishes the grounds of our numerical approach to non-LTE polarization transfer by concentrating on the standard case of scattering line polarization in a gas of two-level atoms, including the Hanle effect due to a weak microturbulent and isotropic magnetic field. We begin demonstrating that the well-known Λ-iteration method leads to the self-consistent solution of this type of problem if one initializes using the ``exact'' solution corresponding to the unpolarized case. We show then how the above-mentioned splitting methods can be easily derived from this simple Λ-iteration scheme. We show that our SOR method is 10 times faster than the Jacobi-based ALI method, while our implementation of the Gauss-Seidel method is 4 times faster. These iterative schemes lead to the self-consistent solution independently of the chosen initialization. The convergence rate of these iterative methods is very high; they do not require either the construction or the inversion of any matrix, and the computing time per iteration is similar to that of the Λ-iteration method.
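As a toy illustration of why the Gauss-Seidel and SOR sweeps converge faster than Jacobi, the sketch below applies the three update rules to a small linear system A x = b. The radiative-transfer operators above are of course far richer, but the relative convergence behaviour already shows up in this generic setting.

```python
# Hedged toy comparison of Jacobi, Gauss-Seidel, and SOR sweeps.
import numpy as np

def sweep(A, b, x, method="jacobi", omega=1.5):
    n = len(b)
    x_new = x.copy()
    for i in range(n):
        # Gauss-Seidel/SOR use the freshly updated entries in x_new;
        # Jacobi uses only the previous iterate x.
        ref = x_new if method in ("gauss-seidel", "sor") else x
        s = A[i, :i] @ ref[:i] + A[i, i + 1:] @ x[i + 1:]
        gs_value = (b[i] - s) / A[i, i]
        x_new[i] = x[i] + omega * (gs_value - x[i]) if method == "sor" else gs_value
    return x_new

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
for method in ("jacobi", "gauss-seidel", "sor"):
    x = np.zeros(3)
    for _ in range(25):
        x = sweep(A, b, x, method)
    print(method, x, "residual:", np.linalg.norm(A @ x - b))
```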
NASA Astrophysics Data System (ADS)
Sun, Shu-Ting; Li, Xiao-Dong; Zhong, Ren-Xin
2017-10-01
For nonlinear switched discrete-time systems with input constraints, this paper presents an open-closed-loop iterative learning control (ILC) approach, which includes a feedforward ILC part and a feedback control part. Under a given switching rule, mathematical induction is used to prove the convergence of the ILC tracking error in each subsystem. It is demonstrated that the convergence of the ILC tracking error depends on the feedforward control gain, but feedback control can speed up the convergence of the ILC through a suitable selection of the feedback control gain. A switched freeway traffic system is used to illustrate the effectiveness of the proposed ILC law.
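For readers unfamiliar with ILC, the sketch below shows the basic feedforward (P-type) learning update on a toy plant: after each trial the stored input sequence is corrected with the previous trial's tracking error. It is a minimal generic sketch, not the paper's open-closed-loop law for switched systems.

```python
# Hedged sketch of P-type iterative learning control on a toy first-order plant.
import numpy as np

def run_trial(u, plant, x0, y_ref):
    x, y = x0, np.zeros_like(y_ref)
    for t in range(len(y_ref)):
        x, y[t] = plant(x, u[t])
    return y_ref - y                     # tracking error over the trial

def ilc(plant, x0, y_ref, gain=0.5, n_trials=30):
    u = np.zeros_like(y_ref)             # feedforward input, refined trial by trial
    for k in range(n_trials):
        e = run_trial(u, plant, x0, y_ref)
        u = u + gain * e                 # P-type learning update
    return u

# Toy plant: x(t+1) = 0.9 x(t) + u(t), with output y(t) = x(t+1)
plant = lambda x, u: (0.9 * x + u, 0.9 * x + u)
y_ref = np.sin(np.linspace(0, 2 * np.pi, 50))
u = ilc(plant, 0.0, y_ref)
```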
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis technique. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process if it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to a convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance is used to illustrate the application of the original and modified tare load prediction method. During the analysis of the data both the iterative and the non-iterative analysis technique were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices
Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher
2015-01-01
We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations. PMID:25919667
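The closed-form two-matrix case mentioned above can be written as A #_{1/2} B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}; for more than two matrices only iterative schemes (or the AJD-based approximation studied in the paper) are available. Below is a small SciPy sketch of the two-matrix formula, illustrative only.

```python
# Hedged sketch: geometric mean of two SPD matrices and a sanity check G A^{-1} G = B.
import numpy as np
from scipy.linalg import sqrtm, inv

def geometric_mean_2(A, B):
    A_half = sqrtm(A)
    A_half_inv = inv(A_half)
    middle = sqrtm(A_half_inv @ B @ A_half_inv)
    return np.real_if_close(A_half @ middle @ A_half)

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, -0.2], [-0.2, 3.0]])
G = geometric_mean_2(A, B)
print(np.allclose(G @ inv(A) @ G, B))   # True: characterizing property of the geometric mean
```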
Finite-size effects and switching times for Moran process with mutation.
DeVille, Lee; Galiardi, Meghan
2017-04-01
We consider the Moran process with two populations competing under an iterated Prisoner's Dilemma in the presence of mutation, and concentrate on the case where there are multiple evolutionarily stable strategies. We perform a complete bifurcation analysis of the deterministic system which arises in the infinite-population-size limit. We also study the Master equation and obtain asymptotics for the invariant distribution and metastable switching times for the stochastic process in the case of a large but finite population. We also show that the stochastic system has asymmetries in the form of a skew for parameter values where the deterministic limit is symmetric.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhang, X.; Xiao, W.
2018-04-01
As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. Firstly, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and reduced to nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifting algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters and, compared with the two-step estimation method, compensates the geomagnetic sensor error well.
Interactive computer graphics system for structural sizing and analysis of aircraft structures
NASA Technical Reports Server (NTRS)
Bendavid, D.; Pipano, A.; Raibstein, A.; Somekh, E.
1975-01-01
A computerized system for preliminary sizing and analysis of aircraft wing and fuselage structures was described. The system is based upon repeated application of analytical program modules, which are interactively interfaced and sequence-controlled during the iterative design process with the aid of design-oriented graphics software modules. The entire process is initiated and controlled via low-cost interactive graphics terminals driven by a remote computer in a time-sharing mode.
A Calculus of Macro-Events: Progress Report
2000-01-01
... and process iteration. This proposal builds on work by Chittaro and Montanari [10] on modeling discrete processes. The set of constructors of the... situations; in many cases the occurrence of an event happens over a period of time [24]. Capturing this possibility enables finer models, as we can now...
Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan
2015-01-01
Kinetic analysis of biomolecular interactions is widely used to quantify the binding kinetic constants that determine how a complex forms or dissociates within a given time span. Surface plasmon resonance biosensors provide an essential approach to the analysis of biomolecular interactions, including antigen-antibody and receptor-ligand interaction processes. The binding affinity of the antibody to the antigen (or the receptor to the ligand) reflects the biological activities of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and the dissociation rate of the receptor to the ligand are substantial parameters for the study of signal transmission between cells. Experimental data may lead to complicated real-time curves that do not fit the kinetic model well. This paper presents an analysis approach for biomolecular interactions based on the Marquardt algorithm. The algorithm was implemented in the homemade bioanalyzer to perform the nonlinear curve-fitting of the association and dissociation processes of the receptor to the ligand. Compared with the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on the initial value, avoiding divergence, but also greatly reduces the number of iterative regressions. The association and dissociation rate constants, ka and kd, and the affinity parameters for the biomolecular interaction, KA and KD, were experimentally obtained as 6.969×10⁵ mL·g⁻¹·s⁻¹, 0.00073 s⁻¹, 9.5466×10⁸ mL·g⁻¹ and 1.0475×10⁻⁹ g·mL⁻¹, respectively, from the injection of the HBsAg solution with a concentration of 16 ng·mL⁻¹. The kinetic constants were evaluated distinctly using the data obtained from the curve-fitting results. PMID:26147997
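For illustration, the sketch below fits a simple 1:1 association-phase model R(t) = Rmax · C·ka/(C·ka + kd) · (1 - exp(-(C·ka + kd)·t)) to a synthetic sensorgram using SciPy's Levenberg-Marquardt backend. The model, numbers, and units are illustrative assumptions, not the authors' bioanalyzer code.

```python
# Hedged sketch: Levenberg-Marquardt fit of a 1:1 binding association curve.
import numpy as np
from scipy.optimize import curve_fit

C = 16e-9                                    # analyte concentration (illustrative units)

def association(t, ka, kd, rmax):
    kobs = ka * C + kd
    return rmax * (ka * C / kobs) * (1.0 - np.exp(-kobs * t))

t = np.linspace(0, 300, 200)                 # seconds
r_true = association(t, 7e5, 7e-4, 120.0)    # synthetic "true" response
r_meas = r_true + np.random.default_rng(1).normal(0, 0.5, t.size)

p0 = [1e5, 1e-3, 100.0]                      # initial guesses for ka, kd, Rmax
params, cov = curve_fit(association, t, r_meas, p0=p0, method="lm")
ka, kd, rmax = params
print(f"ka={ka:.3g}, kd={kd:.3g}, KD={kd/ka:.3g}")
```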
Not so Complex: Iteration in the Complex Plane
ERIC Educational Resources Information Center
O'Dell, Robin S.
2014-01-01
The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…
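The kind of linear iteration rule discussed in the article can be sketched in a few lines: with z_{n+1} = a·z_n + b and |a| < 1, the orbit spirals in toward the fixed point b/(1-a), and plotting the points in the complex plane traces the designs the students construct. The constants below are illustrative.

```python
# Hedged sketch of a linear complex iteration z_{n+1} = a*z_n + b.
import cmath

a = 0.9 * cmath.exp(1j * cmath.pi / 6)   # rotate by 30 degrees and shrink by 10%
b = 1.0 + 0.5j
z, orbit = 0 + 0j, []
for _ in range(60):
    z = a * z + b
    orbit.append(z)
print("fixed point:", b / (1 - a))
print("last point :", orbit[-1])          # approaches the fixed point
```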
Developing Conceptual Understanding and Procedural Skill in Mathematics: An Iterative Process.
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Siegler, Robert S.; Alibali, Martha Wagner
2001-01-01
Proposes that conceptual and procedural knowledge develop in an iterative fashion and improved problem representation is one mechanism underlying the relations between them. Two experiments were conducted with 5th and 6th grade students learning about decimal fractions. Results indicate conceptual and procedural knowledge do develop, iteratively,…
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for interactive analysis of patent information has been developed addressing scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.
Real-Time Compressive Sensing MRI Reconstruction Using GPU Computing and Split Bregman Methods
Smith, David S.; Gore, John C.; Yankeelov, Thomas E.; Welch, E. Brian
2012-01-01
Compressive sensing (CS) has been shown to enable dramatic acceleration of MRI acquisition in some applications. Being an iterative reconstruction technique, CS MRI reconstructions can be more time-consuming than traditional inverse Fourier reconstruction. We have accelerated our CS MRI reconstruction by factors of up to 27 by using a split Bregman solver combined with a graphics processing unit (GPU) computing platform. The increases in speed we find are similar to those we measure for matrix multiplication on this platform, suggesting that the split Bregman methods parallelize efficiently. We demonstrate that the combination of the rapid convergence of the split Bregman algorithm and the massively parallel strategy of GPU computing can enable real-time CS reconstruction of even acquisition data matrices of dimension 4096² or more, depending on available GPU VRAM. Reconstruction of two-dimensional data matrices of dimension 1024² and smaller took ~0.3 s or less, showing that this platform also provides very fast iterative reconstruction for small-to-moderate size images. PMID:22481908
TRUST84. Sat-Unsat Flow in Deformable Media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narasimhan, T.N.
1984-11-01
TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.
So, Noel F; Rubin, Devon I; Jones, Lyell K; Litchy, William J; Sorenson, Eric J
2013-12-01
Repetitive discharges may be recorded during nerve conduction studies (NCS) or during needle electromyography in a muscle at rest. Repetitive discharges that occur during voluntary activation and are time-locked to voluntary motor unit potentials (MUP) have not been described. Retrospective review of motor unit potential induced repetitive discharges (MIRDs) identified in the EMG laboratory. Characteristics of each MIRD, patient demographics, other EMG findings in the same muscle, and electrophysiological diagnosis were analyzed. MIRDs were observed in 15 patients. The morphology and number of spikes and duration of MIRDs varied. The discharges fired at rates of 50-200 Hz. All but 2 patients had EMG findings of a chronic neurogenic disorder. MIRDs are rare iterative discharges time-locked to a voluntary MUP. The pathophysiology of MIRDs is unclear, but their presence may indicate a chronic neurogenic process. Copyright © 2013 Wiley Periodicals, Inc.
From Amorphous to Defined: Balancing the Risks of Spiral Development
2007-04-30
[Figure residue: time-series plot of "Work started and active" work packages by phase (Requirements, Technology, Design, Manufacturing, Use) for the JavelinCalibration model, Iteration 1, over time in weeks.]
Simulating transient dynamics of the time-dependent time fractional Fokker-Planck systems
NASA Astrophysics Data System (ADS)
Kang, Yan-Mei
2016-09-01
For a physically realistic type of time-dependent time fractional Fokker-Planck (FP) equation, derived as the continuous limit of the continuous time random walk with time-modulated Boltzmann jumping weight, a semi-analytic iteration scheme based on the truncated (generalized) Fourier series is presented to simulate the resultant transient dynamics when the external time modulation is a piece-wise constant signal. First, the iteration scheme is demonstrated with a simple time-dependent time fractional FP equation on a finite interval with two absorbing boundaries, and then it is generalized to the more general time-dependent Smoluchowski-type time fractional Fokker-Planck equation. The numerical examples verify the efficiency and accuracy of the iteration method, and some novel dynamical phenomena, including polarized motion orientations and periodic response death, are discussed.
Discrete-Time Deterministic $Q$ -Learning: A Novel Convergence Analysis.
Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo
2017-05-01
In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all the state and control spaces, instead of being updated for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, where the convergence criterion of the learning rates for traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are first developed. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, for facilitating the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
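As a rough illustration of the key idea (updating the iterative Q function over the whole state and control space in each iteration rather than along a single trajectory), here is a tabular sketch for a small deterministic system; the dynamics, cost, and sizes are invented for the example and are not from the paper.

```python
# Deterministic Q-learning sweep: every (state, control) pair is updated in
# each iteration, as opposed to single-sample updates along a trajectory.
import numpy as np

n_states, n_controls, gamma = 10, 3, 0.95

def f(x, u):                      # illustrative deterministic dynamics
    return (x + u - 1) % n_states

def cost(x, u):                   # illustrative stage cost
    return (x - 5) ** 2 + u

Q = np.zeros((n_states, n_controls))
for _ in range(200):              # iterate until the Q function converges
    Q_next = np.empty_like(Q)
    for x in range(n_states):
        for u in range(n_controls):
            Q_next[x, u] = cost(x, u) + gamma * Q[f(x, u)].min()
    Q = Q_next

policy = Q.argmin(axis=1)         # iterative control law derived from Q
```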
Stimulating Students' Use of External Representations for a Distance Education Time Machine Design
ERIC Educational Resources Information Center
Baaki, John; Luo, Tian
2017-01-01
As faculty members in an instructional design and technology (IDT) program, we wanted to help our graduate students better understand and experience how designers design in the real world. We aimed to design a reflective and collaborative learning environment where we sparked students to engage in reflection, ideation, and the iterative process of…
Liu, Wanli
2017-03-08
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated.
Picard Trajectory Approximation Iteration for Efficient Orbit Propagation
2015-07-21
…Eqn (8)) for an iterative approximation of eccentric anomaly, and is transformed back to … . The Lambert/Kepler time–eccentric anomaly relationship … of Kepler motion based on spinor regularization, Journal für die reine und angewandte Mathematik 218, 204-219, 1965. [3] Levi-Civita T., Sur la … transformed back to x, y, z. The Lambert/Kepler time–eccentric anomaly relationship is iterated by a Newton/Secant method to converge on the …
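The snippet above refers to iterating the Kepler time–eccentric anomaly relationship with a Newton/secant method. A minimal, self-contained sketch of that inner iteration, solving Kepler's equation M = E − e·sin E for the eccentric anomaly E, is given below; it is a generic textbook illustration, not the report's implementation.

```python
# Newton iteration for Kepler's equation M = E - e*sin(E), solving for the
# eccentric anomaly E given the mean anomaly M and eccentricity e.
import math

def eccentric_anomaly(M, e, tol=1e-12, max_iter=50):
    E = M if e < 0.8 else math.pi        # common starting guess
    for _ in range(max_iter):
        g = E - e * math.sin(E) - M      # residual of Kepler's equation
        dg = 1.0 - e * math.cos(E)       # derivative with respect to E
        dE = g / dg
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = eccentric_anomaly(M=1.2, e=0.3)
```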
Computational simulation of laser heat processing of materials
NASA Astrophysics Data System (ADS)
Shankar, Vijaya; Gnanamuthu, Daniel
1987-04-01
A computational model simulating the laser heat treatment of AISI 4140 steel plates with a CW CO2 laser beam has been developed on the basis of the three-dimensional, time-dependent heat equation (subject to the appropriate boundary conditions). The solution method is based on Newton iteration applied to a triple-approximate factorized form of the equation. The method is implicit and time-accurate; the maintenance of time-accuracy in the numerical formulation is noted to be critical for the simulation of finite length workpieces with a finite laser beam dwell time.
System matrix computation vs storage on GPU: A comparative study in cone beam CT.
Matenine, Dmitri; Côté, Geoffroi; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe
2018-02-01
Iterative reconstruction algorithms in computed tomography (CT) require a fast method for computing the intersection distances between the trajectories of photons and the object, also called ray tracing or system matrix computation. This work, focused on the thin-ray model, is aimed at comparing different system matrix handling strategies using graphical processing units (GPUs). In this work, the system matrix is modeled by thin rays intersecting a regular grid of box-shaped voxels, known to be an accurate representation of the forward projection operator in CT. However, an uncompressed system matrix exceeds the random access memory (RAM) capacities of typical computers by one order of magnitude or more. Considering the RAM limitations of GPU hardware, several system matrix handling methods were compared: full storage of a compressed system matrix, on-the-fly computation of its coefficients, and partial storage of the system matrix with partial on-the-fly computation. These methods were tested on geometries mimicking a cone beam CT (CBCT) acquisition of a human head. Execution times of three routines of interest were compared: forward projection, backprojection, and ordered-subsets convex (OSC) iteration. A fully stored system matrix yielded the shortest backprojection and OSC iteration times, with a 1.52× acceleration for OSC when compared to the on-the-fly approach. Nevertheless, the maximum problem size was bound by the available GPU RAM and geometrical symmetries. On-the-fly coefficient computation did not require symmetries and was shown to be the fastest for forward projection. It also offered reasonable execution times of about 176.4 ms per view per OSC iteration for a detector of 512 × 448 pixels and a volume of 384³ voxels, using commodity GPU hardware. Partial system matrix storage has shown a performance similar to the on-the-fly approach, while still relying on symmetries. Partial system matrix storage was shown to yield the lowest relative performance. On-the-fly ray tracing was shown to be the most flexible method, yielding reasonable execution times. A fully stored system matrix allowed for the lowest backprojection and OSC iteration times and may be of interest for certain performance-oriented applications. © 2017 American Association of Physicists in Medicine.
Status of the ITER Cryodistribution
NASA Astrophysics Data System (ADS)
Chang, H.-S.; Vaghela, H.; Patel, P.; Rizzato, A.; Cursan, M.; Henry, D.; Forgeas, A.; Grillot, D.; Sarkar, B.; Muralidhara, S.; Das, J.; Shukla, V.; Adler, E.
2017-12-01
Since the conceptual design of the ITER Cryodistribution, many modifications have been applied due to both system optimization and improved knowledge of the clients’ requirements. Process optimizations in the Cryoplant resulted in component simplifications, whereas increased heat load in some of the superconducting magnet systems required a more complicated process configuration; in addition, the removal of one cold box was made possible by component arrangement standardization. Another cold box, planned for redundancy, has been removed due to the Tokamak in-Cryostat piping layout modification. In this proceeding we summarize the present design status and component configuration of the ITER Cryodistribution, with all changes implemented aiming at process optimization and simplification as well as operational reliability, stability and flexibility.
Real-time blind image deconvolution based on coordinated framework of FPGA and DSP
NASA Astrophysics Data System (ADS)
Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun
2015-10-01
Image restoration plays a crucial role in several important application domains. With increasing computational requirements as the algorithms become more complex, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for a blind iterative deconvolution method based on the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascades and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparison experiments are carried out. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
Efficient solution of the simplified P N equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Moreover, power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
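For context, the baseline the authors compare against is ordinary power iteration for the dominant eigenpair. A generic sketch of that baseline (not the SPN solver itself, and with an invented test matrix) follows.

```python
# Plain power iteration for the dominant eigenpair of a matrix A; the SPN
# eigenvalue solvers discussed above are accelerated alternatives to this.
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=10000):
    x = np.random.rand(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)      # estimate of the dominant eigenvalue magnitude
        x = y / lam_new                  # normalized eigenvector estimate
        if abs(lam_new - lam) < tol * abs(lam_new):
            break
        lam = lam_new
    return lam, x

A = np.array([[4.0, 1.0], [2.0, 3.0]])
dominant_value, dominant_vector = power_iteration(A)
```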
WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Pitsianis, N
2015-06-15
Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub-kernels. Conclusion: Composable projection operators constitute a versatile research tool which can greatly accelerate iterative registration algorithms and may be conducive to the clinical applicability of LIVE. National Institutes of Health Grant No. R01-CA184173; GPU donation by NVIDIA Corporation.
NASA Astrophysics Data System (ADS)
Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao
2018-06-01
In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.
Modifications to the Conduit Flow Process Mode 2 for MODFLOW-2005
Reimann, T.; Birk, S.; Rehrl, C.; Shoemaker, W.B.
2012-01-01
As a result of rock dissolution processes, karst aquifers exhibit highly conductive features such as caves and conduits. Within these structures, groundwater flow can become turbulent and therefore be described by nonlinear gradient functions. Some numerical groundwater flow models explicitly account for pipe hydraulics by coupling the continuum model with a pipe network that represents the conduit system. In contrast, the Conduit Flow Process Mode 2 (CFPM2) for MODFLOW-2005 approximates turbulent flow by reducing the hydraulic conductivity within the existing linear head gradient of the MODFLOW continuum model. This approach reduces the practical as well as numerical efforts for simulating turbulence. The original formulation was for large pore aquifers where the onset of turbulence is at low Reynolds numbers (1 to 100) and not for conduits or pipes. In addition, the existing code requires multiple time steps for convergence due to iterative adjustment of the hydraulic conductivity. Modifications to the existing CFPM2 were made by implementing a generalized power function with a user-defined exponent. This allows for matching turbulence in porous media or pipes and eliminates the time steps required for iterative adjustment of hydraulic conductivity. The modified CFPM2 successfully replicated simple benchmark test problems. © 2011 The Author(s). Ground Water © 2011, National Ground Water Association.
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure of discrete-time iterative adaptive dynamic programming algorithms, by which most of the discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors have been explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
Exploiting parallel computing with limited program changes using a network of microcomputers
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.
1985-01-01
Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
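For orientation, the two algorithms named above share the same multiplicative expectation-maximization update; an ordered-subsets variant simply sweeps over subsets of the projection rows. The toy dense-matrix sketch below illustrates that update and is not the authors' CT implementation; the system matrix and data are synthetic.

```python
# ML-EM / OS-EM update for an emission-style model y ~ Poisson(A x):
#   x <- x / (A_s^T 1) * A_s^T ( y_s / (A_s x) )
# where s ranges over ordered subsets of the projection rows.
import numpy as np

def os_em(A, y, n_subsets=4, n_iters=20, eps=1e-12):
    n_rows, n_cols = A.shape
    x = np.ones(n_cols)
    subsets = np.array_split(np.arange(n_rows), n_subsets)
    for _ in range(n_iters):
        for rows in subsets:                    # one sub-iteration per subset
            As, ys = A[rows], y[rows]
            ratio = ys / (As @ x + eps)         # measured / estimated projections
            x *= (As.T @ ratio) / (As.T @ np.ones(len(rows)) + eps)
    return x

# Tiny synthetic example (random system matrix, noise-free data).
rng = np.random.default_rng(0)
A = rng.random((40, 16))
x_true = rng.random(16)
x_rec = os_em(A, A @ x_true)
```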
Iterative approach as alternative to S-matrix in modal methods
NASA Astrophysics Data System (ADS)
Semenikhin, Igor; Zanuccoli, Mauro
2014-12-01
The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, the iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are computed as a rule by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M³ operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M³-order calculations on the overall time and, in some cases, even reducing the number of arithmetic operations to M² by applying iterative techniques is discussed. Numerical results are illustrated to discuss the validity and potential of the proposed approaches.
Terminal iterative learning control based station stop control of a train
NASA Astrophysics Data System (ADS)
Hou, Zhongsheng; Wang, Yi; Yin, Chenkun; Tang, Tao
2011-07-01
The terminal iterative learning control (TILC) method is introduced for the first time into the field of train station stop control, and three TILC-based algorithms are proposed in this study. The TILC-based train station stop control approach utilises the terminal stop position error from the previous braking process to update the current control profile. The initial braking position, or the braking force, or their combination is chosen as the control input, and a corresponding learning law is developed. The terminal stop position error of each algorithm is rigorously shown to converge to a small region related to the initial offset of the braking position. The validity of the proposed algorithms is verified by illustrative numerical examples.
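A minimal sketch of the first idea (updating the initial braking position from the terminal stop-position error of the previous run) is shown below. The braking model, learning gain, and numbers are assumptions made up for the example, not the paper's plant or algorithm.

```python
# Terminal ILC on the braking-onset position: after each run, the terminal
# stop-position error updates the braking position for the next run:
#   x_brake[k+1] = x_brake[k] + L * (target - stop_position[k]).
def stop_position(x_brake, v0=20.0, decel=1.0):
    # Illustrative plant: constant deceleration from speed v0 once braking starts.
    return x_brake + v0 ** 2 / (2.0 * decel)

target, L = 500.0, 0.8          # desired stop point and learning gain
x_brake = 250.0                 # initial guess of the braking-onset position
for k in range(10):
    e = target - stop_position(x_brake)   # terminal error of this run
    x_brake += L * e                      # terminal ILC update
    print(k, round(e, 3))                 # error shrinks geometrically
```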
A Modularized Efficient Framework for Non-Markov Time Series Estimation
NASA Astrophysics Data System (ADS)
Schamberg, Gabriel; Ba, Demba; Coleman, Todd P.
2018-06-01
We present a compartmentalized approach to finding the maximum a-posteriori (MAP) estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g. group sparsity) and/or non-Gaussian measurement models (e.g. point process observation models used in neuroscience). Through the use of auxiliary variables in the MAP estimation problem, we show that a consensus formulation of the alternating direction method of multipliers (ADMM) enables iteratively computing separate estimates based on the likelihood and prior and subsequently "averaging" them in an appropriate sense using a Kalman smoother. As such, this can be applied to a broad class of problem settings and only requires modular adjustments when interchanging various aspects of the statistical model. Under broad log-concavity assumptions, we show that the separate estimation problems are convex optimization problems and that the iterative algorithm converges to the MAP estimate. As such, this framework can capture non-Markov latent time series models and non-Gaussian measurement models. We provide example applications involving (i) group-sparsity priors, within the context of electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian measurement models, within the context of dynamic analyses of learning with neural spiking and behavioral observations.
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1991-01-01
Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
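As a rough illustration of the inspector step described above (identifying wavefronts of concurrently executable iterations from cross-iteration dependences), here is a sketch under the assumption that the dependences have already been extracted at run time; the dependence data are invented for the example.

```python
# Inspector-style wavefront computation: given, for each loop iteration, the
# set of earlier iterations it depends on, assign each iteration a wavefront
# level; all iterations in the same wavefront can execute concurrently.
def wavefronts(deps):
    level = {}
    for i in sorted(deps):                         # dependences point to earlier iterations
        level[i] = 1 + max((level[j] for j in deps[i]), default=-1)
    fronts = {}
    for i, l in level.items():
        fronts.setdefault(l, []).append(i)
    return [fronts[l] for l in sorted(fronts)]

# Example: iteration 2 depends on 0, iteration 3 depends on 1 and 2, etc.
deps = {0: [], 1: [], 2: [0], 3: [1, 2], 4: [0]}
print(wavefronts(deps))   # [[0, 1], [2, 4], [3]]
```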
Intelligent process mapping through systematic improvement of heuristics
NASA Technical Reports Server (NTRS)
Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.
1992-01-01
The present system for automatic learning/evaluation of novel heuristic methods applicable to the mapping of communication-process sets on a computer network has its basis in the testing of a population of competing heuristic methods within a fixed time-constraint. The TEACHER 4.1 prototype learning system, implemented for learning new postgame analysis heuristic methods, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.
P80 SRM low torque flex-seal development - thermal and chemical modeling of molding process
NASA Astrophysics Data System (ADS)
Descamps, C.; Gautronneau, E.; Rousseau, G.; Daurat, M.
2009-09-01
The development of the flex-seal component of the P80 nozzle gave the opportunity to set up new design and manufacturing process methods. Due to the short development lead time required by the VEGA program, the usual iterative manufacturing-test workflow, which is time consuming, had to be enhanced in order to use a more predictive approach. A newly refined rubber vulcanization description was built up and identified on laboratory samples. This chemical model was implemented in a thermal analysis code. The complete model successfully supports the manufacturing processes. These activities were conducted with the support of ESA/CNES Research & Technologies and DGA (General Delegation for Armament).
NASA Technical Reports Server (NTRS)
Desideri, J. A.; Steger, J. L.; Tannehill, J. C.
1978-01-01
The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps could be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
Simultaneous and iterative weighted regression analysis of toxicity tests using a microplate reader.
Galgani, F; Cadiou, Y; Gilbert, F
1992-04-01
A system is described for the determination of LC50 or IC50 by an iterative process based on data obtained from a plate reader using a marine unicellular alga as a target species. The esterase activity of Tetraselmis suecica on fluorescein diacetate as a substrate was measured using a fluorescence titerplate. Simultaneous analysis of the results was performed using an iterative process adopting the sigmoid function Y = y / (1 + (dose of toxicant/IC50)^slope) for dose-response relationships. IC50 (+/- SEM) was estimated (P less than 0.05). An application with phosalone as a toxicant is presented.
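A hedged sketch of fitting the dose-response sigmoid Y = y₀ / (1 + (dose/IC50)^slope) by iterative nonlinear least squares is shown below; it uses SciPy's generic curve_fit rather than the authors' plate-reader software, and the dose and response values are invented.

```python
# Iterative nonlinear least-squares fit of the dose-response sigmoid
#   Y = y0 / (1 + (dose / IC50) ** slope)
# to toxicity-test readings, yielding an IC50 estimate with a standard error.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(dose, y0, ic50, slope):
    return y0 / (1.0 + (dose / ic50) ** slope)

dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])       # invented doses
resp = np.array([98.0, 95.0, 80.0, 45.0, 15.0, 5.0])    # invented responses

popt, pcov = curve_fit(sigmoid, dose, resp, p0=[100.0, 2.0, 1.0])
y0, ic50, slope = popt
ic50_sem = np.sqrt(np.diag(pcov))[1]    # approximate standard error of IC50
```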
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cristescu, I.; Cristescu, I. R.; Doerr, L.
2008-07-15
The ITER Isotope Separation System (ISS) and Water Detritiation System (WDS) should be integrated in order to reduce potential chronic tritium emissions from the ISS. This is achieved by routing the top (protium) product from the ISS to a feed point near the bottom end of the WDS Liquid Phase Catalytic Exchange (LPCE) column. This provides an additional barrier against ISS emissions and should mitigate the memory effects due to process parameter fluctuations in the ISS. To support the research activities needed to characterize the performances of various components for the WDS and ISS processes under various working conditions and configurations as needed for ITER design, an experimental facility called TRENTA, representative of the ITER WDS and ISS protium separation column, has been commissioned and is in operation at TLK. The experimental program on the TRENTA facility is conducted to provide the necessary design data related to the relevant ITER operating modes. The operational availability and performance of the ISS-WDS have an impact on ITER fuel cycle subsystems, with consequences on the design integration. Preliminary experimental data from the TRENTA facility are presented. (authors)
A protection system for the JET ITER-like wall based on imaging diagnostics.
Arnoux, G; Devaux, S; Alves, D; Balboa, I; Balorin, C; Balshaw, N; Beldishevski, M; Carvalho, P; Clever, M; Cramp, S; de Pablos, J-L; de la Cal, E; Falie, D; Garcia-Sanchez, P; Felton, R; Gervaise, V; Goodyear, A; Horton, A; Jachmich, S; Huber, A; Jouve, M; Kinna, D; Kruezi, U; Manzanares, A; Martin, V; McCullen, P; Moncada, V; Obrejan, K; Patel, K; Lomas, P J; Neto, A; Rimini, F; Ruset, C; Schweer, B; Sergienko, G; Sieglin, B; Soleto, A; Stamp, M; Stephen, A; Thomas, P D; Valcárcel, D F; Williams, J; Wilson, J; Zastrow, K-D
2012-10-01
The new JET ITER-like wall (made of beryllium and tungsten) is more fragile than the former carbon fiber composite wall and requires active protection to prevent excessive heat loads on the plasma facing components (PFC). Analog CCD cameras operating in the near infrared wavelength are used to measure surface temperature of the PFCs. Region of interest (ROI) analysis is performed in real time and the maximum temperature measured in each ROI is sent to the vessel thermal map. The protection of the ITER-like wall system started in October 2011 and has already successfully led to a safe landing of the plasma when hot spots were observed on the Be main chamber PFCs. Divertor protection is more of a challenge due to dust deposits that often generate false hot spots. In this contribution we describe the camera, data capture and real time processing systems. We discuss the calibration strategy for the temperature measurements with cross validation with thermal IR cameras and bi-color pyrometers. Most importantly, we demonstrate that a protection system based on CCD cameras can work and show examples of hot spot detections that stop the plasma pulse. The limits of such a design and the associated constraints on the operations are also presented.
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however is that strict adherence to correction limits of convergent algorithms extends the number of iterations and ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream of commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
Iterative-Transform Phase Retrieval Using Adaptive Diversity
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the amount of diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero during the recovery process. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map.
However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate resembles that of a reference flat.
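For readers new to iterative-transform phase retrieval, the inner-loop building block is a Gerchberg-Saxton-style transform cycle between the pupil and image planes. The sketch below shows that basic cycle for a single image, without the defocus diversity or adaptive updates described above; the pupil, test aberration, and sizes are invented for the example.

```python
# Basic iterative-transform (Gerchberg-Saxton) phase retrieval: alternate
# between the pupil plane and the image plane, enforcing the known pupil
# amplitude and the measured image amplitude on each pass.
import numpy as np

def iterative_transform(pupil_amp, image_amp, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)   # initial phase estimate
    for _ in range(n_iter):
        pupil = pupil_amp * np.exp(1j * phase)
        field = np.fft.fft2(pupil)                         # propagate to image plane
        field = image_amp * np.exp(1j * np.angle(field))   # impose measured amplitude
        pupil = np.fft.ifft2(field)                        # propagate back
        phase = np.angle(pupil)                            # keep phase, reset amplitude
    return phase

# Toy example: circular pupil and an image amplitude from a known test phase.
n = 64
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
pupil_amp = (x**2 + y**2 <= 1.0).astype(float)
true_phase = 0.5 * x                                       # small tilt aberration
image_amp = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * true_phase)))
estimate = iterative_transform(pupil_amp, image_amp)
```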
Adaptive segmentation of cerebrovascular tree in time-of-flight magnetic resonance angiography.
Hao, J T; Li, M L; Tang, F L
2008-01-01
Accurate segmentation of the human vasculature is an important prerequisite for a number of clinical procedures, such as diagnosis, image-guided neurosurgery and pre-surgical planning. In this paper, an improved statistical approach to extracting the whole cerebrovascular tree in time-of-flight magnetic resonance angiography is proposed. Firstly, in order to get a more accurate segmentation result, a localized observation model is proposed instead of defining the observation model over the entire dataset. Secondly, for the binary segmentation, an improved Iterative Conditional Model (ICM) algorithm is presented to accelerate the segmentation process. The experimental results showed that the proposed algorithm can simultaneously obtain more satisfactory segmentation results and save more processing time than conventional approaches.
US NDC Modernization Iteration E1 Prototyping Report: Processing Control Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prescott, Ryan; Hamlet, Benjamin R.
2014-12-01
During the first iteration of the US NDC Modernization Elaboration phase (E1), the SNL US NDC modernization project team developed an initial survey of applicable COTS solutions, and established exploratory prototyping related to the processing control framework in support of system architecture definition. This report summarizes these activities and discusses planned follow-on work.
ERIC Educational Resources Information Center
Apter, Brian
2014-01-01
An organisational change-process in a UK local authority (LA) over two years is examined using transcribed excerpts from three meetings. The change-process is analysed using a Foucauldian analytical tool--Iterative Learning Conversations (ILCS). An Educational Psychology Service was changed from being primarily an education-focussed…
Hsieh, Chi-Hsuan; Chiu, Yu-Fang; Shen, Yi-Hsiang; Chu, Ta-Shun; Huang, Yuan-Hao
2016-02-01
This paper presents an ultra-wideband (UWB) impulse-radio radar signal processing platform used to analyze human respiratory features. Conventional radar systems used in human detection only analyze human respiration rates or the response of a target. However, additional respiratory signal information is available that has not been explored using radar detection. The authors previously proposed a modified raised cosine waveform (MRCW) respiration model and an iterative correlation search algorithm that could acquire additional respiratory features such as the inspiration and expiration speeds, respiration intensity, and respiration holding ratio. To realize real-time respiratory feature extraction by using the proposed UWB signal processing platform, this paper proposes a new four-segment linear waveform (FSLW) respiration model. This model offers a superior fit to the measured respiration signal compared with the MRCW model and decreases the computational complexity of feature extraction. In addition, an early-terminated iterative correlation search algorithm is presented, substantially decreasing the computational complexity and yielding negligible performance degradation. These extracted features can be considered the compressed signals used to decrease the amount of data storage required for use in long-term medical monitoring systems and can also be used in clinical diagnosis. The proposed respiratory feature extraction algorithm was designed and implemented using the proposed UWB radar signal processing platform including a radar front-end chip and an FPGA chip. The proposed radar system can detect human respiration rates at 0.1 to 1 Hz and facilitates the real-time analysis of the respiratory features of each respiration period.
de Kroon, Marlou L A; Bulthuis, Jozien; Mulder, Wico; Schaafsma, Frederieke G; Anema, Johannes R
2016-12-01
Since the extent of sick leave and the problems of vocational school students are relatively large, we aimed to tailor a sick leave protocol at Dutch lower secondary education schools to the particular context of vocational schools. Four steps of the iterative process of Intervention Mapping (IM) to adapt this protocol were carried out: (1) performing a needs assessment and defining a program objective, (2) determining the performance and change objectives, (3) identifying theory-based methods and practical strategies and (4) developing a program plan. Interviews with students using structured questionnaires, in-depth interviews with relevant stakeholders, a literature research and, finally, a pilot implementation were carried out. A sick leave protocol was developed that was feasible and acceptable for all stakeholders. The main barriers for widespread implementation are time constraints in both monitoring and acting upon sick leave by school and youth health care. The iterative process of IM has shown its merits in the adaptation of the manual 'A quick return to school is much better' to a sick leave protocol for vocational school students.
Addressing the Barriers to Agile Development in DoD
2015-05-01
[Briefing-slide residue: lists Agile acquisition principles (small, frequent releases; iterative development; review of working software rather than extensive documentation; responsiveness to change), the JCIDS IT Box model as a streamlined requirements process for software programs over $15M approved by the JROC via an IS-ICD, and contracting approaches (FAR Part 37 services; paying for the time and expertise of an Agile development contractor versus contracting for a defined software delivery).]
Ocean Variability Effects on Underwater Acoustic Communications
2011-09-01
[Report-snippet residue: compares OFDM schemes with multiband MIMO transmission combined with time-reversal processing for accessing wide frequency bands; multiple-input/multiple-output (MIMO) systems integrate decision-feedback equalization and interference-cancellation schemes; the MIMO receiver also iterates channel estimation and symbol demodulation.]
Minimizing inner product data dependencies in conjugate gradient iteration
NASA Technical Reports Server (NTRS)
Vanrosendale, J.
1983-01-01
The amount of concurrency available in conjugate gradient iteration is limited by the summations required in the inner product computations. The inner product of two vectors of length N requires time c·log(N), if N or more processors are available. This paper describes an algebraic restructuring of the conjugate gradient algorithm which minimizes data dependencies due to inner product calculations. After an initial start-up, the new algorithm can perform a conjugate gradient iteration in time c·log(log(N)).
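For reference, the two inner products per iteration that create the data dependencies discussed above are visible in the standard conjugate gradient loop. The textbook sketch below shows the ordinary algorithm (not the restructured one); the test matrix is invented.

```python
# Standard conjugate gradient for A x = b (A symmetric positive definite).
# Each iteration contains two inner products (r.r and p.Ap), whose global
# summations are the synchronization points the restructuring targets.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r                          # inner product #1
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)           # inner product #2
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```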
Adaptive management: Chapter 1
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh
2017-04-01
The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
NASA Astrophysics Data System (ADS)
Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula of the BI-MART with the scaling parameter as a time-step of numerical discretization. The present paper is the first to reveal that a kind of iterative image reconstruction algorithm is constructed by the discretization of a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms with not only the Euler method but also the Runge-Kutta methods of lower-orders applied for discretizing the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
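For readers unfamiliar with the discrete algorithm that the continuous-time system above reduces to, here is a minimal block-iterative MART sketch in Python. It is an illustration of the generic multiplicative update (with the scaling parameter playing the role of the discretization time step), not the authors' exact formulation; the system matrix A, data b, and block partition are placeholders.

```python
import numpy as np

def bi_mart(A, b, blocks, n_iter=50, lam=1.0, x0=None):
    """Minimal block-iterative MART sketch (illustrative only).

    A      : (m, n) nonnegative system matrix
    b      : (m,) measured projections
    blocks : list of row-index arrays partitioning the rows of A
    lam    : scaling parameter, playing the role of the time step
    """
    m, n = A.shape
    x = np.ones(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        for rows in blocks:
            Ax = A[rows] @ x
            ratio = np.where(Ax > 0, b[rows] / Ax, 1.0)
            # multiplicative update: x_j <- x_j * prod_i (b_i / (Ax)_i)^(lam * a_ij)
            x *= np.exp(lam * (A[rows].T @ np.log(ratio)))
    return x
```

With a single block containing all rows this reduces to plain MART; smaller values of lam correspond to a finer time step of the Euler discretization discussed in the abstract.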
NASA Astrophysics Data System (ADS)
Joo, Taiha
Ultrafast molecular processes in the condensed phase at room temperature are studied in the time domain by four-wave mixing spectroscopy. The structure/dynamics of various quantum states can be studied by varying the time ordering of the incident fields, their polarization, their colors, etc. In one experiment, time-resolved coherent Stokes Raman spectroscopy of benzene is investigated at room temperature. The reorientational correlation time of benzene as well as the T₂ time of the ν₁ ring-breathing mode have been measured by using two different polarization geometries. Bohr frequency difference beats have also been resolved between the ν₁ modes of ¹²C₆H₆ and ¹²C₅¹³CH₆. The dephasing dynamics of the ν₁ ring-breathing mode of neat benzene is studied by time-resolved coherent anti-Stokes Raman scattering. Ultrafast time resolution reveals deviation from the conventional exponential decay. The correlation time, τc, and the rms magnitude, Δ, of the Bohr frequency modulation are determined for the process responsible for the vibrational dephasing by Kubo dephasing function analysis. The electronic dephasing of two oxazine dyes in ethylene glycol at room temperature is investigated by photon echo experiments. It was found that at least two stochastic processes are responsible for the observed electronic dephasing. Both fast (homogeneous) and slow (inhomogeneous) dynamics are recovered using Kubo line shape analysis. Moreover, the slow dynamics is found to spectrally diffuse over the inhomogeneous distribution on a time scale of around a picosecond. The time-resolved degenerate four-wave mixing signal of dyes in a population measurement geometry is reported. The vibrational coherences in both the ground and excited electronic states produced strong oscillations in the signal together with the usual population decay from the excited electronic state. Absolute frequencies and dephasing times of the vibrational modes at ~590 cm⁻¹ are obtained. Finally, a new inverse transform procedure is presented that calculates the absorption band (ABS) from an experimental Raman excitation profile (REP). An iterative solution is sought for an integral Hilbert transform relation. An exact ABS is recovered regardless of the starting ABS when sufficient iterations are performed.
Noise tolerant illumination optimization applied to display devices
NASA Astrophysics Data System (ADS)
Cassarly, William J.; Irving, Bruce
2005-02-01
Display devices have historically been designed through an iterative process using numerous hardware prototypes. This process is effective but the number of iterations is limited by the time and cost to make the prototypes. In recent years, virtual prototyping using illumination software modeling tools has replaced many of the hardware prototypes. Typically, the designer specifies the design parameters, builds the software model, predicts the performance using a Monte Carlo simulation, and uses the performance results to repeat this process until an acceptable design is obtained. What is highly desired, and now possible, is to use illumination optimization to automate the design process. Illumination optimization provides the ability to explore a wider range of design options while also providing improved performance. Since Monte Carlo simulations are often used to calculate the system performance but those predictions have statistical uncertainty, the use of noise tolerant optimization algorithms is important. The use of noise tolerant illumination optimization is demonstrated by considering display device designs that extract light using 2D paint patterns as well as 3D textured surfaces. A hybrid optimization approach that combines a mesh feedback optimization with a classical optimizer is demonstrated. Displays with LED sources and cold cathode fluorescent lamps are considered.
Analyzing developmental processes on an individual level using nonstationary time series modeling.
Molenaar, Peter C M; Sinclair, Katerina O; Rovine, Michael J; Ram, Nilam; Corneal, Sherry E
2009-01-01
Individuals change over time, often in complex ways. Generally, studies of change over time have combined individuals into groups for analysis, which is inappropriate in most, if not all, studies of development. The authors explain how to identify appropriate levels of analysis (individual vs. group) and demonstrate how to estimate changes in developmental processes over time using a multivariate nonstationary time series model. They apply this model to describe the changing relationships between a biological son and father and a stepson and stepfather at the individual level. The authors also explain how to use an extended Kalman filter with iteration and smoothing estimator to capture how dynamics change over time. Finally, they suggest further applications of the multivariate nonstationary time series model and detail the next steps in the development of statistical models used to analyze individual-level data.
GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems.
de Molina, Claudia; Serrano, Estefania; Garcia-Blas, Javier; Carretero, Jesus; Desco, Manuel; Abella, Monica
2018-05-15
Standard cone-beam computed tomography (CBCT) involves the acquisition of at least 360 projections rotating through 360 degrees. Nevertheless, there are cases in which only a few projections can be taken in a limited angular span, such as during surgery, where rotation of the source-detector pair is limited to less than 180 degrees. Reconstruction of limited data with the conventional method proposed by Feldkamp, Davis and Kress (FDK) results in severe artifacts. Iterative methods may compensate for the lack of data by including additional prior information, although they imply a high computational burden and memory consumption. We present an accelerated implementation of an iterative method for CBCT following the Split Bregman formulation, which reduces computational time through GPU-accelerated kernels. The implementation enables the reconstruction of large volumes (>1024³ pixels) using partitioning strategies in forward- and back-projection operations. We evaluated the algorithm on small-animal data for different scenarios with different numbers of projections, angular span, and projection size. Reconstruction time varied linearly with the number of projections and quadratically with projection size but remained almost unchanged with angular span. Forward- and back-projection operations represent 60% of the total computational burden. Efficient implementation using parallel processing and large-memory management strategies together with GPU kernels enables the use of advanced reconstruction approaches which are needed in limited-data scenarios. Our GPU implementation showed a significant time reduction (up to 48×) compared to a CPU-only implementation, reducing the total reconstruction time from several hours to a few minutes.
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
Template-Directed Copolymerization, Random Walks along Disordered Tracks, and Fractals
NASA Astrophysics Data System (ADS)
Gaspard, Pierre
2016-12-01
In biology, template-directed copolymerization is the fundamental mechanism responsible for the synthesis of DNA, RNA, and proteins. More than 50 years have passed since the discovery of DNA structure and its role in coding genetic information. Yet, the kinetics and thermodynamics of information processing in DNA replication, transcription, and translation remain poorly understood. Challenging issues are the facts that DNA or RNA sequences constitute disordered media for the motion of polymerases or ribosomes while errors occur in copying the template. Here, it is shown that these issues can be addressed and sequence heterogeneity effects can be quantitatively understood within a framework revealing universal aspects of information processing at the molecular scale. In steady growth regimes, the local velocities of polymerases or ribosomes along the template are distributed as the continuous or fractal invariant set of a so-called iterated function system, which determines the copying error probabilities. The growth may become sublinear in time with a scaling exponent that can also be deduced from the iterated function system.
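As a toy illustration of the iterated-function-system (IFS) idea invoked above, the "chaos game" sketch below samples the invariant set of a small one-dimensional IFS. The maps and probabilities are placeholders chosen only to show the mechanism; they are not the copolymerization model of the paper.

```python
import random

def chaos_game(maps, probs, x0=0.5, n_samples=100_000, burn_in=100):
    """Sample the invariant set of an iterated function system (IFS).

    maps  : list of 1-D contractions, e.g. lambda x: a * x + c
    probs : selection probabilities for the maps
    """
    x, samples = x0, []
    for k in range(n_samples + burn_in):
        f = random.choices(maps, weights=probs, k=1)[0]
        x = f(x)
        if k >= burn_in:
            samples.append(x)
    return samples

# Example (assumed, illustrative): the middle-thirds Cantor set as the
# invariant set of two contractions chosen with equal probability.
cantor_points = chaos_game([lambda x: x / 3.0,
                            lambda x: x / 3.0 + 2.0 / 3.0], [0.5, 0.5])
```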
Phase retrieval in annulus sector domain by non-iterative methods
NASA Astrophysics Data System (ADS)
Wang, Xiao; Mao, Heng; Zhao, Da-zun
2008-03-01
Phase retrieval can be achieved by solving the intensity transport equation (ITE) under the paraxial approximation. For the case of uniform illumination, a Neumann boundary condition is involved, which makes the solving process more complicated. The primary mirror of a large-aperture telescope is usually segmented, and the shape of a segmented piece is often like an annulus sector. Accordingly, it is necessary to analyze phase retrieval in the annulus sector domain. Two non-iterative methods are considered for recovering the phase. The matrix method is based on the decomposition of the solution into a series of orthogonalized polynomials, while the frequency filtering method depends on the inverse computation process of the ITE. Simulations show that both methods can eliminate the effect of the Neumann boundary condition, save considerable computation time and recover the distorted phase well. The wavefront error (WFE) RMS can be less than 0.05 wavelengths, even when some noise is added.
From Intent to Action: An Iterative Engineering Process
ERIC Educational Resources Information Center
Mouton, Patrice; Rodet, Jacques; Vacaresse, Sylvain
2015-01-01
Quite by chance, and over the course of a few haphazard meetings, a Master's degree in "E-learning Design" gradually developed in a Faculty of Economics. Its original and evolving design was the result of an iterative process carried out, not by a single Instructional Designer (ID), but by a full ID team. Over the last 10 years it has…
Ocean Simulation Model. Version 2. First Order Frontal Simulation
1991-05-01
Excerpt (OCR fragments from the report): FORTRAN declarations of the profile arrays DEP, TEMP, SAL, SIG, BVF and their second-profile counterparts (DEP2, TEMP2, SAL2, SIG2, BVF2, ...); setting processing parameters to desired values; the front-position directive FRNT uses the current clock time as the initial seed to call the intrinsic ...; processing can potentially be very time consuming if the parameter ITER is set to a large number; directive RES was designed to allow the user to resume the HELM
Optical Data Processing for Missile Guidance.
1983-09-30
Excerpt (OCR fragments from the report): the light intensity multiplies the signal in the cell, shifted down at a clock rate 1/Tq ... an optical matrix-matrix multiplier system; the later columns of B are input into the input array at successive times ... converted to frequency and time/space by the frequency-multiplexing unit in Fig. 5, with the results combined on two successive iterations k and k+1 ...
Analysis of data systems requirements for global crop production forecasting in the 1985 time frame
NASA Technical Reports Server (NTRS)
Downs, S. W.; Larsen, P. A.; Gerstner, D. A.
1978-01-01
Data systems concepts that would be needed to implement the objective of the global crop production forecasting in an orderly transition from experimental to operational status in the 1985 time frame were examined. Information needs of users were converted into data system requirements, and the influence of these requirements on the formulation of a conceptual data system was analyzed. Any potential problem areas in meeting these data system requirements were identified in an iterative process.
Performance Limiting Flow Processes in High-State Loading High-Mach Number Compressors
2008-03-13
Excerpt (OCR fragments from the report): A strong incentive exists to reduce airfoil count in aircraft engines ... (Advanced Turbine Engine). A basic constraint on blade reduction is seen from the Euler turbine equation, which shows that, although a design can be carried ... (on the vane-to-rotor blade ratio of 8:11). Within the MSU Turbo code, specifying a small number of time steps requires more iteration at each time ...
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1990-01-01
Run time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases, where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run time, wave fronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
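A minimal sketch of the inspector/executor idea follows, assuming the loop-carried dependences have already been extracted into a per-iteration predecessor list. The wavefront construction and the loop body below are simplified placeholders, not the symbolic transformation rules of the paper.

```python
def inspector(n_iterations, predecessors):
    """Group loop iterations into wavefronts of mutually independent iterations.

    predecessors[i] lists the iterations that must complete before iteration i
    (assumed here to reference earlier iterations only).
    """
    depth = [0] * n_iterations
    for i in range(n_iterations):
        depth[i] = 1 + max((depth[j] for j in predecessors[i]), default=-1)
    wavefronts = {}
    for i, d in enumerate(depth):
        wavefronts.setdefault(d, []).append(i)
    return [wavefronts[d] for d in sorted(wavefronts)]

def executor(wavefronts, body):
    """Run the reordered loop: iterations inside one wavefront are independent
    and could be dispatched to parallel workers; wavefronts run in order."""
    for wave in wavefronts:
        for i in wave:   # sequential here; parallel in a real executor
            body(i)
```

The inspector cost is paid once; as the abstract notes, it is amortized when the loop is executed several times with the same dependency structure.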
Iterated learning and the evolution of language.
Kirby, Simon; Griffiths, Tom; Smith, Kenny
2014-10-01
Iterated learning describes the process whereby an individual learns their behaviour by exposure to another individual's behaviour, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins. Copyright © 2014 Elsevier Ltd. All rights reserved.
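A toy transmission-chain sketch of the iterated learning loop is given below. The "learning" rule (copy what was observed through the bottleneck, otherwise guess from the observed signals) and the bottleneck size are deliberately simplistic placeholders, not any of the agent-based models reviewed in the paper.

```python
import random

def iterated_learning_chain(initial_lexicon, generations=10, bottleneck=3, seed=0):
    """Each generation learns from a random subset (the bottleneck) of the
    previous generation's meaning -> signal pairs, then becomes the teacher."""
    rng = random.Random(seed)
    lexicon = dict(initial_lexicon)
    history = [dict(lexicon)]
    for _ in range(generations):
        k = min(bottleneck, len(lexicon))
        observed = dict(rng.sample(sorted(lexicon.items()), k=k))
        signals = list(observed.values())
        # learner keeps observed pairs and fills gaps from observed signals
        lexicon = {m: observed.get(m, rng.choice(signals)) for m in lexicon}
        history.append(dict(lexicon))
    return history
```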
Group iterative methods for the solution of two-dimensional time-fractional diffusion equation
NASA Astrophysics Data System (ADS)
Balasim, Alla Tareq; Ali, Norhashidah Hj. Mohd.
2016-06-01
A variety of problems in science and engineering may be described by fractional partial differential equations (FPDE) in relation to space and/or time fractional derivatives. The difference between time fractional diffusion equations and standard diffusion equations lies primarily in the time derivative. Over the last few years, iterative schemes derived from the rotated finite difference approximation have been proven to work well in solving standard diffusion equations. However, their application to the time fractional diffusion counterpart has yet to be investigated. In this paper, we present a preliminary study on the formulation and analysis of new explicit group iterative methods in solving a two-dimensional time fractional diffusion equation. These methods were derived from the standard and rotated Crank-Nicolson difference approximation formula. Several numerical experiments were conducted to show the efficiency of the developed schemes in terms of CPU time and iteration number. At the request of all authors of the paper an updated version of this article was published on 7 July 2016. The original version supplied to AIP Publishing contained an error in Table 1 and References 15 and 16 were incomplete. These errors have been corrected in the updated and republished article.
NASA Astrophysics Data System (ADS)
Ying, Changsheng; Zhao, Peng; Li, Ye
2018-01-01
The intensified charge-coupled device (ICCD) is widely used in the field of low-light-level (LLL) imaging. The LLL images captured by ICCD suffer from low spatial resolution and contrast, and the target details can hardly be recognized. Super-resolution (SR) reconstruction of LLL images captured by ICCDs is a challenging issue. The dispersion in the double-proximity-focused image intensifier is the main factor that leads to a reduction in image resolution and contrast. We divide the integration time into subintervals that are short enough to get photon images, so the overlapping effect and overstacking effect of dispersion can be eliminated. We propose an SR reconstruction algorithm based on iterative projection photon localization. In the iterative process, the photon image is sliced by projection planes, and photons are screened under the constraints of regularity. The accurate position information of the incident photons in the reconstructed SR image is obtained by the weighted centroids calculation. The experimental results show that the spatial resolution and contrast of our SR image are significantly improved.
Varying-energy CT imaging method based on EM-TV
NASA Astrophysics Data System (ADS)
Chen, Ping; Han, Yan
2016-11-01
For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the limit of the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in small fixed increments. Next, grey-consistency fusion and logarithm demodulation are applied to obtain a complete, lower-noise projection with a high dynamic range (HDR). In addition, to address the noise suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the process of iteration, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide a higher reconstruction quality relative to the fusion reconstruction method.
Survey on the Performance of Source Localization Algorithms.
Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G
2017-11-18
The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
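To make the hyperbolic least squares (HLS) step concrete, here is a minimal sketch of 2-D TDoA localization solved with a Gauss-Newton iteration (a close relative of the Newton-Raphson approach named above, not a reproduction of the paper's SLS/HLS or MLE-HLS formulations). The propagation speed, sensor layout and initialization are assumptions for illustration.

```python
import numpy as np

def hls_gauss_newton(sensors, tdoa, c=343.0, x0=None, n_iter=25):
    """Hyperbolic least squares for TDoA localization via Gauss-Newton (sketch).

    sensors : (M, 2) sensor positions; sensor 0 is the reference
    tdoa    : (M-1,) time differences of arrival relative to sensor 0
    c       : propagation speed (here assumed: sound in air, m/s)
    """
    sensors = np.asarray(sensors, dtype=float)
    x = sensors.mean(axis=0) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        d = np.linalg.norm(sensors - x, axis=1)           # distances to all sensors
        r = (d[1:] - d[0]) - c * np.asarray(tdoa)         # hyperbolic residuals
        J = (x - sensors[1:]) / d[1:, None] - (x - sensors[0]) / d[0]
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x += step
        if np.linalg.norm(step) < 1e-9:
            break
    return x
```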
Twostep-by-twostep PIRK-type PC methods with continuous output formulas
NASA Astrophysics Data System (ADS)
Cong, Nguyen Huu; Xuan, Le Ngoc
2008-11-01
This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give us a faster integration process. Fixed stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient when compared with the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and sequential explicit RK codes DOPRI5 and DOP853 available from the literature.
Solution algorithm of dwell time in slope-based figuring model
NASA Astrophysics Data System (ADS)
Li, Yong; Zhou, Lin
2017-10-01
The surface slope profile is commonly used to evaluate X-ray reflective optics used in synchrotron radiation beamlines. Moreover, the output of the instruments that measure X-ray reflective optics is usually a surface slope profile rather than a surface height profile. To avoid the conversion error introduced when X-ray reflective optics are processed with the surface height-based model, a slope-based figuring model is adopted. However, the pulse iteration method, which can quickly obtain the dwell time solution of the traditional height-based figuring model, cannot be applied to the slope-based figuring model, because the slope removal function has both positive and negative values and a complex asymmetric structure. To overcome this problem, we established an optimization model for the dwell time solution by introducing upper and lower limits on the dwell time and a time-gradient constraint. We then used a constrained least squares algorithm to solve for the dwell time in the slope-based figuring model. To validate the proposed algorithm, simulations and experiments were conducted. A flat mirror with an effective aperture of 80 mm was polished on an ion beam machine. After three polishing iterations, the surface slope profile error of the workpiece converged from 5.65 μrad RMS to 1.12 μrad RMS.
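A minimal sketch of the constrained least-squares dwell-time solve is shown below, using SciPy's bounded linear least squares. The removal matrix R, the measured slope error s and the bounds are placeholders; the paper's additional time-gradient constraint is not included in this simplified version.

```python
import numpy as np
from scipy.optimize import lsq_linear

def solve_dwell_time(R, s, t_min=0.0, t_max=10.0):
    """Solve R @ t ~= s for the dwell time t subject to box bounds (sketch).

    R : (n_points, n_positions) slope removal matrix (removal function sampled
        at each measurement point for each tool dwell position)
    s : (n_points,) measured slope error profile to be corrected
    """
    result = lsq_linear(R, s, bounds=(t_min, t_max))
    return result.x

# Hypothetical usage with a small random problem, just to show the call pattern:
rng = np.random.default_rng(0)
R = rng.random((120, 60))
s = R @ rng.uniform(0.0, 5.0, size=60)
t = solve_dwell_time(R, s)
```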
Scattering effect of submarine hull on propeller non-cavitation noise
NASA Astrophysics Data System (ADS)
Wei, Yingsan; Shen, Yang; Jin, Shuanbao; Hu, Pengfei; Lan, Rensheng; Zhuang, Shuangjiang; Liu, Dezhi
2016-05-01
This paper investigates the non-cavitation noise caused by a propeller running in the wake of a submarine, taking into account the scattering effect of the submarine's hull. Computational fluid dynamics (CFD) and the acoustic analogy method are adopted to predict the fluctuating pressure on the propeller blades and the underwater noise radiation in the time domain, respectively. An effective iteration method, derived in the time domain from the Helmholtz integral equation, is used to solve multi-frequency wave scattering due to obstacles. Moreover, to minimize numerical errors caused by time interpolation, the pressure and its derivative at the sound emission time are obtained by summation of Fourier series. A time-averaging algorithm is used to achieve a convergent result if the solution oscillates in the iteration process. The developed iteration method is verified and applied to predict propeller noise scattered from the submarine's hull. From the analysis results it is concluded that (1) the scattering effect of the hull on the pressure distribution pattern, especially at frequencies higher than the blade passing frequency (BPF), is evident from the contour maps of the sound pressure distribution on the submarine's hull and on typical detecting planes; (2) the scattering effect of the hull on the total pressure is observable in the noise frequency spectrum of field points, where the maximum increment is up to 3 dB at BPF, 12.5 dB at 2BPF and 20.2 dB at 3BPF; and (3) the pressure scattered from the hull is negligible in the near field of the propeller, since the scattering effect around the analyzed propeller location at the submarine's stern differs significantly from that of a surface ship. This work shows the importance of the submarine's scattering effect in evaluating propeller non-cavitation noise.
NASA Astrophysics Data System (ADS)
Titeux, Isabelle; Li, Yuming M.; Debray, Karl; Guo, Ying-Qiao
2004-11-01
This Note deals with an efficient algorithm to carry out the plastic integration and compute the stresses due to large strains for materials satisfying Hill's anisotropic yield criterion. Classical plastic integration algorithms such as the 'Return Mapping Method' are widely used for nonlinear analyses of structures and numerical simulations of forming processes, but they require an iterative scheme and may have convergence problems. A new direct algorithm based on a scalar method is developed which allows us to directly obtain the plastic multiplier without an iteration procedure; thus the computation time is greatly reduced and the numerical problems are avoided. To cite this article: I. Titeux et al., C. R. Mecanique 332 (2004).
Development of a public health reporting data warehouse: lessons learned.
Rizi, Seyed Ali Mussavi; Roudsari, Abdul
2013-01-01
Data warehouse projects are perceived to be risky and prone to failure due to many organizational and technical challenges. However, often iterative and lengthy processes of implementation of data warehouses at an enterprise level provide an opportunity for formative evaluation of these solutions. This paper describes lessons learned from successful development and implementation of the first phase of an enterprise data warehouse to support public health surveillance at British Columbia Centre for Disease Control. Iterative and prototyping approach to development, overcoming technical challenges of extraction and integration of data from large scale clinical and ancillary systems, a novel approach to record linkage, flexible and reusable modeling of clinical data, and securing senior management support at the right time were the main factors that contributed to the success of the data warehousing project.
Kushniruk, Andre W; Borycki, Elizabeth M
2015-01-01
The development of more usable and effective healthcare information systems has become a critical issue. In the software industry methodologies such as agile and iterative development processes have emerged to lead to more effective and usable systems. These approaches highlight focusing on user needs and promoting iterative and flexible development practices. Evaluation and testing of iterative agile development cycles is considered an important part of the agile methodology and iterative processes for system design and re-design. However, the issue of how to effectively integrate usability testing methods into rapid and flexible agile design cycles has remained to be fully explored. In this paper we describe our application of an approach known as low-cost rapid usability testing as it has been applied within agile system development in healthcare. The advantages of the integrative approach are described, along with current methodological considerations.
Automated IMRT planning with regional optimization using planning scripts
Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff Z.
2013-01-01
Intensity‐modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed for simple cases (e.g., localized prostate, whole breast) to generate IMRT plans efficiently. However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time‐consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting rings, adjust the beam geometry, select inverse planning objectives and associated weights, and add further IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually with a repeated trial-and-error approach during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script is designed, implemented, and validated. In particular, regional optimization has been implemented iteratively to reduce hot or cold spots during the optimization process: hot and cold spots are defined and automatically segmented, new objectives and their relative weights are introduced into the inverse planning, and the procedure is repeated until termination criteria are met. The method has been applied to three clinical sites: prostate with pelvic nodes, head and neck, and anal canal cancers, and has been shown to reduce IMRT planning time significantly for clinical applications with improved plan quality. The IMRT planning scripts have been used for more than 500 clinical cases. PACS numbers: 87.55.D, 87.55.de PMID:23318393
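The iterative regional-optimization loop described above can be summarized in the following sketch. All callables (optimize, find_hot_cold_spots, add_regional_objective) are hypothetical stand-ins for treatment-planning-system scripting functions; this is not the authors' actual planning script.

```python
def regional_optimization(plan, optimize, find_hot_cold_spots,
                          add_regional_objective, max_rounds=5):
    """Sketch of an iterative regional-optimization loop for IMRT planning.

    Each round re-optimizes the plan, segments the remaining hot/cold spots,
    and adds new regional objectives (with weights) before the next round.
    """
    for _ in range(max_rounds):
        plan = optimize(plan)
        spots = find_hot_cold_spots(plan)      # automatic segmentation of hot/cold regions
        if not spots:                          # termination: dose distribution acceptable
            break
        for spot in spots:
            add_regional_objective(plan, spot) # new objective + relative weight for this region
    return plan
```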
Pediatric faculty and residents’ perspectives on In-Training Evaluation Reports (ITERs)
Patel, Rikin; Drover, Anne; Chafe, Roger
2015-01-01
Background In-training evaluation reports (ITERs) are used by over 90% of postgraduate medical training programs in Canada for resident assessment. Our study examined the perspectives of faculty and residents in one pediatric program as a means to improve the ITER as an evaluation tool. Method Two separate focus groups were conducted, one with eight pediatric residents and one with nine clinical faculty within the pediatrics program of Memorial University’s Faculty of Medicine, to discuss their perceptions of, and suggestions for improving, the use of ITERs. Results Residents and faculty shared many similar suggestions for improving the ITER as an evaluation tool. Both the faculty and residents emphasized the importance of written feedback, contextualizing the evaluation and timely follow-up. The biggest challenge appears to be the discrepancy between the quality of feedback sought by the residents and the faculty members’ ability to provide it in a time-effective manner. Other concerns related to the need for better engagement in setting rotation objectives and more direct observation by the faculty member completing the ITER. Conclusions The ITER is a useful tool in resident evaluations, but addressing a number of issues relating to its actual use could improve the quality of feedback which residents receive. PMID:27004076
Implementing partnership-driven clinical federated electronic health record data sharing networks.
Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein
2016-09-01
Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and consistent focus on end-users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustaining meaningful partnerships, and delivering clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and support knowledge sharing between the two network development teams, which cross-collaborated to support and manage common stages. We found that using a spiral model of software development and multiple cycles of iteration was effective in achieving early network design goals. Both networks required time and resource intensive efforts to establish a trusted environment to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, Bruno; Carvalho, Paulo F.; Rodrigues, A.P.
The ATCA standard specifies a mandatory Shelf Manager (ShM) unit which is a key element for the system operation. It includes the Intelligent Platform Management Controller (IPMC) which monitors the system health, retrieves inventory information and controls the Field Replaceable Units (FRUs). These elements enable intelligent health monitoring, providing high availability and safe operation and ensuring correct system operation. For critical systems like those of the ITER tokamak, these features are mandatory to support long-pulse operation. The Nominal Device Support (NDS) was designed and developed for the ITER CODAC Core System (CCS), which will be responsible for plant Instrumentation and Control (I and C), supervision and monitoring on ITER. It generalizes the Experimental Physics and Industrial Control System (EPICS) device support interface for Data Acquisition (DAQ) and timing devices. However, support for health management features and the ATCA ShM is not yet provided. This paper presents the implementation and testing of an NDS for the ATCA ShM, using the ITER Fast Plant System Controller (FPSC) prototype environment. This prototype is fully compatible with the ITER CCS and uses the EPICS Channel Access (CA) protocol as the interface with the Plant Operation Network (PON). The implemented solution running in an EPICS Input / Output Controller (IOC) provides Process Variables (PV) to the PON network with the system information. These PVs can be used for control and monitoring by all CA clients, such as EPICS user interface clients and alarm systems. The results are presented, demonstrating the full integration and the usability of this solution. (authors)
Development of a GNSS water vapour tomography system using algebraic reconstruction techniques
NASA Astrophysics Data System (ADS)
Bender, Michael; Dick, Galina; Ge, Maorong; Deng, Zhiguo; Wickert, Jens; Kahle, Hans-Gert; Raabe, Armin; Tetzlaff, Gerd
2011-05-01
A GNSS water vapour tomography system developed to reconstruct spatially resolved humidity fields in the troposphere is described. The tomography system was designed to process the slant path delays of about 270 German GNSS stations in near real-time with a temporal resolution of 30 min, a horizontal resolution of 40 km and a vertical resolution of 500 m or better. After a short introduction to the GPS slant delay processing the framework of the GNSS tomography is described in detail. Different implementations of the iterative algebraic reconstruction techniques (ART) used to invert the linear inverse problem are discussed. It was found that the multiplicative techniques (MART) provide the best results with least processing time, i.e., a tomographic reconstruction of about 26,000 slant delays on a 8280 cell grid can be obtained in less than 10 min. Different iterative reconstruction techniques are compared with respect to their convergence behaviour and some numerical parameters. The inversion can be considerably stabilized by using additional non-GNSS observations and implementing various constraints. Different strategies for initialising the tomography and utilizing extra information are discussed. At last an example of a reconstructed field of the wet refractivity is presented and compared to the corresponding distribution of the integrated water vapour, an analysis of a numerical weather model (COSMO-DE) and some radiosonde profiles.
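For contrast with the multiplicative (MART) variant favoured above, a plain additive ART (Kaczmarz) sweep is sketched below. The system matrix A (slant-path lengths through the voxel grid), the observation vector b (slant delays) and the relaxation factor are placeholders, and the initialization strategies and extra constraints discussed in the text are omitted.

```python
import numpy as np

def art_kaczmarz(A, b, x0, n_sweeps=20, relaxation=0.5):
    """One-row-at-a-time additive ART (Kaczmarz) iteration (illustrative sketch).

    A  : (n_obs, n_voxels) geometry matrix of slant-path lengths per voxel
    b  : (n_obs,) observed slant delays
    x0 : (n_voxels,) initial refractivity field (e.g. from a background model)
    """
    x = np.asarray(x0, dtype=float).copy()
    row_norm2 = np.einsum('ij,ij->i', A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0.0:
                x += relaxation * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x
```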
Algorithm for ion beam figuring of low-gradient mirrors.
Jiao, Changjun; Li, Shengyi; Xie, Xuhui
2009-07-20
Ion beam figuring technology for low-gradient mirrors is discussed. Ion beam figuring is a noncontact machining technique in which a beam of high-energy ions is directed toward a target workpiece to remove material in a predetermined and controlled fashion. Owing to this noncontact mode of material removal, problems associated with tool wear and edge effects, which are common in conventional contact polishing processes, are avoided. Based on the Bayesian principle, an iterative dwell time algorithm for planar mirrors is deduced from the computer-controlled optical surfacing (CCOS) principle. With the properties of the removal function, the shaping process of low-gradient mirrors can be approximated by the linear model for planar mirrors. With these discussions, the error surface figuring technology for low-gradient mirrors with a linear path is set up. With the near-Gaussian property of the removal function, the figuring process with a spiral path can be described by the conventional linear CCOS principle, and a Bayesian-based iterative algorithm can be used to deconvolute the dwell time. Moreover, the selection criterion of the spiral parameter is given. Ion beam figuring technology with a spiral scan path based on these methods can be used to figure mirrors with non-axis-symmetrical errors. Experiments on SiC chemical vapor deposition planar and Zerodur paraboloid samples are made, and the final surface errors are all below 1/100 lambda.
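One common way to realize a Bayesian-principle iterative deconvolution of dwell time is a Richardson-Lucy-style multiplicative update, sketched below for a planar workpiece. This illustrates that class of algorithm under simplifying assumptions (non-negative error map, shift-invariant removal function normalized to unit volume); it is not necessarily the exact iteration derived in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_dwell_time(error_map, removal_function, n_iter=50, eps=1e-12):
    """Richardson-Lucy-style iteration for the dwell time map (sketch).

    error_map        : 2-D non-negative surface error to be removed
    removal_function : 2-D removal (beam) function, normalized to unit volume
    """
    t = np.full_like(error_map, error_map.mean(), dtype=float)
    k_flip = removal_function[::-1, ::-1]
    for _ in range(n_iter):
        predicted = fftconvolve(t, removal_function, mode='same')
        ratio = error_map / np.maximum(predicted, eps)
        t *= fftconvolve(ratio, k_flip, mode='same')
    return t
```

The multiplicative form keeps the dwell time non-negative by construction, which is one reason this family of updates is attractive for dwell-time deconvolution.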
Application of a neural network to simulate analysis in an optimization process
NASA Technical Reports Server (NTRS)
Rogers, James L.; Lamarsh, William J., II
1992-01-01
A new experimental software package called NETS/PROSSS aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process and make it possible to converge to an optimum solution with significantly fewer iterations.
NASA Technical Reports Server (NTRS)
Ables, Brett
2014-01-01
Multi-stage launch vehicles with solid rocket motors (SRMs) face design optimization challenges, especially when the mission scope changes frequently. Significant performance benefits can be realized if the solid rocket motors are optimized to the changing requirements. While SRMs represent a fixed performance at launch, rapid design iterations enable flexibility at design time, yielding significant performance gains. The streamlining and integration of SRM design and analysis can be achieved with improved analysis tools. While powerful and versatile, the Solid Performance Program (SPP) is not conducive to rapid design iteration. Performing a design iteration with SPP and a trajectory solver is a labor intensive process. To enable a better workflow, SPP, the Program to Optimize Simulated Trajectories (POST), and the interfaces between them have been improved and automated, and a graphical user interface (GUI) has been developed. The GUI enables real-time visual feedback of grain and nozzle design inputs, enforces parameter dependencies, removes redundancies, and simplifies manipulation of SPP and POST's numerous options. Automating the analysis also simplifies batch analyses and trade studies. Finally, the GUI provides post-processing, visualization, and comparison of results. Wrapping legacy high-fidelity analysis codes with modern software provides the improved interface necessary to enable rapid coupled SRM ballistics and vehicle trajectory analysis. Low cost trade studies demonstrate the sensitivities of flight performance metrics to propulsion characteristics. Incorporating high fidelity analysis from SPP into vehicle design reduces performance margins and improves reliability. By flying an SRM designed with the same assumptions as the rest of the vehicle, accurate comparisons can be made between competing architectures. In summary, this flexible workflow is a critical component to designing a versatile launch vehicle model that can accommodate a volatile mission scope.
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.; Mccann, Karen
1992-01-01
A proven 3-D multiple-block elliptic grid generator, designed to run in 'batch mode' on a supercomputer, is improved by the creation of a modern graphical user interface (GUI) running on a workstation. The two parts are connected in real time by a network. The resultant system offers a significant speedup in the process of preparing and formatting input data and the ability to watch the grid solution converge by replotting the grid at each iteration step. The result is a reduction in user time and CPU time required to generate the grid and an enhanced understanding of the elliptic solution process. This software system, called GRAPEVINE, is described, and certain observations are made concerning the creation of such software.
NASA Astrophysics Data System (ADS)
Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.
2012-04-01
The development of new Tsunami Early Warning Systems (TEWS) requires the modelling of spatio-temporal spreading of tsunami waves both recorded from past events and hypothetical future cases. The model results are maintained in digital repositories for use in TEWS command and control units for situation assessment once a real tsunami occurs. Thus the simulation results must be absolutely trustworthy, in a sense that the quality of these datasets is assured. This is a prerequisite as solid decision making during a crisis event and the dissemination of dependable warning messages to communities under risk will be based on them. This requires data format validity, but even more the integrity and information value of the content, being a derived value-added product derived from raw tsunami model output. Quality checking of simulation result products can be done in multiple ways, yet the visual verification of both temporal and spatial spreading characteristics for each simulation remains important. The eye of the human observer still remains an unmatched tool for the detection of irregularities. This requires the availability of convenient, human-accessible mappings of each simulation. The improvement of tsunami models necessitates the changes in many variables, including simulation end-parameters. Whenever new improved iterations of the general models or underlying spatial data are evaluated, hundreds to thousands of tsunami model results must be generated for each model iteration, each one having distinct initial parameter settings. The use of a Compute Cluster Environment (CCE) of sufficient size allows the automated generation of all tsunami-results within model iterations in little time. This is a significant improvement to linear processing on dedicated desktop machines or servers. This allows for accelerated/improved visual quality checking iterations, which in turn can provide a positive feedback into the overall model improvement iteratively. An approach to set-up and utilize the CCE has been implemented by the project Collaborative, Complex, and Critical Decision Processes in Evolving Crises (TRIDEC) funded under the European Union's FP7. TRIDEC focuses on real-time intelligent information management in Earth management. The addressed challenges include the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulations and data fusion tools. Additionally, TRIDEC adopts enhancements of Service Oriented Architecture (SOA) principles in terms of Event Driven Architecture (EDA) design. As a next step the implemented CCE's services to generate derived and customized simulation products are foreseen to be provided via an EDA service for on-demand processing for specific threat-parameters and to accommodate for model improvements.
NASA Astrophysics Data System (ADS)
Sanz, D.; Ruiz, M.; Castro, R.; Vega, J.; Afif, M.; Monroe, M.; Simrock, S.; Debelle, T.; Marawar, R.; Glass, B.
2016-04-01
To aid in assessing the functional performance of ITER, Fission Chambers (FC) based on the neutron diagnostic use case deliver timestamped measurements of neutron source strength and fusion power. To demonstrate the Plant System Instrumentation & Control (I&C) required for such a system, ITER Organization (IO) has developed a neutron diagnostics use case that fully complies with guidelines presented in the Plant Control Design Handbook (PCDH). The implementation presented in this paper has been developed on the PXI Express (PXIe) platform using products from the ITER catalog of standard I&C hardware for fast controllers. Using FlexRIO technology, detector signals are acquired at 125 MS/s, while filtering, decimation, and three methods of neutron counting are performed in real-time via the onboard Field Programmable Gate Array (FPGA). Measurement results are reported every 1 ms through Experimental Physics and Industrial Control System (EPICS) Channel Access (CA), with real-time timestamps derived from the ITER Timing Communication Network (TCN) based on IEEE 1588-2008. Furthermore, in accordance with ITER specifications for CODAC Core System (CCS) application development, the software responsible for the management, configuration, and monitoring of system devices has been developed in compliance with a new EPICS module called Nominal Device Support (NDS) and RIO/FlexRIO design methodology.
An improved 2D MoF method by using high order derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2017-11-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first order derivative has been deduced and applied in previous research. In this paper, the high order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and the number of discontinuous points is twice the number of polygon edges. A rotation algorithm is proposed to successively calculate these discontinuous points, so that the target interval where the optimal solution is located can be determined. Since the high order derivatives of the objective function are continuous in the target interval, iteration schemes based on high order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of the objective function and its derivatives can be directly updated without explicitly solving the volume conservation equation. The direct update further improves efficiency, especially as the number of polygon edges increases. Halley's method, which is based on the first three derivatives, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half that of the previous method on quadrilateral cells and about one sixth on decagon cells.
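As context for the iteration scheme mentioned above, a generic Halley's method sketch is given below. In the MoF setting it would be applied to the first derivative of the objective function (so that the objective's first, second and third derivatives play the roles of f, f' and f'' here); the functions and starting point are placeholders.

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=30):
    """Halley's method for f(x) = 0 (cubic convergence near a simple root).

    f, df, d2f : the function and its first and second derivatives
    """
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = 2.0 * dfx * dfx - fx * d2fx
        if denom == 0.0:
            break
        step = 2.0 * fx * dfx / denom
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical usage: the real root of x**3 - 2.
root = halley(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, lambda x: 6.0 * x, x0=1.0)
```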
Can SNOMED CT be squeezed without losing its shape?
López-García, Pablo; Schulz, Stefan
2016-09-21
In biomedical applications where the size and complexity of SNOMED CT become problematic, using a smaller subset that can act as a reasonable substitute is usually preferred. In a special class of use cases (like ontology-based quality assurance, or when performing scaling experiments for real-time performance) it is essential that modules show a similar shape to SNOMED CT in terms of concept distribution per sub-hierarchy. Exactly how to extract such balanced modules remains unclear, as most previous work on ontology modularization has focused on other problems. In this study, we investigate to what extent extracting balanced modules that preserve the original shape of SNOMED CT is possible, by presenting and evaluating an iterative algorithm. We used a graph-traversal modularization approach based on an input signature. To conform to our definition of a balanced module, we implemented an iterative algorithm that carefully bootstrapped and dynamically adjusted the signature at each step. We measured the error for each sub-hierarchy and defined convergence as a residual sum of squares <1. Using 2000 concepts as an initial signature, our algorithm converged after seven iterations and extracted a module 4.7% the size of SNOMED CT. Seven sub-hierarchies were either over- or under-represented within a range of 1-8%. Our study shows that balanced modules from large terminologies can be extracted using ontology graph-traversal modularization techniques under certain conditions: that the process is repeated a number of times, the input signature is dynamically adjusted in each iteration, and a moderate under/over-representation of some hierarchies is tolerated. In the case of SNOMED CT, our results conclusively show that it can be squeezed to less than 5% of its size without any sub-hierarchy losing its shape by more than 8%, which is likely sufficient in most use cases.
Nguyen, Van-Giang; Lee, Soo-Jin
2016-07-01
Iterative reconstruction from Compton scattered data is known to be computationally more challenging than reconstruction from conventional line-projection based emission data, in that the gamma rays that undergo Compton scattering are modeled as conic projections rather than line projections. In conventional tomographic reconstruction, to parallelize the projection and backprojection operations using the graphics processing unit (GPU), approximate methods that use an unmatched pair of a ray-tracing forward projector and a voxel-driven backprojector have been widely used. In this work, we propose a new GPU-accelerated method for Compton camera reconstruction that achieves higher accuracy by using an exactly matched pair of projector and backprojector. To calculate the conic forward projection, we first sample the cone surface into conic rays and accumulate the intersecting chord lengths of the conic rays passing through voxels using a fast ray-tracing method (RTM). For conic backprojection, to obtain the true adjoint of the conic forward projection while retaining the computational efficiency of the GPU, we use a voxel-driven RTM that is essentially the same as the standard RTM used for the conic forward projector. Our simulation results show that, while the new method is about 3 times slower than the approximate method, it is still about 16 times faster than the CPU-based method without any loss of accuracy. The net conclusion is that our proposed method is guaranteed to retain the reconstruction accuracy regardless of the number of iterations by providing a perfectly matched projector-backprojector pair, which makes iterative reconstruction methods for Compton imaging faster and more accurate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Conservative tightly-coupled simulations of stochastic multiscale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taverniers, Søren; Pigarov, Alexander Y.; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2016-05-15
Multiphysics problems often involve components whose macroscopic dynamics is driven by microscopic random fluctuations. The fidelity of simulations of such systems depends on their ability to propagate these random fluctuations throughout a computational domain, including subdomains represented by deterministic solvers. When the constituent processes take place in nonoverlapping subdomains, system behavior can be modeled via a domain-decomposition approach that couples separate components at the interfaces between these subdomains. Its coupling algorithm has to maintain a stable and efficient numerical time integration even at high noise strength. We propose a conservative domain-decomposition algorithm in which tight coupling is achieved by employing either Picard's or Newton's iterative method. Coupled diffusion equations, one of which has a Gaussian white-noise source term, provide a computational testbed for analysis of these two coupling strategies. Fully-converged (“implicit”) coupling with Newton's method typically outperforms its Picard counterpart, especially at high noise levels. This is because the number of Newton iterations scales linearly with the amplitude of the Gaussian noise, while the number of Picard iterations can scale superlinearly. At large time intervals between two subsequent inter-solver communications, the solution error for single-iteration (“explicit”) Picard's coupling can be several orders of magnitude higher than that for implicit coupling. Increasing the explicit coupling's communication frequency reduces this difference, but the resulting increase in computational cost can make it less efficient than implicit coupling at similar levels of solution error, depending on the communication frequency of the latter and the noise strength. This trend carries over into higher dimensions, although at high noise strength explicit coupling may be the only computationally viable option.
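As an illustration of the two coupling strategies compared above, here is a minimal sketch of Picard (fixed-point) versus Newton iteration on a toy coupled system; the interface map G, its residual F, and the Jacobian J are illustrative stand-ins, not the paper's diffusion solvers.

```python
import numpy as np

def picard(G, x0, tol=1e-10, max_iter=200):
    """Fixed-point (Picard) iteration x <- G(x)."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = G(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Newton iteration on the residual F(x) = 0 with Jacobian J(x)."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x, k
    return x, max_iter

# Toy coupled interface condition x = G(x), with residual F(x) = x - G(x).
G = lambda x: np.array([0.5 * np.cos(x[1]), 0.5 * np.sin(x[0]) + 0.1])
F = lambda x: x - G(x)
J = lambda x: np.array([[1.0, 0.5 * np.sin(x[1])],
                        [-0.5 * np.cos(x[0]), 1.0]])
print(picard(G, [0.0, 0.0])[1], "Picard iterations")
print(newton(F, J, [0.0, 0.0])[1], "Newton iterations")
```

On this contractive toy problem both converge; the paper's point is that the iteration counts diverge as the noise amplitude grows, favoring Newton.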
Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li
2018-07-01
Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved approximately 4.2-fold reduction in reconstruction time compared with GRASP (∼333 min versus ∼78 min) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. The GRASP reconstruction can be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Formation and termination of runaway beams in ITER disruptions
NASA Astrophysics Data System (ADS)
Martín-Solís, J. R.; Loarte, A.; Lehnen, M.
2017-06-01
A self-consistent analysis of the relevant physics regarding the formation and termination of runaway beams during mitigated disruptions by Ar and Ne injection is presented for selected ITER scenarios, with the aim of improving our understanding of the physics underlying the runaway heat loads onto the plasma facing components (PFCs) and identifying open issues for developing and assessing disruption mitigation schemes for ITER. This is carried out by means of simplified models that still retain sufficient detail of the key physical processes, including: (a) the expected dominant runaway generation mechanisms (avalanche and primary runaway seeds: Dreicer and hot tail runaway generation, tritium decay and Compton scattering of γ rays emitted by the activated wall), (b) effects associated with the plasma and runaway current density profile shape, and (c) corrections to the runaway dynamics to account for the collisions of the runaways with the partially stripped impurity ions, which are found to have strong effects leading to low runaway current generation and low energy conversion during current termination for mitigated disruptions by noble gas injection (particularly for Ne injection) for the shortest current quench times compatible with acceptable forces on the ITER vessel and in-vessel components (τ_res ∼ 22 ms). For the case of long current quench times (τ_res ∼ 66 ms), runaway beams up to ∼10 MA can be generated during the disruption current quench and, if the termination of the runaway current is slow enough, the generation of runaways by the avalanche mechanism can play an important role, substantially increasing the energy deposited by the runaways onto the PFCs, up to a few hundred MJ. Mixed impurity (Ar or Ne) plus deuterium injection proves to be effective in controlling the formation of the runaway current during the current quench, even for the longest current quench times, as well as in decreasing the energy deposited by the runaway electrons during current termination.
Active control for stabilization of neoclassical tearing modes
NASA Astrophysics Data System (ADS)
Humphreys, D. A.; Ferron, J. R.; La Haye, R. J.; Luce, T. C.; Petty, C. C.; Prater, R.; Welander, A. S.
2006-05-01
This work describes active control algorithms used by DIII-D [J. L. Luxon, Nucl. Fusion 42, 614 (2002)] to stabilize and maintain suppression of 3/2 or 2/1 neoclassical tearing modes (NTMs) by application of electron cyclotron current drive (ECCD) at the rational q surface. The DIII-D NTM control system can determine the correct q-surface/ECCD alignment and stabilize existing modes within 100-500 ms of activation, or prevent mode growth with preemptive application of ECCD, in both cases enabling stable operation at normalized beta values above 3.5. Because NTMs can limit performance or cause plasma-terminating disruptions in tokamaks, their stabilization is essential to the high performance operation of ITER [R. Aymar et al., ITER Joint Central Team, ITER Home Teams, Nucl. Fusion 41, 1301 (2001)]. The DIII-D NTM control system has demonstrated many elements of an eventual ITER solution, including general algorithms for robust detection of q-surface/ECCD alignment and for real-time maintenance of alignment following the disappearance of the mode. This latter capability, unique to DIII-D, is based on real-time reconstruction of q-surface geometry by a Grad-Shafranov solver using external magnetics and internal motional Stark effect measurements. Alignment is achieved by varying either the plasma major radius (and the rational q surface) or the toroidal field (and the deposition location). The requirement to achieve and maintain q-surface/ECCD alignment with accuracy on the order of 1 cm is routinely met by the DIII-D Plasma Control System and these algorithms. We discuss the integrated plasma control design process used for developing these and other general control algorithms, which includes physics-based modeling and testing of the algorithm implementation against simulations of actuator and plasma responses. This systematic design/test method and modeling environment enabled successful mode suppression by the NTM control system upon first-time use in an experimental discharge.
Iterative development of visual control systems in a research vivarium.
Bassuk, James A; Washington, Ida M
2014-01-01
The goal of this study was to test the hypothesis that reintroduction of Continuous Performance Improvement (CPI) methodology, a lean approach to management at Seattle Children's (Hospital, Research Institute, Foundation), would facilitate engagement of vivarium employees in the development and sustainment of a daily management system and a work-in-process board. Such engagement was implemented through reintroduction of aspects of the Toyota Production System. Iterations of a Work-In-Process Board were generated using Shewhart's Plan-Do-Check-Act process improvement cycle. Specific attention was given to the importance of detecting and preventing errors through assessment of the following 5 levels of quality: Level 1, customer inspects; Level 2, company inspects; Level 3, work unit inspects; Level 4, self-inspection; Level 5, mistake proofing. A functioning iteration of a Mouse Cage Work-In-Process Board was eventually established using electronic data entry, an improvement that increased the quality level from 1 to 3 while reducing wasteful steps, handoffs and queues. A visual workplace was realized via a daily management system that included a Work-In-Process Board, a problem solving board and two Heijunka boards. One Heijunka board tracked cage changing as a function of a biological kanban, which was validated via ammonia levels. A 17% reduction in cage changing frequency provided vivarium staff with additional time to support Institute researchers in their mutual goal of advancing cures for pediatric diseases. Cage washing metrics demonstrated an improvement in the flow continuum in which a traditional batch and queue push system was replaced with a supermarket-type pull system. Staff engagement during the improvement process was challenging and is discussed. The collective data indicate that the hypothesis was found to be true. The reintroduction of CPI into daily work in the vivarium is consistent with the 4P Model of the Toyota Way and selected Principles that guide implementation of the Toyota Production System.
MO-B-BRB-02: Maintain the Quality of Treatment Planning for Time-Constraint Cases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J.
The radiotherapy treatment planning process has evolved over the years with innovations in treatment planning, treatment delivery and imaging systems. Treatment modality and simulation technologies are also rapidly improving and affecting the planning process. For example, image-guided radiation therapy has been widely adopted for patient setup, leading to margin reduction and isocenter repositioning after simulation. Stereotactic body radiation therapy (SBRT) and radiosurgery (SRS) have gradually become the standard of care for many treatment sites, which demands a higher throughput for treatment plans even if the number of treatments per day remains the same. Finally, simulation, planning and treatment are traditionally sequential events. However, with emerging adaptive radiotherapy, they are becoming more tightly intertwined, leading to iterative processes. Enhanced planning efficiency is therefore becoming more critical and poses a serious challenge to the treatment planning process; Lean Six Sigma approaches are being used increasingly to balance the competing needs for speed and quality. In this symposium we will discuss the treatment planning process and illustrate effective techniques for managing workflow. Topics will include: Planning techniques: (a) beam placement, (b) dose optimization, (c) plan evaluation, (d) export to RVS. Planning workflow: (a) import of images, (b) image fusion, (c) contouring, (d) plan approval, (e) plan check, (f) chart check, (g) sequential and iterative processes. Influence of upstream and downstream operations: (a) simulation, (b) immobilization, (c) motion management, (d) QA, (e) IGRT, (f) treatment delivery, (g) SBRT/SRS, (h) adaptive planning. Reduction of delay between planning steps with Lean systems due to (a) communication, (b) limited resources, (c) contouring, (d) plan approval, (e) treatment. Optimizing planning processes: (a) contour validation, (b) consistent planning protocols, (c) protocol/template sharing, (d) semi-automatic plan evaluation, (e) quality checklists for error prevention, (f) iterative processes, (g) balance of speed and quality. Learning Objectives: Gain familiarity with the workflow of the modern treatment planning process. Understand the scope and challenges of managing modern treatment planning processes. Gain familiarity with Lean Six Sigma approaches and their implementation in the treatment planning workflow.
Adaptive Dynamic Programming for Discrete-Time Zero-Sum Games.
Wei, Qinglai; Liu, Derong; Lin, Qiao; Song, Ruizhuo
2018-04-01
In this paper, a novel adaptive dynamic programming (ADP) algorithm, called "iterative zero-sum ADP algorithm," is developed to solve infinite-horizon discrete-time two-player zero-sum games of nonlinear systems. The present iterative zero-sum ADP algorithm permits arbitrary positive semidefinite functions to initialize the upper and lower iterations. A novel convergence analysis is developed to guarantee the upper and lower iterative value functions to converge to the upper and lower optimums, respectively. When the saddle-point equilibrium exists, it is emphasized that both the upper and lower iterative value functions are proved to converge to the optimal solution of the zero-sum game, where the existence criteria of the saddle-point equilibrium are not required. If the saddle-point equilibrium does not exist, the upper and lower optimal performance index functions are obtained, respectively, where the upper and lower performance index functions are proved to be not equivalent. Finally, simulation results and comparisons are shown to illustrate the performance of the present method.
Perceived Maternal Behavioral Control, Infant Behavior, and Milk Supply: A Qualitative Study.
Peacock-Chambers, Elizabeth; Dicks, Kaitlin; Sarathy, Leela; Brown, Allison A; Boynton-Jarrett, Renée
Disparities persist in breastfeeding exclusivity and duration despite increases in breastfeeding initiation. The objective of this study was to examine factors that influence maternal decision making surrounding infant feeding practices over time in a diverse inner-city population. We conducted a prospective qualitative study with 20 mothers recruited from 2 urban primary care clinics. Participants completed open-ended interviews and demographic questionnaires in English or Spanish administered at approximately 2 weeks and 6 months postpartum. Transcripts were analyzed using a combined technique of inductive (data-driven) and deductive (theory-driven, based on the Theory of Planned Behavior) thematic analysis using 3 independent coders and iterative discussion to reach consensus. All women initiated breastfeeding, and 65% reported perceived insufficient milk (PIM). An association between PIM and behavioral control emerged as the overarching theme impacting early breastfeeding cessation and evolved over time. Early postpartum, PIM evoked maternal distress-strong emotional responses to infant crying and need to control infant behaviors. Later, mothers accepted a perceived lack of control over milk supply with minimal distress or as a natural process. Decisions to stop breastfeeding occurred through an iterative process, informed by trials of various strategies and observations of subsequent changes in infant behavior, strongly influenced by competing psychosocial demands. Infant feeding decisions evolve over time and are influenced by perceptions of control over infant behavior and milk supply. Tailored anticipatory guidance is needed to provide time-sensitive strategies to cope with challenging infant behaviors and promote maternal agency over breastfeeding in low-income populations.
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the use of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
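A minimal sketch of the reuse idea follows: a Krylov solve for a perturbed system warm-started from the previous solution vector with a previously built preconditioner. It uses a symmetric tridiagonal stand-in and conjugate gradients; the BEA systems in the abstract are dense and generally nonsymmetric, so GMRES would be the closer analogue in practice.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 2000
A0 = diags([-1.0, 2.2, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # baseline system
b = np.ones(n)

# Simple Jacobi preconditioner built once from the baseline matrix and reused later.
inv_diag = 1.0 / A0.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)

def solve(A, rhs, x_start=None):
    """Preconditioned CG solve that counts iterations via a callback."""
    iters = []
    x, info = cg(A, rhs, x0=x_start, M=M, callback=lambda xk: iters.append(1))
    return x, len(iters)

x_base, it_base = solve(A0, b)                       # original analysis

# Perturbed system standing in for a modest shape change.
A1 = A0 + diags(0.05 * np.random.default_rng(0).random(n), 0)

_, it_cold = solve(A1, b)                            # restart from scratch
_, it_warm = solve(A1, b, x_start=x_base)            # reuse old solution + preconditioner
print(it_base, it_cold, it_warm)                     # warm start typically needs fewer iterations
```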
Region of interest processing for iterative reconstruction in x-ray computed tomography
NASA Astrophysics Data System (ADS)
Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.
2015-03-01
Recent advancements in graphics card technology have raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging. Yet for some clinical procedures, like cardiac CT, only a ROI is needed for diagnostics. A high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computational effort and results in reconstructions slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm. In particular, improvements to the equalization between regions inside and outside of a ROI are proposed. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.
NASA Astrophysics Data System (ADS)
Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.
2011-07-01
In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
Grout, Ray; Kolla, Hemanth; Minion, Michael; ...
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
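A minimal sketch of the resilience stopping rule described above is shown below; sweep and residual are placeholders for a low-order SDC correction sweep over the collocation nodes and the collocation residual norm, and the tolerances are illustrative, not values from the paper.

```python
def resilient_sdc_step(y0, sweep, residual, max_sweeps=20,
                       rel_tol=1e-8, stall_tol=1e-2):
    """Run SDC correction sweeps over one time step until the residual is small
    relative to the first sweep's residual and changes slowly between sweeps.
    Allowing extra sweeps beyond the nominal count lets the iteration absorb
    soft (transient) data faults."""
    y = sweep(y0)
    r_first = residual(y)
    r_prev = r_first
    for k in range(2, max_sweeps + 1):
        y = sweep(y)
        r = residual(y)
        small = r <= rel_tol * r_first
        stalled = abs(r - r_prev) <= stall_tol * r_prev
        if small and stalled:
            return y, k
        r_prev = r
    return y, max_sweeps
```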
Noise removal in extended depth of field microscope images through nonlinear signal processing.
Zahreddine, Ramzi N; Cormack, Robert H; Cogswell, Carol J
2013-04-01
Extended depth of field (EDF) microscopy, achieved through computational optics, allows for real-time 3D imaging of live cell dynamics. EDF is achieved through a combination of point spread function engineering and digital image processing. A linear Wiener filter has been conventionally used to deconvolve the image, but it suffers from high frequency noise amplification and processing artifacts. A nonlinear processing scheme is proposed which extends the depth of field while minimizing background noise. The nonlinear filter is generated via a training algorithm and an iterative optimizer. Biological microscope images processed with the nonlinear filter show a significant improvement in image quality and signal-to-noise ratio over the conventional linear filter.
NASA Astrophysics Data System (ADS)
Jung, Richard; Ehlers, Manfred
2016-10-01
The spectral features of intertidal sediments are all influenced by the same biophysical properties, such as water, salinity, grain size or vegetation, and they are therefore hard to separate using only multispectral sensors, as shown in a previous study by Jung et al. (2015). A more detailed analysis of their characteristic spectral features has to be carried out to understand the differences and similarities. Spectrometry data (i.e., from hyperspectral sensors), for instance, measure the reflectance of the landscape as a continuous spectral pattern for each pixel of an image built from dozens to hundreds of narrow spectral bands. This offers a high potential for measuring unique spectral responses of different ecological conditions (Hennig et al., 2007). In this context, this study uses spectrometric datasets to distinguish between 14 different sediment classes obtained from a study area in the German Wadden Sea. A new feature selection method is proposed (Jeffries-Matusita distance based feature selection; JMDFS), which uses the Euclidean distance to eliminate the wavelengths with the most similar reflectance values in an iterative process. After each iteration, the separation capability is estimated by the Jeffries-Matusita distance (JMD). Two classes can be separated if the JMD is greater than 1.9; if fewer than four wavelengths remain, no separation can be assumed. The results of JMDFS are compared with a state-of-the-art feature selection method called ReliefF. Both methods improve the separation, achieving overall accuracies greater than 82%, which is 4%-13% better than the results obtained with all wavelengths. The number of remaining wavelengths is very diverse and ranges from 14 to 213 of 703. The advantage of JMDFS compared with ReliefF is clearly the processing time: JMDFS needs only 30 min for a final result, whereas ReliefF needs 30 min for one intermediate result, the process has to be repeated several times, and all intermediate results must be averaged to obtain a final result; the 50 iterations carried out in this study amount to about four days of processing.
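For illustration, a minimal two-class sketch of the two ingredients named above (Euclidean-similarity band elimination and a Jeffries-Matusita separability check under a Gaussian class assumption) is given below; the study itself handles 14 classes and pairwise comparisons, enough samples per class are assumed for the covariances to be well conditioned, and the thresholds simply restate the 1.9 and four-wavelength rules.

```python
import numpy as np

def jeffries_matusita(x1, x2):
    """JM distance between two classes (rows = samples, cols = bands),
    assuming Gaussian class distributions; values lie in [0, 2]."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    c1, c2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
    c = 0.5 * (c1 + c2)
    d = m1 - m2
    _, logdet_c = np.linalg.slogdet(c)
    _, logdet_1 = np.linalg.slogdet(c1)
    _, logdet_2 = np.linalg.slogdet(c2)
    b = d @ np.linalg.solve(c, d) / 8.0 + 0.5 * (logdet_c - 0.5 * (logdet_1 + logdet_2))
    return 2.0 * (1.0 - np.exp(-b))

def jmdfs_two_class(x1, x2, min_bands=4, jm_threshold=1.9):
    """Iteratively drop the band whose class-mean reflectances are most similar
    (Euclidean sense) while the remaining bands keep the classes separable."""
    bands = list(range(x1.shape[1]))
    while len(bands) > min_bands:
        sim = np.abs(x1[:, bands].mean(axis=0) - x2[:, bands].mean(axis=0))
        candidate = [b for i, b in enumerate(bands) if i != int(np.argmin(sim))]
        if jeffries_matusita(x1[:, candidate], x2[:, candidate]) < jm_threshold:
            break                     # removing this band would hurt separability
        bands = candidate
    return bands
```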
ERIC Educational Resources Information Center
Mozelius, Peter; Hettiarachchi, Enosha
2012-01-01
This paper describes the iterative development process of a Learning Object Repository (LOR), named eNOSHA. Discussions on a project for a LOR started at the e-Learning Centre (eLC) at The University of Colombo, School of Computing (UCSC) in 2007. The eLC has during the last decade been developing learning content for a nationwide e-learning…
Too Little Too Soon? Modeling the Risks of Spiral Development
2007-04-30
[Simulation output residue: a plot of work packages started and active over time (weeks 270-900) for the Requirements, Technology, Design, and Manufacturing phases of Iteration 1 in the Javelin calibration model.]
NASA Technical Reports Server (NTRS)
Koenig, Herbert A.; Chan, Kwai S.; Cassenti, Brice N.; Weber, Richard
1988-01-01
A unified numerical method for the integration of stiff time dependent constitutive equations is presented. The solution process is directly applied to a constitutive model proposed by Bodner. The theory confronts time dependent inelastic behavior coupled with both isotropic hardening and directional hardening behaviors. Predicted stress-strain responses from this model are compared to experimental data from cyclic tests on uniaxial specimens. An algorithm is developed for the efficient integration of the Bodner flow equation. A comparison is made with the Euler integration method. An analysis of computational time is presented for the three algorithms.
Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags
NASA Astrophysics Data System (ADS)
ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu
2017-05-01
The flow-shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received considerable attention, but the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms for the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution in roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution in less than 1% of the computational time of the traditional GA approach. The proposed research combines PFSP and non-PFSP with minimal and maximal time lag considerations, which provides an interesting viewpoint for industrial implementation.
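As a point of reference for the destruction-construction scheme underlying iterated greedy, a minimal permutation flow-shop sketch is given below. It omits the time-lag constraints that define IGTLP/IGTLNP and uses a plain makespan evaluation with improvement-only acceptance, so it is an illustrative simplification rather than the algorithms proposed in the paper.

```python
import random

def makespan(perm, p):
    """Permutation flow-shop makespan; p[j][m] = processing time of job j on machine m."""
    c = [0.0] * len(p[0])
    for j in perm:
        prev = 0.0
        for m in range(len(c)):
            prev = max(c[m], prev) + p[j][m]
            c[m] = prev
    return c[-1]

def best_insertion(partial, job, p):
    """Insert `job` at the position of the partial sequence that minimizes makespan."""
    best = None
    for pos in range(len(partial) + 1):
        cand = partial[:pos] + [job] + partial[pos:]
        cm = makespan(cand, p)
        if best is None or cm < best[1]:
            best = (cand, cm)
    return best

def iterated_greedy(p, d=2, n_iter=200, seed=0):
    rng = random.Random(seed)
    perm = list(range(len(p)))                 # simple initial order
    cmax = makespan(perm, p)
    for _ in range(n_iter):
        removed = rng.sample(perm, d)          # destruction
        partial = [j for j in perm if j not in removed]
        for job in removed:                    # greedy reconstruction
            partial, cand_cmax = best_insertion(partial, job, p)
        if cand_cmax < cmax:                   # accept improvements only
            perm, cmax = partial, cand_cmax
    return perm, cmax

# Example: four jobs on three machines.
print(iterated_greedy([[3, 2, 4], [2, 5, 1], [4, 1, 3], [2, 2, 2]]))
```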
The Physics Basis of ITER Confinement
NASA Astrophysics Data System (ADS)
Wagner, F.
2009-02-01
ITER will be the first fusion reactor, and the 50-year-old dream of fusion scientists will become reality. The quality of magnetic confinement will decide the success of ITER: directly, in the form of the confinement time, and indirectly, because it determines the plasma parameters and the fluxes that cross the separatrix and have to be handled externally by technical means. This lecture portrays some of the basic principles that govern plasma confinement, uses dimensionless scaling to set the limits for the predictions for ITER, an approach that also shows the limitations of those predictions, and briefly describes the major characteristics and physics behind the H-mode, the preferred confinement regime of ITER.
Evolutionary Software Development (Developpement Evolutionnaire de Logiciels)
2008-08-01
development processes. While this may be true, frequently it is not. MIL-STD-498 was explicitly introduced to encourage iterative development; ISO/IEC 12207 was carefully worded not to prohibit iterative development. Yet both standards were widely interpreted as requiring waterfall development, as
Bragg x-ray survey spectrometer for ITER.
Varshney, S K; Barnsley, R; O'Mullane, M G; Jakhar, S
2012-10-01
Several potential impurity ions in the ITER plasmas will lead to loss of confined energy through line and continuum emission. For real-time monitoring of impurities, a seven-channel Bragg x-ray spectrometer (XRCS survey) is considered. This paper presents the design and analysis of the spectrometer, including x-ray tracing by the Shadow-XOP code, sensitivity calculations for the reference H-mode plasma, and a neutronics assessment. The XRCS survey performance analysis shows that the ITER measurement requirements for impurity monitoring, at the minimum levels for low-Z to high-Z impurity ions and within a 10 ms integration time, can largely be met.
NASA Astrophysics Data System (ADS)
Malkawi, M. I.; Hawarey, M. M.
2012-04-01
Ever since the advent of the new era of presenting taught material in electronic form, international bodies, academic institutions, public sectors, and specialized entities in the private sector worldwide have persevered in exploiting the power of distance learning and e-Learning to disseminate knowledge in science and art using the ubiquitous World Wide Web and its supporting Internet and internetworking. Many science- and education-sponsoring bodies, like UNESCO, the European Community, and the World Bank, have been keen to fund multinational distance learning projects, many of which were directed at an educated audience in certain technical areas. Many countries around the Middle East have found interested European partners to launch funding requests and were generally successful in soliciting the needed funds from these bodies. Despite the wealth of knowledge they have generated in electronic form, many of the e-Learning schemes developed thus far have pursued their goals in the most conventional of ways; in essence, little innovation has been introduced to gain anything over traditional classroom lecturing, other than the advantage of simultaneous online testing and evaluation of the learned material by the examinees. In a sincere effort to change the way people look at the merits of e-Learning, and to get the most out of it, we propose a novel approach aimed at optimizing the learning outcomes of presented materials, which we shall henceforth call Iterative e-Learning. In Iterative e-Learning, as the name implies, a student uses some form of electronic media to access course material on a specific subject. At the end of each phase (section, chapter, session, etc.) on a specific topic, the student is assessed online on how much he/she has achieved before moving on. If the student fails on a particular topic, the e-Learning process takes the student to a more detailed and deeper level of the subject matter where he/she failed; once the student bridges the gap, the process carries him/her on to the next level of the subject matter being pursued. This process is carried out at all levels of learning: section, chapter, and course. A student may not progress to the next course level before passing the entire course at 80% or more. When repeating a section, chapter, or whole course, the student is required to score a higher percentage than the 80% required the first time around, say 5% more per iteration. Students going through Iterative e-Learning are allowed to move on to the next level of learning sooner than others if the time it takes them to learn a particular topic is shorter than an average student would normally need, provided, of course, they pass all the required assessment phases. Unlike traditional classroom or online lecturing, a student going through Iterative e-Learning is expected to achieve a quality of learning not attained via standard pedagogical methodologies.
With Iterative e-Learning, it is expected that poorly accredited academic institutions will be able, for the first time, to produce graduates capable of competing for highly paying jobs globally and of contributing to more industry-supported economies.
NASA Astrophysics Data System (ADS)
Swastika, Windra
2017-03-01
A system for recognizing the nominal value of money has been developed using an Artificial Neural Network (ANN). An ANN trained with back propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the numbers of iterations, weights and samples are large. One way to speed up the learning process is the Quickprop method. The Quickprop method is based on Newton's method and is able to speed up the learning process by assuming that the error E is a parabolic function of each weight adjustment. The goal is to minimize the error gradient (E'). In our system, we use 5 nominal values, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each nominal value was scanned and digitally processed. Forty patterns were used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy of predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the back propagation method. For 40 input patterns, the Quickprop method reached an error below 0.1 in only 20 iterations, while the back propagation method required 2000 iterations. The prediction accuracy for both methods is higher than 90%.
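For reference, a minimal sketch of Fahlman's Quickprop update rule mentioned above follows; the learning rate and growth factor are common defaults rather than values reported in the paper, and the gradients are assumed to come from ordinary back propagation.

```python
import numpy as np

def quickprop_update(grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One Quickprop weight update.

    grad, prev_grad : current and previous error gradients dE/dw
    prev_step       : previous weight change
    lr              : fallback gradient-descent learning rate
    mu              : maximum growth factor limiting the parabolic jump
    """
    step = np.zeros_like(grad)
    denom = prev_grad - grad
    # Secant (parabolic) estimate of the minimum where a previous step exists.
    active = (prev_step != 0) & (denom != 0)
    step[active] = (grad[active] / denom[active]) * prev_step[active]
    # Clip overly aggressive jumps, then fall back to plain gradient descent
    # for weights without a usable previous step.
    step = np.clip(step, -np.abs(mu * prev_step), np.abs(mu * prev_step))
    step[~active] = -lr * grad[~active]
    return step

# Usage inside a training loop (weights w, gradient g from back propagation):
#   step = quickprop_update(g, g_prev, step_prev); w += step
#   g_prev, step_prev = g, step
```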
Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams
NASA Astrophysics Data System (ADS)
Zhong, Xu; Kealy, Allison; Duckham, Matt
2016-05-01
Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n^3) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90% and approach an average-case time complexity of O(n^2) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
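To make the cost structure concrete, here is a minimal batch ordinary Kriging solve for a single query point; the spherical variogram, its parameters, and the synthetic data are illustrative assumptions. Factorizing the (n+1)x(n+1) system is exactly the O(n^3) work that the incremental and recursive strategies avoid redoing when source locations repeat between iterations.

```python
import numpy as np

def ok_weights(coords, query, variogram):
    """Solve the ordinary Kriging system for one query point and return the weights."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    a = np.empty((n + 1, n + 1))
    a[:n, :n] = variogram(d)                 # semivariogram block
    a[-1, :n] = a[:n, -1] = 1.0              # unbiasedness constraint
    a[-1, -1] = 0.0
    rhs = np.append(variogram(np.linalg.norm(coords - query, axis=1)), 1.0)
    sol = np.linalg.solve(a, rhs)            # the O(n^3) step
    return sol[:n]                           # drop the Lagrange multiplier

# Spherical variogram as an illustrative model (range 10, sill 1).
sph = lambda h: np.where(h < 10.0, 1.5 * h / 10.0 - 0.5 * (h / 10.0) ** 3, 1.0)

coords = np.random.default_rng(1).uniform(0, 100, size=(50, 2))
values = np.sin(coords[:, 0] / 20.0)
w = ok_weights(coords, np.array([50.0, 50.0]), sph)
estimate = w @ values
```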
Mixed Material Plasma-Surface Interactions in ITER: Recent Results from the PISCES Group
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tynan, George R.; Baldwin, Matthew; Doerner, Russell
This paper summarizes recent PISCES studies focused on the effects associated with mixed species plasmas that are similar in composition to what one might expect in ITER. Formation of nanometer scale whiskerlike features occurs in W surfaces exposed to pure He and mixed D/He plasmas and appears to be associated with the formation of He nanometer-scaled bubbles in the W surface. Studies of Be-W alloy formation in Be-seeded D plasmas suggest that this process may be important in ITER all metal wall operational scenarios. Studies also suggest that BeD formation via chemical sputtering of Be walls may be an important first wall erosion mechanism. D retention in ITER mixed materials has also been studied. The D release behavior from beryllium co-deposits does not appear to be a diffusion dominated process, but instead is consistent with thermal release from a number of variable trapping energy sites. As a result, the amount of tritium remaining in codeposits in ITER after baking will be determined by the maximum temperature achieved, rather than by the duration of the baking cycle.
Six sigma: process of understanding the control and capability of ranitidine hydrochloride tablet.
Chabukswar, Ar; Jagdale, Sc; Kuchekar, Bs; Joshi, Vd; Deshmukh, Gr; Kothawade, Hs; Kuckekar, Ab; Lokhande, Pd
2011-01-01
The process of understanding the control and capability (PUCC) is an iterative closed-loop process for continuous improvement. It covers the DMAIC toolkit in its three phases. PUCC is an iterative approach that rotates between the three pillars of process understanding, process control, and process capability, with each iteration resulting in a more capable and robust process. It is rightly said that being at the top is a marathon and not a sprint. The objective of the six sigma study of Ranitidine hydrochloride tablets is to achieve perfection in tablet manufacturing by reviewing the present robust manufacturing process and finding ways to improve and modify the process, so as to yield tablets that are defect-free and give greater customer satisfaction. The application of six sigma led to an improved process capability, with the sigma level of the process improving from 1.5 to 4; a higher yield due to reduced variation and a reduction of thick tablets; a reduction in packing line stoppages; a 50% reduction in re-work; a more standardized process with smooth flow and a change in the coating suspension reconstitution level (8% w/w); a large cost reduction of approximately Rs. 90 to 95 lakhs per annum; an improvement in overall efficiency of approximately 30%; and improved overall quality of the product.
NASA Astrophysics Data System (ADS)
Song, Yongchen; Hao, Min; Zhao, Yuechao; Zhang, Liang
2014-12-01
In this study, the dual-chamber pressure decay method and magnetic resonance imaging (MRI) were used to dynamically visualize the gas diffusion process in liquid-saturated porous media, and the concentration-distance relationships for gas diffusing into liquid-saturated porous media at different times were obtained by quantitative analysis of the MR images. A non-iterative finite volume method was successfully applied to calculate the local gas diffusion coefficient in liquid-saturated porous media. The results agreed very well with the conventional pressure decay method, demonstrating that the method is feasible for determining the local diffusion coefficient of gas in liquid-saturated porous media at different times during the diffusion process.
New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program
NASA Technical Reports Server (NTRS)
Strain, D.; Levy, R.
1986-01-01
The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.
Timing Recovery Strategies in Magnetic Recording Systems
NASA Astrophysics Data System (ADS)
Kovintavewat, Piya
At some point in a digital communications receiver, the received analog signal must be sampled. Good performance requires that these samples be taken at the right times. The process of synchronizing the sampler with the received analog waveform is known as timing recovery. Conventional timing recovery techniques perform well only when operating at high signal-to-noise ratio (SNR). Nonetheless, iterative error-control codes allow reliable communication at very low SNR, where conventional techniques fail. This paper provides a detailed review on the timing recovery strategies based on per-survivor processing (PSP) that are capable of working at low SNR. We also investigate their performance in magnetic recording systems because magnetic recording is a primary method of storage for a variety of applications, including desktop, mobile, and server systems. Results indicate that the timing recovery strategies based on PSP perform better than the conventional ones and are thus worth being employed in magnetic recording systems.
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
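Since the evaluation above hinges on cumulative DVHs computed from voxelized dose maps, a minimal sketch of that computation is included here; the binning scheme is a generic choice, not the one used in the paper.

```python
import numpy as np

def cumulative_dvh(dose, mask, n_bins=200):
    """Cumulative dose-volume histogram for one organ.

    dose : 3D array of absorbed dose (or dose rate) per voxel
    mask : boolean 3D array selecting the organ's voxels
    Returns dose bin edges and the fraction of organ volume receiving at least
    that dose.
    """
    d = dose[mask].ravel()
    edges = np.linspace(0.0, d.max(), n_bins)
    volume_fraction = np.array([(d >= e).mean() for e in edges])
    return edges, volume_fraction
```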
Wang, Fang; Ouyang, Guang; Zhou, Changsong; Wang, Suiping
2015-01-01
A number of studies have explored the time course of Chinese semantic and syntactic processing. However, whether syntactic processing occurs earlier than semantics during Chinese sentence reading is still under debate. To further explore this issue, an event-related potentials (ERPs) experiment was conducted on 21 native Chinese speakers who read individually-presented Chinese simple sentences (NP1+VP+NP2) word-by-word for comprehension and made semantic plausibility judgments. The transitivity of the verbs was manipulated to form three types of stimuli: congruent sentences (CON), sentences with a semantically violated NP2 following a transitive verb (semantic violation, SEM), and sentences with a semantically violated NP2 following an intransitive verb (combined semantic and syntactic violation, SEM+SYN). The ERPs evoked from the target NP2 were analyzed by using the Residue Iteration Decomposition (RIDE) method to reconstruct the ERP waveform blurred by trial-to-trial variability, as well as by using the conventional ERP method based on stimulus-locked averaging. The conventional ERP analysis showed that, compared with the critical words in CON, those in SEM and SEM+SYN elicited an N400-P600 biphasic pattern. The N400 effects in both violation conditions were of similar size and distribution, but the P600 in SEM+SYN was bigger than that in SEM. Compared with the conventional ERP analysis, RIDE analysis revealed a larger N400 effect and an earlier P600 effect (in the time window of 500-800 ms instead of 570-810ms). Overall, the combination of conventional ERP analysis and the RIDE method for compensating for trial-to-trial variability confirmed the non-significant difference between SEM and SEM+SYN in the earlier N400 time window. Converging with previous findings on other Chinese structures, the current study provides further precise evidence that syntactic processing in Chinese does not occur earlier than semantic processing.
NASA Astrophysics Data System (ADS)
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes including two widely used methods, namely, a modified version of the implicit Newmark with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
Sparse magnetic resonance imaging reconstruction using the Bregman iteration
NASA Astrophysics Data System (ADS)
Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo
2013-01-01
Magnetic resonance imaging (MRI) reconstruction needs many samples that are sequentially sampled by using phase encoding gradients in an MRI system. This is directly connected to the scan time of the MRI system and takes a long time. Therefore, many researchers have studied ways to reduce the scan time, especially compressed sensing (CS), which is used for sparse images and reconstruction from fewer sampled datasets when the k-space is not fully sampled. Recently, an iterative technique based on the Bregman method was developed for denoising. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we studied sparse sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. The image was obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) with a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated the root-mean-square-error (RMSE) values from error images that were obtained using various numbers of Bregman iterations. Our reconstructed images using the Bregman iteration for sparse sampling images showed good results compared with the original images. Moreover, the RMSE values showed that the sparse reconstructed phantom and the human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction methods using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.
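To illustrate the "add back the residual" idea behind Bregman iteration, here is a generic TV-denoising sketch (not the authors' reconstruction code: the k-space sampling operator is omitted, and scikit-image's Chambolle TV solver stands in for the inner TV subproblem).

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def bregman_tv_denoise(noisy, weight=0.1, n_iter=5):
    """Bregman iteration for TV denoising: repeatedly add the accumulated residual
    back to the data and re-solve the TV subproblem, which gradually recovers the
    fine-scale structure that a single TV pass tends to smooth away."""
    noisy = np.asarray(noisy, dtype=float)
    residual = np.zeros_like(noisy)
    u = np.zeros_like(noisy)
    for _ in range(n_iter):
        u = denoise_tv_chambolle(noisy + residual, weight=weight)
        residual += noisy - u          # Bregman update: accumulate the residual
    return u
```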
Real-time photo-magnetic imaging.
Nouizi, Farouk; Erkol, Hakan; Luk, Alex; Unlu, Mehmet B; Gulsen, Gultekin
2016-10-01
We previously introduced a new high resolution diffuse optical imaging modality termed photo-magnetic imaging (PMI). PMI irradiates the object under investigation with near-infrared light and monitors the variations of temperature using magnetic resonance thermometry (MRT). In this paper, we present a real-time PMI image reconstruction algorithm that uses analytic methods to solve the forward problem and assemble the Jacobian matrix much faster. The new algorithm is validated using real MRT-measured temperature maps. In fact, it accelerates the reconstruction process by more than 250 times compared to a single iteration of the FEM-based algorithm, which opens the possibility of real-time PMI.
A Fast, Minimalist Search Tool for Remote Sensing Data
NASA Astrophysics Data System (ADS)
Lynnes, C. S.; Macharrie, P. G.; Elkins, M.; Joshi, T.; Fenichel, L. H.
2005-12-01
We present a tool that emphasizes speed and simplicity in searching remotely sensed Earth Science data. The tool, nicknamed "Mirador" (Spanish for a scenic overlook), provides only four free-text search form fields, for Keywords, Location, Data Start and Data Stop. This contrasts with many current Earth Science search tools that offer highly structured interfaces in order to ensure precise, non-zero results. The disadvantages of the structured approach lie in its complexity and resultant learning curve, as well as the time it takes to formulate and execute the search, thus discouraging iterative discovery. On the other hand, the success of the basic Google search interface shows that many users are willing to forgo high search precision if the search process is fast enough to enable rapid iteration. Therefore, we employ several methods to increase the speed of search formulation and execution. Search formulation is expedited by the minimalist search form, with only one required field. Also, a gazetteer enables the use of geographic terms as shorthand for latitude/longitude coordinates. The search execution is accelerated by initially presenting dataset results (returned from a Google Mini appliance) with an estimated number of "hits" for each dataset based on the user's space-time constraints. The more costly file-level search is executed against a PostgreSQL database only when the user "drills down", and then covering only the fraction of the time period needed to return the next page of results. The simplicity of the search form makes the tool easy to learn and use, and the speed of the searches enables an iterative form of data discovery.
Reduction of asymmetric wall force in ITER disruptions with fast current quench
NASA Astrophysics Data System (ADS)
Strauss, H.
2018-02-01
One of the problems caused by disruptions in tokamaks is the asymmetric electromechanical force produced in conducting structures surrounding the plasma. The asymmetric wall force in ITER asymmetric vertical displacement event (AVDE) disruptions is calculated in nonlinear 3D MHD simulations. It is found that the wall force can vary by almost an order of magnitude, depending on the ratio of the current quench time to the resistive wall magnetic penetration time. In ITER, this ratio is relatively low, resulting in a low asymmetric wall force. In JET, this ratio is relatively high, resulting in a high asymmetric wall force. Previous extrapolations based on JET measurements have greatly overestimated the ITER wall force. It is shown that there are two limiting regimes of AVDEs, and it is explained why the asymmetric wall force is different in the two limits.
Advanced density profile reflectometry; the state-of-the-art and measurement prospects for ITER
NASA Astrophysics Data System (ADS)
Doyle, E. J.
2006-10-01
Dramatic progress in millimeter-wave technology has allowed the realization of a key goal for ITER diagnostics, the routine measurement of the plasma density profile from millimeter-wave radar (reflectometry) measurements. In reflectometry, the measured round-trip group delay of a probe beam reflected from a plasma cutoff is used to infer the density distribution in the plasma. Reflectometer systems implemented by UCLA on a number of devices employ frequency-modulated continuous-wave (FM-CW), ultrawide-bandwidth, high-resolution radar systems. One such system on DIII-D has routinely demonstrated measurements of the density profile over a range of electron density of 0-6.4×10^19 m^-3, with ~25 μs time and ~4 mm radial resolution, meeting key ITER requirements. This progress in performance was made possible by multiple advances in the areas of millimeter-wave technology, novel measurement techniques, and improved understanding, including: (i) fast sweep, solid-state, wide bandwidth sources and power amplifiers, (ii) dual polarization measurements to expand the density range, (iii) adaptive radar-based data analysis with parallel processing on a Unix cluster, (iv) high memory depth data acquisition, and (v) advances in full wave code modeling. The benefits of advanced system performance will be illustrated using measurements from a wide range of phenomena, including ELM and fast-ion driven mode dynamics, L-H transition studies and plasma-wall interaction. The measurement capabilities demonstrated by these systems provide a design basis for the development of the main ITER profile reflectometer system. This talk will explore the extent to which these reflectometer system designs, results and experience can be translated to ITER, and will identify what new studies and experimental tests are essential.
Data and Workflow Management Challenges in Global Adjoint Tomography
NASA Astrophysics Data System (ADS)
Lei, W.; Ruan, Y.; Smith, J. A.; Modrak, R. T.; Orsvuran, R.; Krischer, L.; Chen, Y.; Balasubramanian, V.; Hill, J.; Turilli, M.; Bozdag, E.; Lefebvre, M. P.; Jha, S.; Tromp, J.
2017-12-01
It is crucial to take the complete physics of wave propagation into account in seismic tomography to further improve the resolution of tomographic images. The adjoint method is an efficient way of incorporating 3D wave simulations in seismic tomography. However, global adjoint tomography is computationally expensive, requiring thousands of wavefield simulations and massive data processing. Through our collaboration with the Oak Ridge National Laboratory (ORNL) computing group and an allocation on Titan, ORNL's GPU-accelerated supercomputer, we are now performing our global inversions by assimilating waveform data from over 1,000 earthquakes. The first challenge we encountered is dealing with the sheer amount of seismic data. Data processing based on conventional data formats and processing tools (such as SAC), which are not designed for parallel systems, becomes our major bottleneck. To facilitate the data processing procedures, we designed the Adaptive Seismic Data Format (ASDF) and developed a set of Python-based processing tools to replace legacy FORTRAN-based software. These tools greatly enhance reproducibility and accountability while taking full advantage of highly parallel systems and showing superior scaling on modern computational platforms. The second challenge is that the data processing workflow contains more than 10 sub-procedures, making it delicate to handle and prone to human mistakes. To reduce human intervention as much as possible, we are developing a framework specifically designed for seismic inversion based on state-of-the-art workflow management research, specifically the Ensemble Toolkit (EnTK), in collaboration with the RADICAL team from Rutgers University. Using the initial developments of the EnTK, we are able to utilize the full computing power of the data processing cluster RHEA at ORNL while keeping human interaction to a minimum and greatly reducing the data processing time. Thanks to all the improvements, we are now able to perform iterations fast enough on a dataset of more than 1,000 earthquakes. Starting from model GLAD-M15 (Bozdag et al., 2016), an elastic 3D model with a transversely isotropic upper mantle, we have successfully performed 5 iterations. Our goal is to finish 10 iterations, i.e., generating GLAD-M25, by the end of this year.
Improvements in surface singularity analysis and design methods. [applicable to airfoils
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1979-01-01
The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
Sojourning with the Homogeneous Poisson Process.
Liu, Piaomu; Peña, Edsel A
2016-01-01
In this pedagogical article, distributional properties, some surprising, pertaining to the homogeneous Poisson process (HPP), when observed over a possibly random window, are presented. Properties of the gap-time that covered the termination time and the correlations among gap-times of the observed events are obtained. Inference procedures, such as estimation and model validation, based on event occurrence data over the observation window, are also presented. We envision that through the results in this paper, a better appreciation of the subtleties involved in the modeling and analysis of recurrent events data will ensue, since the HPP is arguably one of the simplest among recurrent event models. In addition, the use of the theorem of total probability, Bayes theorem, the iterated rules of expectation, variance and covariance, and the renewal equation could be illustrative when teaching distribution theory, mathematical statistics, and stochastic processes at both the undergraduate and graduate levels. This article is targeted towards both instructors and students.
Real-time stereo matching using orthogonal reliability-based dynamic programming.
Gong, Minglun; Yang, Yee-Hong
2007-03-01
A novel algorithm is presented in this paper for estimating reliable stereo matches in real time. Based on the dynamic programming-based technique we previously proposed, the new algorithm can generate semi-dense disparity maps using as few as two dynamic programming passes. The iterative best path tracing process used in traditional dynamic programming is replaced by a local minimum searching process, making the algorithm suitable for parallel execution. Most computations are implemented on programmable graphics hardware, which improves the processing speed and makes real-time estimation possible. The experiments on the four new Middlebury stereo datasets show that, on an ATI Radeon X800 card, the presented algorithm can produce reliable matches for approximately 60%-80% of pixels at a rate of approximately 10-20 frames per second. If needed, the algorithm can be configured for generating full density disparity maps.
Using LDPC Code Constraints to Aid Recovery of Symbol Timing
NASA Technical Reports Server (NTRS)
Jones, Christopher; Villasnor, John; Lee, Dong-U; Vales, Esteban
2008-01-01
A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/ receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset. 
The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations showed performance close to that obtained in the ideal case of perfect timing in the receiver.
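To illustrate the core idea that the number of satisfied parity checks peaks near the correct timing, the toy sketch below (hypothetical code, not the described receiver: the parity-check matrix, sampling model, and search grid are simplified stand-ins) counts satisfied checks of hard decisions taken at a set of candidate sample offsets and picks the best one.

```python
import numpy as np

def satisfied_checks(H, hard_bits):
    """Number of parity-check equations satisfied by a vector of hard decisions."""
    syndrome = H.dot(hard_bits) % 2
    return int(np.sum(syndrome == 0))

def coarse_timing_search(H, rx_samples, sps, offsets):
    """Pick the sample offset whose hard decisions satisfy the most checks.

    rx_samples : received BPSK samples, sps samples per symbol
    offsets    : candidate integer sample offsets to test
    """
    n = H.shape[1]
    scores = []
    for off in offsets:
        symbols = rx_samples[off:off + n * sps:sps]      # one sample per symbol
        hard_bits = (symbols < 0).astype(int)            # BPSK mapping: +1 -> 0, -1 -> 1
        scores.append(satisfied_checks(H, hard_bits))
    return offsets[int(np.argmax(scores))], scores
```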
Designing for Temporal Awareness: The Role of Temporality in Time-Critical Medical Teamwork
Kusunoki, Diana S.; Sarcevic, Aleksandra
2016-01-01
This paper describes the role of temporal information in emergency medical teamwork and how time-based features can be designed to support the temporal awareness of clinicians in this fast-paced and dynamic environment. Engagement in iterative design activities with clinicians over the course of two years revealed a strong need for time-based features and mechanisms, including timestamps for tasks based on absolute time and automatic stopclocks measuring time by counting up since task performance. We describe in detail the aspects of temporal awareness central to clinicians’ awareness needs and then provide examples of how we addressed these needs through the design of a shared information display. As an outcome of this process, we define four types of time representation techniques to facilitate the design of time-based features: (1) timestamps based on absolute time, (2) timestamps relative to the process start time, (3) time since task performance, and (4) time until the next required task. PMID:27478880
Iterative integral parameter identification of a respiratory mechanics model.
Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey
2012-07-18
Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
NASA Astrophysics Data System (ADS)
Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.
2009-12-01
A new, approximate block Newton (ABN) method is derived and tested for the coupled solution of nonlinear models, each of which is treated as a modular, black box. Such an approach is motivated by a desire to maintain software flexibility without sacrificing solution efficiency or robustness. Though block Newton methods of similar type have been proposed and studied, we present a unique derivation and use it to sort out some of the more confusing points in the literature. In particular, we show that our ABN method behaves like a Newton iteration preconditioned by an inexact Newton solver derived from subproblem Jacobians. The method is demonstrated on several conjugate heat transfer problems modeled after melt crystal growth processes. These problems are represented by partitioned spatial regions, each modeled by independent heat transfer codes and linked by temperature and flux matching conditions at the boundaries common to the partitions. Whereas a typical block Gauss-Seidel iteration fails about half the time for the model problem, quadratic convergence is achieved by the ABN method under all conditions studied here. Additional performance advantages over existing methods are demonstrated and discussed.
Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.
Rao, Ying; Wang, Yanghua
2017-08-17
In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy used to reduce the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot to significantly reduce the number of wavefield simulations in the inversion. However, this process will induce instability in the iterative inversion regardless of whether it uses a robust limited-memory BFGS (L-BFGS) algorithm. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it will generate a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the implementation of the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-coding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
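A minimal sketch of the restart structure described above is given below. It is illustrative only: it assumes a user-supplied misfit function `encoded_misfit(model, codes)` returning the objective value and its gradient, uses SciPy's L-BFGS-B as the L-BFGS solver, and simplifies the scheme so that each segment uses one freshly drawn random encoding (the paper keeps the encoding fixed for the first few iterations of a segment before re-coding).

```python
import numpy as np
from scipy.optimize import minimize

def restarted_lbfgs(encoded_misfit, model0, n_shots, n_segments=10, iters_per_segment=5):
    """Restart L-BFGS every segment with a fresh random shot-encoding so that
    crosstalk between shots is suppressed, while curvature information stays
    consistent within each segment."""
    model = np.asarray(model0, dtype=float).copy()
    rng = np.random.default_rng(0)
    for _ in range(n_segments):
        codes = rng.choice([-1.0, 1.0], size=n_shots)     # new random +/-1 encoding
        result = minimize(encoded_misfit, model, args=(codes,),
                          jac=True, method="L-BFGS-B",
                          options={"maxiter": iters_per_segment})
        model = result.x                                   # restart discards old curvature pairs
    return model
```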
Representation and alignment of sung queries for music information retrieval
NASA Astrophysics Data System (ADS)
Adams, Norman H.; Wakefield, Gregory H.
2005-09-01
The pursuit of robust and rapid query-by-humming systems, which search melodic databases using sung queries, is a common theme in music information retrieval. The retrieval aspect of this database problem has received considerable attention, whereas the front-end processing of sung queries and the data structure to represent melodies has been based on musical intuition and historical momentum. The present work explores three time series representations for sung queries: a sequence of notes, a "smooth" pitch contour, and a sequence of pitch histograms. The performance of the three representations is compared using a collection of naturally sung queries. It is found that the most robust performance is achieved by the representation with highest dimension, the smooth pitch contour, but that this representation presents a formidable computational burden. For all three representations, it is necessary to align the query and target in order to achieve robust performance. The computational cost of the alignment is quadratic, hence it is necessary to keep the dimension small for rapid retrieval. Accordingly, iterative deepening is employed to achieve both robust performance and rapid retrieval. Finally, the conventional iterative framework is expanded to adapt the alignment constraints based on previous iterations, further expediting retrieval without degrading performance.
Use of general purpose graphics processing units with MODFLOW
Hughes, Joseph D.; White, Jeremy T.
2013-01-01
To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1976-01-01
An iterative method for numerically solving the time independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C°-cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant achievement of economy in iterative convergence rate over finite element and finite difference models which employ the customary time dependent equations and asymptotic time marching procedure to steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time marching finite element and finite difference solution techniques.
ERIC Educational Resources Information Center
Camp, Dane R.
1991-01-01
After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…
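A brief sketch of the 2-D iteration that the article starts from is shown below (illustrative Python rather than the article's PASCAL): each segment is replaced by four shorter segments, with the middle third pushed outward to form the characteristic peak.

```python
import numpy as np

def koch_iterate(points):
    """One Koch iteration: replace each segment (p, q) with four shorter segments."""
    new_points = []
    for p, q in zip(points[:-1], points[1:]):
        p, q = np.asarray(p, float), np.asarray(q, float)
        a = p + (q - p) / 3.0
        b = p + 2.0 * (q - p) / 3.0
        d = b - a
        # rotate the middle third by +60 degrees to form the outward peak
        peak = a + np.array([0.5 * d[0] - np.sqrt(3) / 2 * d[1],
                             np.sqrt(3) / 2 * d[0] + 0.5 * d[1]])
        new_points.extend([p, a, peak, b])
    new_points.append(np.asarray(points[-1], float))
    return new_points

curve = [(0.0, 0.0), (1.0, 0.0)]
for _ in range(4):            # four iterations of the Koch construction
    curve = koch_iterate(curve)
```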
Comparison of a 3-D GPU-Assisted Maxwell Code and Ray Tracing for Reflectometry on ITER
NASA Astrophysics Data System (ADS)
Gady, Sarah; Kubota, Shigeyuki; Johnson, Irena
2015-11-01
Electromagnetic wave propagation and scattering in magnetized plasmas are important diagnostics for high temperature plasmas. 1-D and 2-D full-wave codes are standard tools for measurements of the electron density profile and fluctuations; however, ray tracing results have shown that beam propagation in tokamak plasmas is inherently a 3-D problem. The GPU-Assisted Maxwell Code utilizes the FDTD (Finite-Difference Time-Domain) method for solving the Maxwell equations with the cold plasma approximation in a 3-D geometry. Parallel processing with GPGPU (General-Purpose computing on Graphics Processing Units) is used to accelerate the computation. Previously, we reported on initial comparisons of the code results to 1-D numerical and analytical solutions, where the size of the computational grid was limited by the on-board memory of the GPU. In the current study, this limitation is overcome by using domain decomposition and an additional GPU. As a practical application, this code is used to study the current design of the ITER Low Field Side Reflectometer (LSFR) for the Equatorial Port Plug 11 (EPP11). A detailed examination of Gaussian beam propagation in the ITER edge plasma will be presented, as well as comparisons with ray tracing. This work was made possible by funding from the Department of Energy for the Summer Undergraduate Laboratory Internship (SULI) program. This work is supported by the US DOE Contract No.DE-AC02-09CH11466 and DE-FG02-99-ER54527.
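For readers unfamiliar with the FDTD kernel that such a GPU code parallelizes, here is a minimal 1-D vacuum Yee-scheme sketch (a textbook illustration in Python, not the 3-D cold-plasma GPU code described above; the grid size, source, and Courant number are arbitrary assumptions).

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=600):
    """1-D FDTD (Yee scheme) in vacuum with a soft Gaussian source, normalized units."""
    ez = np.zeros(n_cells)      # electric field on integer grid points
    hy = np.zeros(n_cells)      # magnetic field on half-integer grid points
    courant = 0.5               # Courant number c*dt/dx
    for t in range(n_steps):
        hy[:-1] += courant * (ez[1:] - ez[:-1])         # update H from the curl of E
        ez[1:]  += courant * (hy[1:] - hy[:-1])         # update E from the curl of H
        ez[50]  += np.exp(-((t - 60) / 20.0) ** 2)      # soft Gaussian source
    return ez

field = fdtd_1d()
```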
NASA Astrophysics Data System (ADS)
Chun, Tae Yoon; Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho
2018-06-01
In this paper, we propose two multirate generalised policy iteration (GPI) algorithms applied to discrete-time linear quadratic regulation problems. The proposed algorithms are extensions of the existing GPI algorithm that consists of the approximate policy evaluation and policy improvement steps. The two proposed schemes, named heuristic dynamic programming (HDP) and dual HDP (DHP), based on multirate GPI, use multi-step estimation (M-step Bellman equation) at the approximate policy evaluation step for estimating the value function and its gradient called costate, respectively. Then, we show that these two methods with the same update horizon can be considered equivalent in the iteration domain. Furthermore, monotonically increasing and decreasing convergences, so called value iteration (VI)-mode and policy iteration (PI)-mode convergences, are proved to hold for the proposed multirate GPIs. Further, general convergence properties in terms of eigenvalues are also studied. The data-driven online implementation methods for the proposed HDP and DHP are demonstrated and finally, we present the results of numerical simulations performed to verify the effectiveness of the proposed methods.
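To make the VI-mode convergence concrete, the following sketch iterates the standard Riccati map for a discrete-time LQR problem with known dynamics until the value-function matrix converges. It is a plain value-iteration illustration under assumed system matrices, not the authors' data-driven multirate GPI implementation.

```python
import numpy as np

def lqr_value_iteration(A, B, Q, R, n_iter=500, tol=1e-10):
    """Value iteration for discrete-time LQR:
    P_{k+1} = Q + A'P_k A - A'P_k B (R + B'P_k B)^{-1} B'P_k A, starting from P_0 = 0."""
    P = np.zeros_like(Q)
    K = np.zeros((B.shape[1], A.shape[0]))
    for _ in range(n_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # greedy policy for current P
        P_next = Q + A.T @ P @ (A - B @ K)
        if np.max(np.abs(P_next - P)) < tol:
            return P_next, K
        P = P_next
    return P, K

# Hypothetical 2-state, 1-input system
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
P, K = lqr_value_iteration(A, B, Q, R)
```

Starting from the zero matrix, the iterates increase monotonically toward the Riccati solution, which is the VI-mode behavior referred to above.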
Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method
NASA Astrophysics Data System (ADS)
Sun, Yong; Meng, Zhaohai; Li, Fengting
2018-03-01
Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. Generally, inversion processing is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest descent gradient algorithm, and the conjugate gradient algorithm, there are clear advantages to the non-monotone iterative gradient-descent algorithm. Simulated and field FTG data were used to demonstrate the application value of this new fast inversion method.
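As a generic illustration of the non-monotone idea (a Barzilai-Borwein step accepted by a Grippo-style non-monotone rule on a toy quadratic; this is a standard construction and an assumption here, not the authors' FTG inversion code):

```python
import numpy as np

def nonmonotone_gradient_descent(f, grad, x0, n_iter=200, memory=10, sigma=1e-4):
    """Gradient descent with Barzilai-Borwein steps accepted by a non-monotone rule:
    the new value only has to beat the worst of the last few objective values."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    alpha = 1.0
    history = [f(x)]
    for _ in range(n_iter):
        step = alpha
        f_ref = max(history[-memory:])              # non-monotone reference value
        while f(x - step * g) > f_ref - sigma * step * g.dot(g):
            step *= 0.5                             # backtrack only against the reference
        x_new = x - step * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        alpha = s.dot(s) / max(s.dot(y), 1e-12)     # Barzilai-Borwein step length
        x, g = x_new, g_new
        history.append(f(x))
    return x

# Toy quadratic misfit as a stand-in for a regularized inversion objective
Amat = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ Amat @ x
grad = lambda x: Amat @ x
x_opt = nonmonotone_gradient_descent(f, grad, np.array([1.0, 1.0, 1.0]))
```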
PARALLELISATION OF THE MODEL-BASED ITERATIVE RECONSTRUCTION ALGORITHM DIRA.
Örtenberg, A; Magnusson, M; Sandborg, M; Alm Carlsson, G; Malusek, A
2016-06-01
New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelisation of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelisation of the model-based iterative reconstruction algorithm DIRA with the aim to significantly shorten the code's execution time. Selected routines were parallelised using OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelisation of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelisation with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause was explained.
Identification of stable areas in unreferenced laser scans for automated geomorphometric monitoring
NASA Astrophysics Data System (ADS)
Wujanz, Daniel; Avian, Michael; Krueger, Daniel; Neitzel, Frank
2018-04-01
Current research questions in the field of geomorphology focus on the impact of climate change on several processes subsequently causing natural hazards. Geodetic deformation measurements are a suitable tool to document such geomorphic mechanisms, e.g. by capturing a region of interest with terrestrial laser scanners which results in a so-called 3-D point cloud. The main problem in deformation monitoring is the transformation of 3-D point clouds captured at different points in time (epochs) into a stable reference coordinate system. In this contribution, a surface-based registration methodology is applied, termed the iterative closest proximity algorithm (ICProx), that solely uses point cloud data as input, similar to the iterative closest point algorithm (ICP). The aim of this study is to automatically classify deformations that occurred at a rock glacier and an ice glacier, as well as in a rockfall area. For every case study, two epochs were processed, while the datasets notably differ in terms of geometric characteristics, distribution and magnitude of deformation. In summary, the ICProx algorithm's classification accuracy is 70 % on average in comparison to reference data.
The application of contraction theory to an iterative formulation of electromagnetic scattering
NASA Technical Reports Server (NTRS)
Brand, J. C.; Kauffman, J. F.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
Fast generating Greenberger-Horne-Zeilinger state via iterative interaction pictures
NASA Astrophysics Data System (ADS)
Huang, Bi-Hua; Chen, Ye-Hong; Wu, Qi-Cheng; Song, Jie; Xia, Yan
2016-10-01
We delve a little deeper into the construction of shortcuts to adiabatic passage for three-level systems by the iterative interaction picture (multiple Schrödinger dynamics). As an application example, we use the deduced iteration-based shortcuts to rapidly generate the Greenberger-Horne-Zeilinger (GHZ) state in a three-atom system with the help of quantum Zeno dynamics. Numerical simulation shows that the dynamics designed by the iterative picture method is physically feasible and that the shortcut scheme performs much better than that using conventional adiabatic passage techniques. Also, the influences of various decoherence processes are discussed by numerical simulation, and the results prove that the scheme is fast and robust against decoherence and operational imperfection.
An iterative solver for the 3D Helmholtz equation
NASA Astrophysics Data System (ADS)
Belonosov, Mikhail; Dmitriev, Maxim; Kostin, Victor; Neklyudov, Dmitry; Tcheverda, Vladimir
2017-09-01
We develop a frequency-domain iterative solver for numerical simulation of acoustic waves in 3D heterogeneous media. It is based on the application of a unique preconditioner to the Helmholtz equation that ensures convergence for Krylov subspace iteration methods. Effective inversion of the preconditioner involves the Fast Fourier Transform (FFT) and numerical solution of a series of boundary value problems for ordinary differential equations. Matrix-by-vector multiplication for iterative inversion of the preconditioned matrix involves inversion of the preconditioner and pointwise multiplication of grid functions. Our solver has been verified by benchmarking against exact solutions and a time-domain solver.
Method and apparatus for determining and utilizing a time-expanded decision network
NASA Technical Reports Server (NTRS)
de Weck, Olivier (Inventor); Silver, Matthew (Inventor)
2012-01-01
A method, apparatus and computer program for determining and utilizing a time-expanded decision network is presented. A set of potential system configurations is defined. Next, switching costs are quantified to create a "static network" that captures the difficulty of switching among these configurations. A time-expanded decision network is provided by expanding the static network in time, including chance and decision nodes. Minimum cost paths through the network are evaluated under plausible operating scenarios. The set of initial design configurations are iteratively modified to exploit high-leverage switches and the process is repeated to convergence. Time-expanded decision networks are applicable, but not limited to, the design of systems, products, services and contracts.
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that with an appropriate combination of two optical simulation techniques, classical ray-tracing and the finite-difference time-domain method, an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
NASA Astrophysics Data System (ADS)
Muhiddin, F. A.; Sulaiman, J.
2017-09-01
The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation for solving the diffusion equation. From this approximation equation, it can be shown that the corresponding system of five-point approximation equations can be generated and then solved iteratively. In order to assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference method. Finally, from the numerical results obtained with the fourth-order CN discretization scheme, it can be concluded that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
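As a reminder of how an SOR sweep operates on a system of approximation equations, here is a generic dense-matrix sketch (illustrative only; the paper's system comes from the five-point Crank-Nicolson stencil, which is not reconstructed here, and the relaxation factor is an arbitrary assumption).

```python
import numpy as np

def sor_solve(A, b, omega=1.2, tol=1e-10, max_iter=10000):
    """Successive Over-Relaxation for A x = b (A assumed suitable, e.g. diagonally dominant)."""
    n = len(b)
    x = np.zeros(n)
    for iteration in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            # relaxed update: blend the old value with the Gauss-Seidel value
            x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, iteration
    return x, max_iter

# Small diagonally dominant test system
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = sor_solve(A, b)
```

With omega = 1 the same loop reduces to Gauss-Seidel, the reference method mentioned above.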
Ferguson, Melanie; Leighton, Paul; Brandreth, Marian; Wharrad, Heather
2018-05-02
To develop content for a series of interactive video tutorials (or reusable learning objects, RLOs) for first-time adult hearing aid users, to enhance knowledge of hearing aids and communication. RLO content was based on an electronically-delivered Delphi review, workshops, and iterative peer-review and feedback using a mixed-methods participatory approach. An expert panel of 33 hearing healthcare professionals, and workshops involving 32 hearing aid users and 11 audiologists. This ensured that social, emotional and practical experiences of the end-user alongside clinical validity were captured. Content for evidence-based, self-contained RLOs based on pedagogical principles was developed for delivery via DVD for television, PC or internet. Content was developed based on Delphi review statements about essential information that reached consensus (≥90%), visual representations of relevant concepts relating to hearing aids and communication, and iterative peer-review and feedback of content. This participatory approach recognises and involves key stakeholders in the design process to create content for a user-friendly multimedia educational intervention, to supplement the clinical management of first-time hearing aid users. We propose participatory methodologies are used in the development of content for e-learning interventions in hearing-related research and clinical practice.
Applying matching pursuit decomposition time-frequency processing to UGS footstep classification
NASA Astrophysics Data System (ADS)
Larsen, Brett W.; Chung, Hugh; Dominguez, Alfonso; Sciacca, Jacob; Kovvali, Narayan; Papandreou-Suppappola, Antonia; Allee, David R.
2013-06-01
The challenge of rapid footstep detection and classification in remote locations has long been an important area of study for defense technology and national security. Also, as the military seeks to create effective and disposable unattended ground sensors (UGS), computational complexity and power consumption have become essential considerations in the development of classification techniques. In response to these issues, a research project at the Flexible Display Center at Arizona State University (ASU) has experimented with footstep classification using the matching pursuit decomposition (MPD) time-frequency analysis method. The MPD provides a parsimonious signal representation by iteratively selecting matched signal components from a pre-determined dictionary. The resulting time-frequency representation of the decomposed signal provides distinctive features for different types of footsteps, including footsteps during walking or running activities. The MPD features were used in a Bayesian classification method to successfully distinguish between the different activities. The computational cost of the iterative MPD algorithm was reduced, without significant loss in performance, using a modified MPD with a dictionary consisting of signals matched to cadence temporal gait patterns obtained from real seismic measurements. The classification results were demonstrated with real data from footsteps under various conditions recorded using a low-cost seismic sensor.
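For readers unfamiliar with the matching pursuit step itself, a bare-bones sketch is shown below. It is generic code over a unit-norm dictionary matrix; the cadence-matched dictionary, time-frequency feature extraction, and Bayesian classifier used in the study are not reproduced.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    """Greedy matching pursuit: at each iteration pick the dictionary atom most
    correlated with the residual, record its coefficient, and subtract it.

    dictionary : (n_samples, n_total_atoms) matrix with unit-norm columns
    """
    residual = np.asarray(signal, dtype=float).copy()
    chosen, coeffs = [], []
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))   # best-matching atom
        c = correlations[k]
        residual -= c * dictionary[:, k]           # remove its contribution
        chosen.append(k)
        coeffs.append(c)
    return chosen, coeffs, residual
```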
İbiş, Birol
2014-01-01
This paper aims to obtain the approximate solution of time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of Riemann-Liouville derivative by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given and the results indicate that the FVIM is of high accuracy, more efficient, and more convenient for solving time FADEs. PMID:24578662
NASA Astrophysics Data System (ADS)
Prathabrao, M.; Nawawi, Azli; Sidek, Noor Azizah
2017-04-01
A Radio Frequency Identification (RFID) system has multiple benefits that can improve the operational efficiency of an organization, including the ability to record data systematically and quickly, to reduce human and system errors, and to update the database automatically and efficiently. Often, multiple readers are needed for installation, which makes the system more complex. As a result, an RFID network planning process is needed to ensure the RFID system works properly. The planning process is an optimization of reader placement and power adjustment, because the coordinates of each RFID reader must be determined. Therefore, nature-inspired algorithms are often used. In this study, the PSO algorithm is used because it has few parameters, runs quickly in simulation, and is easy to use and practical. However, PSO parameters must be adjusted correctly for robust and efficient use of PSO; otherwise performance is disrupted and the optimization results are poorer. To ensure the efficiency of PSO, this study examines the effects of two parameters on the performance of the PSO algorithm in RFID tag coverage optimization: the swarm size and the number of iterations. The study also recommends the most effective settings for both parameters, namely 200 for the number of iterations and 800 for the swarm size. Finally, the results of this study will enable PSO to operate more efficiently when optimizing RFID network planning.
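A compact PSO loop is sketched below to show where the two tuned parameters (swarm size and iteration count) enter; the fitness function, bounds, and inertia/acceleration coefficients are placeholders, not the RFID coverage objective or settings from the study.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=800, n_iterations=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """Basic particle swarm optimization; swarm size and iteration count are the
    two parameters examined in the study above."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    p_best = x.copy()
    p_best_val = np.array([fitness(p) for p in x])
    g_best = p_best[np.argmin(p_best_val)].copy()
    for _ in range(n_iterations):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # velocity update
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals < p_best_val
        p_best[improved], p_best_val[improved] = x[improved], vals[improved]
        g_best = p_best[np.argmin(p_best_val)].copy()
    return g_best

best = pso_minimize(lambda p: np.sum(p ** 2), dim=4)   # placeholder sphere objective
```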
Lipstein, Ellen A; Britto, Maria T
2015-08-01
In the context of pediatric chronic conditions, patients and families are called upon repeatedly to make treatment decisions. However, little is known about how their decision making evolves over time. The objective was to understand parents' processes for treatment decision making in pediatric chronic conditions. We conducted a qualitative, prospective longitudinal study using recorded clinic visits and individual interviews. After consent was obtained from health care providers, parents, and patients, clinic visits during which treatment decisions were expected to be discussed were video-recorded. Parents then participated in sequential telephone interviews about their decision-making experience. Data were coded by 2 people and analyzed using framework analysis with sequential, time-ordered matrices. 21 families, including 29 parents, participated in video-recording and interviews. We found 3 dominant patterns of decision evolution. Each consisted of a series of decision events, including conversations, disease flares, and researching of treatment options. Within all 3 patterns there were both constant and evolving elements of decision making, such as role perceptions and treatment expectations, respectively. After parents made a treatment decision, they immediately turned to the next decision related to the chronic condition, creating an iterative cycle. In this study, decision making was an iterative process occurring in 3 distinct patterns. Understanding these patterns and the varying elements of parents' decision processes is an essential step toward developing interventions that are appropriate to the setting and that capitalize on the skills families may develop as they gain experience with a chronic condition. Future research should also consider the role of children and adolescents in this decision process.
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias that existed in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
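To show the kind of iterative re-weighting that CMOSS modifies, here is a plain FOCUSS-style sketch, where the weight for each point comes only from that point's previous magnitude; CMOSS instead takes the weight from the point and its neighbors, which is not reproduced here, and the toy system below is an arbitrary assumption.

```python
import numpy as np

def focuss_like(A, b, n_iter=20, eps=1e-8):
    """Iteratively re-weighted minimum-norm solution of A x = b that sparsifies x.
    Each iteration's weights are taken from the previous solution's magnitudes."""
    x = np.linalg.pinv(A) @ b                 # start from the minimum-norm solution
    for _ in range(n_iter):
        w = np.abs(x) + eps                   # per-point weight (CMOSS would use neighbor information)
        x = w * (np.linalg.pinv(A * w) @ b)   # weighted minimum-norm update: x = W (A W)^+ b
    return x

# Hypothetical underdetermined system with a sparse ground truth
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
x_hat = focuss_like(A, A @ x_true)
```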
Validation of the United States Marine Corps Qualified Candidate Population Model
2003-03-01
time. Fields are created in the database to support this forecasting. User forms and a macro are programmed in Microsoft VBA to develop the...at 0.001. To accomplish 50,000 iterations of a minimization problem, this study wrote a macro in the VBA programming language that guides the solver...success in the commissioning process. To improve the diagnostics of this propensity model, other factors were considered as well. Applying SQL
Survival and in-vessel redistribution of beryllium droplets after ITER disruptions
NASA Astrophysics Data System (ADS)
Vignitchouk, L.; Ratynskaia, S.; Tolias, P.; Pitts, R. A.; De Temmerman, G.; Lehnen, M.; Kiramov, D.
2018-07-01
The motion and temperature evolution of beryllium droplets produced by first wall surface melting after ITER major disruptions and vertical displacement events mitigated during the current quench are simulated by the MIGRAINe dust dynamics code. These simulations employ an updated physical model which addresses droplet-plasma interaction in ITER-relevant regimes characterized by magnetized electron collection and thin-sheath ion collection, as well as electron emission processes induced by electron and high-Z ion impacts. The disruption scenarios have been implemented from DINA simulations of the time-evolving plasma parameters, while the droplet injection points are set to the first-wall locations expected to receive the highest thermal quench heat flux according to field line tracing studies. The droplet size, speed and ejection angle are varied within the range of currently available experimental and theoretical constraints, and the final quantities of interest are obtained by weighting single-trajectory output with different size and speed distributions. Detailed estimates of droplet solidification into dust grains and their subsequent deposition in the vessel are obtained. For representative distributions of the droplet injection parameters, the results indicate that at most a few percent of the beryllium mass initially injected is converted into solid dust, while the remaining mass either vaporizes or forms liquid splashes on the wall. Simulated in-vessel spatial distributions are also provided for the surviving dust, with the aim of providing guidance for planned dust diagnostic, retrieval and clean-up systems on ITER.
Iterative nonlinear joint transform correlation for the detection of objects in cluttered scenes
NASA Astrophysics Data System (ADS)
Haist, Tobias; Tiziani, Hans J.
1999-03-01
An iterative correlation technique with digital image processing in the feedback loop for the detection of small objects in cluttered scenes is proposed. A scanning aperture is combined with the method in order to improve the immunity against noise and clutter. Multiple reference objects or different views of one object are processed in parallel. We demonstrate the method by detecting a noisy and distorted face in a crowd with a nonlinear joint transform correlator.
Compensation for the phase-type spatial periodic modulation of the near-field beam at 1053 nm
NASA Astrophysics Data System (ADS)
Gao, Yaru; Liu, Dean; Yang, Aihua; Tang, Ruyu; Zhu, Jianqiang
2017-10-01
A phase-only spatial light modulator (SLM) is used to produce and compensate for the spatial periodic modulation (SPM) of a near-field beam in the near infrared at the 1053 nm wavelength, using an improved iterative weight-based method. The transmission characteristics of the incident beam are changed by the SLM to shape the spatial intensity of the output beam. Propagation and reverse propagation of the light in free space are the two key steps in the iterative process. The underlying theory is the beam angular spectrum transmission formula (ASTF) together with the principle of the iterative weight-based method. We make two improvements to the originally proposed iterative weight-based method: we select the appropriate parameter by choosing the minimum value of the output beam contrast degree, and we use the MATLAB built-in angle function to obtain the corresponding phase of the light wave function. The phase required to compensate for the intensity distribution of the incident SPM beam is obtained iteratively by this algorithm, which decreases the magnitude of the SPM of the intensity on the observation plane. The experimental results show that the phase-type SPM of the near-field beam is subject to certain restrictions, and we analyze some of the factors that make the results imperfect. The experimental results verify the possible applicability of this iterative weight-based method to compensate for the SPM of the near-field beam.
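A minimal angular-spectrum propagation step, the forward building block of such an iterative method, is sketched below (illustrative NumPy; the SLM model, the weighting rule, and the sampling parameters are assumptions, not the authors' implementation).

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z via the angular spectrum formula.
    A negative z gives the reverse-propagation step of the iterative method."""
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    kz_sq = k ** 2 - (2.0 * np.pi * fxx) ** 2 - (2.0 * np.pi * fyy) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))          # evanescent components dropped
    transfer = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Hypothetical use: propagate a Gaussian beam 0.1 m at 1053 nm with 10 micron sampling
n = 256
x = (np.arange(n) - n / 2) * 10e-6
xx, yy = np.meshgrid(x, x)
beam = np.exp(-(xx ** 2 + yy ** 2) / (0.5e-3) ** 2)
out = angular_spectrum_propagate(beam, 1053e-9, 10e-6, 0.1)
```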
Development of an evidence-based review with recommendations using an online iterative process.
Rudmik, Luke; Smith, Timothy L
2011-01-01
The practice of modern medicine is governed by evidence-based principles. Due to the plethora of medical literature, clinicians often rely on systematic reviews and clinical guidelines to summarize the evidence and provide best practices. Implementation of an evidence-based clinical approach can minimize variation in health care delivery and optimize the quality of patient care. This article reports a method for developing an "Evidence-based Review with Recommendations" using an online iterative process. The manuscript describes the following steps involved in this process: Clinical topic selection, Evidence-based review assignment, Literature review and initial manuscript preparation, Iterative review process with author selection, and Manuscript finalization. The goal of this article is to improve efficiency and increase the production of evidence-based reviews while maintaining the high quality and transparency associated with the rigorous methodology utilized for clinical guideline development. With the rise of evidence-based medicine, most medical and surgical specialties have an abundance of clinical topics which would benefit from a formal evidence-based review. Although clinical guideline development is an important methodology, the associated challenges limit development to only the absolute highest priority clinical topics. As outlined in this article, the online iterative approach to the development of an Evidence-based Review with Recommendations may improve productivity without compromising the quality associated with formal guideline development methodology.
Performing Systematic Literature Reviews with Novices: An Iterative Approach
ERIC Educational Resources Information Center
Lavallée, Mathieu; Robillard, Pierre-N.; Mirsalari, Reza
2014-01-01
Reviewers performing systematic literature reviews require understanding of the review process and of the knowledge domain. This paper presents an iterative approach for conducting systematic literature reviews that addresses the problems faced by reviewers who are novices in one or both levels of understanding. This approach is derived from…
Retention time alignment of LC/MS data by a divide-and-conquer algorithm.
Zhang, Zhongqi
2012-04-01
Liquid chromatography-mass spectrometry (LC/MS) has become the method of choice for characterizing complex mixtures. These analyses often involve quantitative comparison of components in multiple samples. To achieve automated sample comparison, the components of interest must be detected and identified, and their retention times aligned and peak areas calculated. This article describes a simple pairwise iterative retention time alignment algorithm, based on the divide-and-conquer approach, for alignment of ion features detected in LC/MS experiments. In this iterative algorithm, ion features in the sample run are first aligned with features in the reference run by applying a single constant shift of retention time. The sample chromatogram is then divided into two shorter chromatograms, which are aligned to the reference chromatogram the same way. Each shorter chromatogram is further divided into even shorter chromatograms. This process continues until each chromatogram is sufficiently narrow so that ion features within it have a similar retention time shift. In six pairwise LC/MS alignment examples containing a total of 6507 confirmed true corresponding feature pairs with retention time shifts up to five peak widths, the algorithm successfully aligned these features with an error rate of 0.2%. The alignment algorithm is demonstrated to be fast, robust, fully automatic, and superior to other algorithms. After alignment and gap-filling of detected ion features, their abundances can be tabulated for direct comparison between samples.
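A minimal sketch of the divide-and-conquer idea is given below: estimate one constant retention time shift for the whole run, split the chromatogram in two, and recurse until each piece is narrow enough to share a single shift. The shift is scored here by a tolerance-clipped nearest-feature distance, which is an assumption; the paper's actual scoring, feature detection and gap-filling steps are not reproduced.

    import numpy as np

    def best_shift(sample_rt, ref_rt, max_shift=2.0, step=0.01, tol=0.1):
        """Grid-search one constant RT shift minimizing the total (tolerance-clipped) distance to the reference."""
        shifts = np.arange(-max_shift, max_shift + step, step)
        costs = []
        for s in shifts:
            d = np.abs((sample_rt + s)[:, None] - ref_rt[None, :]).min(axis=1)
            costs.append(np.sum(np.minimum(d, tol)))   # unmatched features saturate at the tolerance
        return shifts[int(np.argmin(costs))]

    def align(sample_rt, ref_rt, min_span=1.0, **kw):
        """Divide-and-conquer alignment: apply one constant shift, then recurse on each half."""
        sample_rt = np.sort(np.asarray(sample_rt, dtype=float))
        if len(sample_rt) == 0:
            return sample_rt
        shifted = sample_rt + best_shift(sample_rt, ref_rt, **kw)
        span = shifted[-1] - shifted[0]
        if span <= min_span or len(shifted) < 4:
            return shifted                             # narrow enough: features share a similar shift
        mid = shifted[0] + span / 2
        left, right = shifted[shifted < mid], shifted[shifted >= mid]
        return np.concatenate([align(left, ref_rt, min_span, **kw),
                               align(right, ref_rt, min_span, **kw)])

    # Example: early features drift by +0.3 min and late features by -0.6 min relative to the reference.
    ref = np.array([5.0, 8.0, 12.0, 20.0, 25.0, 33.0])
    sample = np.array([5.3, 8.3, 12.3, 19.4, 24.4, 32.4])
    aligned = align(sample, ref)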
NASA Astrophysics Data System (ADS)
Zhou, Feng; Chen, Guoxian; Huang, Yuefei; Yang, Jerry Zhijian; Feng, Hui
2013-04-01
A new geometrical conservative interpolation on unstructured meshes is developed for preserving still water equilibrium and positivity of water depth at each iteration of mesh movement, leading to an adaptive moving finite volume (AMFV) scheme for modeling flood inundation over dry and complex topography. Unlike traditional schemes involving position-fixed meshes, the iteration process of the AMFV scheme adaptively moves a smaller number of meshes in response to flow variables calculated in prior solutions and then simulates their posterior values on the new meshes. At each time step of the simulation, the AMFV scheme consists of three parts: an adaptive mesh movement to shift the vertex positions, a geometrical conservative interpolation to remap the flow variables by summing the total mass over old meshes to avoid the generation of spurious waves, and a partial differential equation (PDE) discretization to update the flow variables for a new time step. Five different test cases are presented to verify the computational advantages of the proposed scheme over nonadaptive methods. The results reveal three attractive features: (i) the AMFV scheme could preserve still water equilibrium and positivity of water depth within both mesh movement and PDE discretization steps; (ii) it improved the shock-capturing capability for handling topographic source terms and wet-dry interfaces by moving triangular meshes to approximate the spatial distribution of time-variant flood processes; (iii) it was able to solve the shallow water equations with a relatively higher accuracy and spatial resolution at a lower computational cost.
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with the iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and increases the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper. Differing from existing algorithms, this algorithm incorporates a shading correction scheme into the low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for applications of low-dose CBCT imaging in the clinic.
Generalized conjugate-gradient methods for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1991-01-01
A generalized conjugate-gradient method is used to solve the two-dimensional, compressible Navier-Stokes equations of fluid flow. The equations are discretized with an implicit, upwind finite-volume formulation. Preconditioning techniques are incorporated into the new solver to accelerate convergence of the overall iterative method. The superiority of the new solver is demonstrated by comparisons with a conventional line Gauss-Seidel relaxation solver. Computational test results for transonic flow (trailing edge flow in a transonic turbine cascade) and hypersonic flow (M = 6.0 shock-on-shock phenomena on a cylindrical leading edge) are presented. When applied to the transonic cascade case, the new solver is 4.4 times faster in terms of number of iterations and 3.1 times faster in terms of CPU time than the relaxation solver. For the hypersonic shock case, the new solver is 3.0 times faster in terms of number of iterations and 2.2 times faster in terms of CPU time than the relaxation solver.
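Several abstracts in this collection rely on preconditioned Krylov iterations. For reference, a generic preconditioned conjugate gradient loop is sketched below for a symmetric positive definite system with a Jacobi (diagonal) preconditioner; the flow solver in the paper above treats nonsymmetric systems, so this only illustrates how preconditioning enters the iteration, not the solver itself.

    import numpy as np

    def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
        """Preconditioned conjugate gradient for SPD A; M_inv applies the preconditioner inverse."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for k in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                return x, k + 1
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x, max_iter

    # Toy SPD system with a Jacobi (diagonal) preconditioner.
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((50, 50))
    A = Q @ Q.T + 50 * np.eye(50)
    b = rng.standard_normal(50)
    diag = np.diag(A)
    x, iters = pcg(A, b, lambda r: r / diag)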
Holtkamp, Norbert
2018-01-09
ITER (Latin for "the way") is designed to demonstrate the scientific and technological feasibility of fusion energy. Fusion is the process by which two light atomic nuclei combine to form a heavier one and thus release energy. In the fusion process two isotopes of hydrogen, deuterium and tritium, fuse together to form a helium atom and a neutron. Thus fusion could provide large scale energy production without greenhouse effects; essentially limitless fuel would be available all over the world. The principal goals of ITER are to generate 500 megawatts of fusion power for periods of 300 to 500 seconds with a fusion power multiplication factor, Q, of at least 10, i.e. Q ≥ 10 (50 MW of input power, 500 MW of output power). The ITER Organization was officially established in Cadarache, France, on 24 October 2007. The seven members engaged in the project, China, the European Union, India, Japan, Korea, Russia and the United States, represent more than half the world's population. The costs for ITER are shared by the seven members. The cost for the construction will be approximately 5.5 billion Euros; a similar amount is foreseen for the twenty-year phase of operation and the subsequent decommissioning.
Neural Generalized Predictive Control: A Newton-Raphson Implementation
NASA Technical Reports Server (NTRS)
Soloway, Donald; Haley, Pamela J.
1997-01-01
An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. By using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced compared with other techniques. The main cost of the Newton-Raphson algorithm is in the calculation of the Hessian, but even with this overhead the low iteration count makes Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations and timing data show that real-time control is possible. Comments about the algorithm's implementation are also included.
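A minimal sketch of a Newton-Raphson minimization step of the kind used above is given below, applied to a toy tracking cost with a tanh "plant" and finite-difference gradient and Hessian. The cost function, horizon length and regularization weight are illustrative assumptions, not the NGPC formulation.

    import numpy as np

    def newton_minimize(cost, u0, n_iter=5, eps=1e-5):
        """Minimize cost(u) with Newton-Raphson using finite-difference gradient and Hessian."""
        u = np.asarray(u0, dtype=float)
        n = u.size
        for _ in range(n_iter):
            g = np.zeros(n)
            H = np.zeros((n, n))
            for i in range(n):
                ei = np.zeros(n)
                ei[i] = eps
                g[i] = (cost(u + ei) - cost(u - ei)) / (2 * eps)
                for j in range(n):
                    ej = np.zeros(n)
                    ej[j] = eps
                    H[i, j] = (cost(u + ei + ej) - cost(u + ei - ej)
                               - cost(u - ei + ej) + cost(u - ei - ej)) / (4 * eps**2)
            u = u - np.linalg.solve(H, g)     # Newton step (Hessian assumed well conditioned)
        return u

    # Toy control-horizon cost: track a setpoint through a nonlinear "plant" y = tanh(u).
    setpoint = 0.6
    cost = lambda u: np.sum((setpoint - np.tanh(u))**2) + 0.01 * np.sum(u**2)
    u_opt = newton_minimize(cost, np.zeros(3))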
Lagardère, Louis; Lipparini, Filippo; Polack, Étienne; Stamm, Benjamin; Cancès, Éric; Schnieders, Michael; Ren, Pengyu; Maday, Yvon; Piquemal, Jean-Philip
2014-02-28
In this paper, we present a scalable and efficient implementation of point dipole-based polarizable force fields for molecular dynamics (MD) simulations with periodic boundary conditions (PBC). The Smooth Particle-Mesh Ewald technique is combined with two optimal iterative strategies, namely, a preconditioned conjugate gradient solver and a Jacobi solver in conjunction with the Direct Inversion in the Iterative Subspace for convergence acceleration, to solve the polarization equations. We show that both solvers exhibit very good parallel performance and overall very competitive timings in an energy-force computation needed to perform an MD step. Various tests on large systems are provided in the context of the polarizable AMOEBA force field as implemented in the newly developed Tinker-HP package, which is the first implementation of a polarizable model that makes large scale experiments with massively parallel PBC point dipole models possible. We show that using a large number of cores offers a significant acceleration of the overall process involving the iterative methods within the context of SPME and a noticeable improvement of the memory management, giving access to very large systems (hundreds of thousands of atoms) as the algorithm naturally distributes the data on different cores. Coupled with advanced MD techniques, gains ranging from 2 to 3 orders of magnitude in time are now possible compared to non-optimized, sequential implementations, giving new directions for polarizable molecular dynamics in periodic boundary conditions using massively parallel implementations.
Approximate optimal guidance for the advanced launch system
NASA Technical Reports Server (NTRS)
Feeley, T. S.; Speyer, J. L.
1993-01-01
A real-time guidance scheme for the problem of maximizing the payload into orbit subject to the equations of motion for a rocket over a spherical, non-rotating earth is presented. An approximate optimal launch guidance law is developed based upon an asymptotic expansion of the Hamilton - Jacobi - Bellman or dynamic programming equation. The expansion is performed in terms of a small parameter, which is used to separate the dynamics of the problem into primary and perturbation dynamics. For the zeroth-order problem the small parameter is set to zero and a closed-form solution to the zeroth-order expansion term of Hamilton - Jacobi - Bellman equation is obtained. Higher-order terms of the expansion include the effects of the neglected perturbation dynamics. These higher-order terms are determined from the solution of first-order linear partial differential equations requiring only the evaluation of quadratures. This technique is preferred as a real-time, on-line guidance scheme to alternative numerical iterative optimization schemes because of the unreliable convergence properties of these iterative guidance schemes and because the quadratures needed for the approximate optimal guidance law can be performed rapidly and by parallel processing. Even if the approximate solution is not nearly optimal, when using this technique the zeroth-order solution always provides a path which satisfies the terminal constraints. Results for two-degree-of-freedom simulations are presented for the simplified problem of flight in the equatorial plane and compared to the guidance scheme generated by the shooting method which is an iterative second-order technique.
On the safety of ITER accelerators.
Li, Ge
2013-01-01
Three 1 MV/40A accelerators in heating neutral beams (HNB) are on track to be implemented in the International Thermonuclear Experimental Reactor (ITER). ITER may produce 500 MWt of power by 2026 and may serve as a green energy roadmap for the world. They will generate -1 MV 1 h long-pulse ion beams to be neutralised for plasma heating. Due to frequently occurring vacuum sparking in the accelerators, the snubbers are used to limit the fault arc current to improve ITER safety. However, recent analyses of its reference design have raised concerns. General nonlinear transformer theory is developed for the snubber to unify the former snubbers' different design models with a clear mechanism. Satisfactory agreement between theory and tests indicates that scaling up to a 1 MV voltage may be possible. These results confirm the nonlinear process behind transformer theory and map out a reliable snubber design for a safer ITER.
NASA Technical Reports Server (NTRS)
Brand, J. C.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single variable examples including an extension to vector spaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
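A small sketch of the underlying contraction idea follows: iterate a fixed-point map and monitor the ratio of successive step sizes, which stays below one for a contraction. The linear example map is illustrative only; the contraction corrector method itself is not reproduced.

    import numpy as np

    def fixed_point_iterate(T, x0, tol=1e-10, max_iter=500):
        """Iterate x <- T(x); report an empirical contraction ratio along the way."""
        x_prev = np.asarray(x0, dtype=float)
        x = T(x_prev)
        step_prev = np.linalg.norm(x - x_prev)
        for k in range(max_iter):
            x_next = T(x)
            step = np.linalg.norm(x_next - x)
            ratio = step / step_prev if step_prev > 0 else 0.0   # < 1 indicates a contraction
            if step < tol:
                return x_next, k + 1, ratio
            x_prev, x, step_prev = x, x_next, step
        return x, max_iter, ratio

    # Example: x = A x + b with ||A|| < 1, so the map is a contraction with a unique fixed point.
    A = np.array([[0.3, 0.1], [0.2, 0.4]])
    b = np.array([1.0, 2.0])
    sol, iters, ratio = fixed_point_iterate(lambda x: A @ x + b, np.zeros(2))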
NASA Astrophysics Data System (ADS)
Sampoorna, M.; Trujillo Bueno, J.
2010-04-01
The linearly polarized solar limb spectrum that is produced by scattering processes contains a wealth of information on the physical conditions and magnetic fields of the solar outer atmosphere, but the modeling of many of its strongest spectral lines requires solving an involved non-local thermodynamic equilibrium radiative transfer problem accounting for partial redistribution (PRD) effects. Fast radiative transfer methods for the numerical solution of PRD problems are also needed for a proper treatment of hydrogen lines when aiming at realistic time-dependent magnetohydrodynamic simulations of the solar chromosphere. Here we show how the two-level atom PRD problem with and without polarization can be solved accurately and efficiently via the application of highly convergent iterative schemes based on the Gauss-Seidel and successive overrelaxation (SOR) radiative transfer methods that had been previously developed for the complete redistribution case. Of particular interest is the Symmetric SOR method, which allows us to reach the fully converged solution with an order of magnitude of improvement in the total computational time with respect to the Jacobi-based local accelerated lambda iteration method.
Wei, Jianming; Zhang, Youan; Sun, Meimei; Geng, Baoliang
2017-09-01
This paper presents an adaptive iterative learning control scheme for a class of nonlinear systems with unknown time-varying delays and unknown control direction preceded by unknown nonlinear backlash-like hysteresis. A boundary layer function is introduced to construct an auxiliary error variable, which relaxes the identical initial condition assumption of iterative learning control. For the controller design, an integral Lyapunov function candidate is used, which avoids the possible singularity problem by introducing a hyperbolic tangent function. After compensating for uncertainties with time-varying delays by combining an appropriate Lyapunov-Krasovskii function with Young's inequality, an adaptive iterative learning control scheme is designed through a neural approximation technique and the Nussbaum function method. On the basis of the hyperbolic tangent function's characteristics, the system output is proved to converge to a small neighborhood of the desired trajectory by constructing a Lyapunov-like composite energy function (CEF) in two cases, while keeping all the closed-loop signals bounded. Finally, a simulation example is presented to verify the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
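The adaptive neural scheme above is considerably more involved, but the trial-to-trial learning it builds on can be sketched with a basic P-type iterative learning control update, u <- u + gamma*e, applied between repeated trials of a simple discrete plant. The plant, gain and reference trajectory below are illustrative assumptions chosen so the lifted iteration is a contraction.

    import numpy as np

    def run_trial(u, c=0.2, d=0.9):
        """One execution of a simple discrete plant y[i] = c*y[i-1] + d*u[i], starting from rest."""
        y = np.zeros_like(u)
        prev = 0.0
        for i, ui in enumerate(u):
            prev = c * prev + d * ui
            y[i] = prev
        return y

    # P-type iterative learning control: between trials, update u <- u + gamma * tracking error.
    t = np.linspace(0.0, 1.0, 100)
    y_ref = np.sin(2 * np.pi * t)          # desired trajectory, identical every trial
    u = np.zeros_like(t)                   # identical initial condition each iteration
    gamma = 1.0
    for trial in range(15):
        e = y_ref - run_trial(u)
        u = u + gamma * e                  # learning update applied between trials
    print("RMS tracking error after learning:", np.sqrt(np.mean(e**2)))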
Improvement of tritium accountancy technology for ITER fuel cycle safety enhancement
NASA Astrophysics Data System (ADS)
O'hira, S.; Hayashi, T.; Nakamura, H.; Kobayashi, K.; Tadokoro, T.; Nakamura, H.; Itoh, T.; Yamanishi, T.; Kawamura, Y.; Iwai, Y.; Arita, T.; Maruyama, T.; Kakuta, T.; Konishi, S.; Enoeda, M.; Yamada, M.; Suzuki, T.; Nishi, M.; Nagashima, T.; Ohta, M.
2000-03-01
In order to improve the safe handling and control of tritium for the ITER fuel cycle, effective in situ tritium accounting methods have been developed at the Tritium Process Laboratory of the Japan Atomic Energy Research Institute under one of the ITER-EDA R&D tasks. The remote, multi-location analysis of process gases by laser Raman spectroscopy that was developed and tested could provide a measurement of hydrogen isotope gases with a detection limit of 0.3 kPa and analytical periods of 120 s. An in situ tritium inventory measurement by application of a `self-assaying' storage bed with 25 g tritium capacity could provide a measurement with the required detection limit of less than 1% and a design proof of a bed with 100 g tritium capacity.
NASA Technical Reports Server (NTRS)
Wolf, Stephen W. D.; Goodyer, Michael J.
1988-01-01
Following the realization that a simple iterative strategy for bringing the flexible walls of two-dimensional test sections to streamline contours was too slow for practical use, Judd proposed, developed, and placed into service what was the first Predictive Strategy. The Predictive Strategy reduced by 75 percent or more the number of iterations of wall shapes, and therefore the tunnel run-time overhead attributable to the streamlining process, required to reach satisfactory streamlines. The procedures of the Strategy are embodied in the FORTRAN subroutine WAS (standing for Wall Adjustment Strategy) which is written in general form. The essentials of the test section hardware, followed by the underlying aerodynamic theory which forms the basis of the Strategy, are briefly described. The subroutine is then presented as the Appendix, broken down into segments with descriptions of the numerical operations underway in each, with definitions of variables.
Kinetics of carbide formation in the molybdenum-tungsten coatings used in the ITER-like Wall
NASA Astrophysics Data System (ADS)
Maier, H.; Rasinski, M.; von Toussaint, U.; Greuner, H.; Böswirth, B.; Balden, M.; Elgeti, S.; Ruset, C.; Matthews, G. F.
2016-02-01
The kinetics of tungsten carbide formation was investigated for tungsten coatings on carbon fibre composite with a molybdenum interlayer as they are used in the ITER-like Wall in JET. The coatings were produced by combined magnetron sputtering and ion implantation. The investigation was performed by preparing focused ion beam cross sections from samples after heat treatment in argon atmosphere. Baking of the samples was done at temperatures of 1100 °C, 1200 °C, and 1350 °C for hold times between 30 min and 20 h. It was found that the data can be well described by a diffusional random walk with a thermally activated diffusion process. The activation energy was determined to be (3.34 ± 0.11) eV. Predictions for the isothermal lifetime of this coating system were computed from this information.
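Given the thermally activated diffusion picture above, the parabolic growth law x ≈ sqrt(D t) with D = D0 exp(-Ea/(kB T)) can be turned into simple lifetime estimates, as sketched below. The activation energy is taken from the abstract; the prefactor D0 is a placeholder assumption (it is not given here), although the ratio of lifetimes at two temperatures is independent of it.

    import numpy as np

    KB_EV = 8.617e-5          # Boltzmann constant in eV/K
    EA_EV = 3.34              # activation energy from the abstract, eV
    D0 = 1.0e-10              # hypothetical prefactor in m^2/s (NOT from the abstract)

    def carbide_thickness(T_kelvin, t_seconds, d0=D0, ea=EA_EV):
        """Parabolic (diffusional random walk) growth estimate: x = sqrt(D * t)."""
        D = d0 * np.exp(-ea / (KB_EV * T_kelvin))
        return np.sqrt(D * t_seconds)

    def time_to_thickness(T_kelvin, x_meters, d0=D0, ea=EA_EV):
        """Invert the parabolic law to estimate the isothermal lifetime for a given carbide thickness."""
        D = d0 * np.exp(-ea / (KB_EV * T_kelvin))
        return x_meters**2 / D

    # Example: relative lifetime change from 1100 C to 1350 C for the same target thickness
    # (the D0 assumption cancels in this ratio).
    t_1100 = time_to_thickness(1100 + 273.15, 1e-6)
    t_1350 = time_to_thickness(1350 + 273.15, 1e-6)
    print("lifetime ratio 1100C/1350C:", t_1100 / t_1350)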
Experimental investigations of helium cryotrapping by argon frost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mack, A.; Perinic, D.; Murdoch, D.
1992-03-01
At the Karlsruhe Nuclear Research Centre (KfK) cryopumping techniques are being investigated by which the gaseous exhausts from the NET/ITER reactor can be pumped out during the burn- and dwell-times. Cryosorption and cryotrapping are techniques which are suitable for this task. The target of the investigations is to test the techniques under NET/ITER conditions and to determine optimum design data for a prototype. They involve measurement of the pumping speed as a function of the gas composition, gas flow and loading condition of the pump surfaces. The following parameters are subjected to variations: Ar/He ratio, specific helium volume flow rate, cryosurface temperature, process gas composition, impurities in argon trapping gas, three-stage operation and two-stage operation. This paper is a description of the experiments on argon trapping techniques started in 1990. Eleven tests as well as the results derived from them are described.
Samant, Sanjiv S; Xia, Junyi; Muyan-Ozcelik, Pinar; Owens, John D
2008-08-01
The advent of readily available temporal imaging or time series volumetric (4D) imaging has become an indispensable component of treatment planning and adaptive radiotherapy (ART) at many radiotherapy centers. Deformable image registration (DIR) is also used in other areas of medical imaging, including motion corrected image reconstruction. Due to long computation time, clinical applications of DIR in radiation therapy and elsewhere have been limited and consequently relegated to offline analysis. With the recent advances in hardware and software, graphics processing unit (GPU) based computing is an emerging technology for general purpose computation, including DIR, and is suitable for highly parallelized computing. However, traditional general purpose computation on the GPU is limited because of the constraints of the available programming platforms. In addition, compared to CPU programming, the GPU currently has reduced dedicated processor memory, which can limit the useful working data set for parallelized processing. We present an implementation of the demons algorithm using the NVIDIA 8800 GTX GPU and the new CUDA programming language. The GPU performance is compared with single threading and multithreading CPU implementations on an Intel dual core 2.4 GHz CPU using the C programming language. CUDA provides a C-like programming interface and allows for direct access to the highly parallel compute units in the GPU. Comparisons for volumetric clinical lung images acquired using 4DCT were carried out. Computation time for 100 iterations in the range of 1.8-13.5 s was observed for the GPU with image size ranging from 2.0 x 10(6) to 14.2 x 10(6) pixels. The GPU registration was 55-61 times faster than the CPU for the single threading implementation, and 34-39 times faster for the multithreading implementation. For CPU based computing, the computational time generally has a linear dependence on image size for medical imaging data. Computational efficiency is characterized in terms of time per megapixel per iteration (TPMI), with units of seconds per megapixel per iteration (or spmi). For the demons algorithm, our CPU implementation yielded largely invariant values of TPMI. The mean TPMIs were 0.527 spmi and 0.335 spmi for the single threading and multithreading cases, respectively, with <2% variation over the considered image data range. For GPU computing, we achieved TPMI = 0.00916 spmi with 3.7% variation, indicating optimized memory handling under CUDA. The paradigm of GPU based real-time DIR opens up a host of clinical applications for medical imaging.
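The efficiency metric used above, time per megapixel per iteration (TPMI), is simple bookkeeping; a small helper is sketched below with illustrative numbers consistent with the reported GPU range.

    def tpmi(total_seconds, n_pixels, n_iterations):
        """Time per megapixel per iteration (seconds per megapixel per iteration, 'spmi')."""
        megapixels = n_pixels / 1.0e6
        return total_seconds / (megapixels * n_iterations)

    # Example: 100 demons iterations over a 14.2-megapixel volume finishing in about 13 s on the GPU.
    print(round(tpmi(13.0, 14.2e6, 100), 5), "spmi")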
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper contributes to finding the optimal PID controller parameters using particle swarm optimization (PSO), the Genetic Algorithm (GA) and the Simulated Annealing (SA) algorithm. The algorithms were applied through simulation of a chemical process and an electrical system, and the PID controller was tuned. Two different fitness functions, the Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study has been done with the different algorithms based on best cost, number of iterations and the different objective functions. The closed loop process response for each set of tuned parameters is plotted for each system with each fitness function.
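A compact sketch of the ITAE-based tuning loop is given below: a particle swarm searches for PID gains that minimize the integral of time-weighted absolute error of a unit-step response of a toy second-order plant. The plant model, gain bounds and PSO parameters are illustrative assumptions, not the benchmark coupled-tank or DC-motor models.

    import numpy as np

    def itae_cost(gains, dt=0.01, t_end=5.0):
        """Simulate a unit-step response of a second-order plant under PID control; return the ITAE."""
        kp, ki, kd = gains
        x1 = x2 = 0.0                    # plant states: x1 = output, x2 = its derivative
        integ, e_prev, cost, t = 0.0, 1.0, 0.0, 0.0
        while t < t_end:
            e = 1.0 - x1                 # unit step reference
            integ += e * dt
            deriv = (e - e_prev) / dt
            u = kp * e + ki * integ + kd * deriv
            # Plant: y'' + 2*y' + y = u, integrated with explicit Euler.
            x1 += dt * x2
            x2 += dt * (u - 2.0 * x2 - x1)
            cost += t * abs(e) * dt      # Integral of Time-weighted Absolute Error
            e_prev, t = e, t + dt
        return cost

    def pso(cost, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Basic particle swarm optimization over box bounds (lo, hi) per dimension."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds])
        pos = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
        g = pbest[np.argmin(pbest_val)].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([cost(p) for p in pos])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            g = pbest[np.argmin(pbest_val)].copy()
        return g, pbest_val.min()

    best_gains, best_itae = pso(itae_cost, bounds=[(0.0, 20.0), (0.0, 10.0), (0.0, 5.0)])
    print("Kp, Ki, Kd =", best_gains, "ITAE =", best_itae)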
Prediction and control of chaotic processes using nonlinear adaptive networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, R.D.; Barnes, C.W.; Flake, G.W.
1990-01-01
We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.
Efficient fuzzy C-means architecture for image segmentation.
Li, Hui-Ya; Hwang, Wen-Jyi; Chang, Chia-Yen
2011-01-01
This paper presents a novel VLSI architecture for image segmentation. The architecture is based on the fuzzy c-means algorithm with a spatial constraint for reducing the misclassification rate. In the architecture, the usual iterative operations for updating the membership matrix and cluster centroids are merged into one single updating process to avoid the large storage requirement. In addition, an efficient pipelined circuit is used for the updating process to accelerate the computational speed. Experimental results show that the proposed circuit is an effective alternative for real-time image segmentation with low area cost and low misclassification rate.
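For reference, the standard fuzzy c-means updates that the architecture above merges into one pass are sketched below in plain NumPy; the spatial-constraint term of the paper is omitted, and the toy data are illustrative.

    import numpy as np

    def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
        """Standard fuzzy c-means: alternate membership and centroid updates until convergence."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.random((n, n_clusters))
        U /= U.sum(axis=1, keepdims=True)            # memberships of each sample sum to 1
        for _ in range(n_iter):
            Um = U ** m
            centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
            dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
            U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
            U_new /= U_new.sum(axis=1, keepdims=True)
            if np.max(np.abs(U_new - U)) < tol:
                return centroids, U_new
            U = U_new
        return centroids, U

    # Toy 1D "image" segmentation into two intensity classes.
    pixels = np.concatenate([np.random.default_rng(1).normal(0.2, 0.05, 200),
                             np.random.default_rng(2).normal(0.8, 0.05, 200)])[:, None]
    centers, memberships = fuzzy_c_means(pixels, n_clusters=2)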
NASA Astrophysics Data System (ADS)
Zhang, M.; Zheng, G. Z.; Zheng, W.; Chen, Z.; Yuan, T.; Yang, C.
2016-04-01
Magnetic confinement nuclear fusion experiments require various real-time control applications such as plasma control. ITER has designed the Fast Plant System Controller (FPSC) for this job and has provided hardware and software standards and guidelines for building an FPSC. In order to develop various real-time FPSC applications efficiently, a flexible real-time software framework called the J-TEXT real-time framework (JRTF) has been developed by the J-TEXT tokamak team. JRTF allows developers to implement different functions as independent and reusable modules called Application Blocks (ABs). The AB developers only need to focus on implementing the control tasks or the algorithms; the timing, scheduling, data sharing and eventing are handled by the JRTF pipelines. JRTF provides great flexibility in developing ABs: unit tests against ABs can be developed easily, and ABs can even be used in non-JRTF applications. JRTF also provides interfaces allowing JRTF applications to be configured and monitored at runtime. JRTF is compatible with ITER standard FPSC hardware and the ITER CODAC (Control, Data Access and Communication) Core software. It can be configured and monitored using the Experimental Physics and Industrial Control System (EPICS). Moreover, JRTF can be ported to different platforms and be integrated with supervisory control software other than EPICS. The paper presents the design and implementation of JRTF as well as brief test results.
Iterative Authoring Using Story Generation Feedback: Debugging or Co-creation?
NASA Astrophysics Data System (ADS)
Swartjes, Ivo; Theune, Mariët
We explore the role that story generation feedback may play within the creative process of interactive story authoring. While such feedback is often used as 'debugging' information, we explore here a 'co-creation' view, in which the outcome of the story generator influences authorial intent. We illustrate an iterative authoring approach in which each iteration consists of idea generation, implementation and simulation. We find that the tension between authorial intent and the partially uncontrollable story generation outcome may be relieved by taking such a co-creation approach.
NASA Astrophysics Data System (ADS)
Wu, Zhixiong; Huang, Rongjin; Huang, ChuanJun; Yang, Yanfang; Huang, Xiongyi; Li, Laifeng
2017-12-01
Glass-fiber reinforced plastic (GFRP) fabricated by the vacuum bag process was selected as the high voltage electrical insulation and mechanical support for the superconducting joints and the current leads of the ITER Feeder system. To evaluate the cryogenic mechanical properties of the GFRP, mechanical properties such as the short beam strength (SBS), the tensile strength and the fatigue fracture strength after 30,000 cycles were measured at 77 K in this study. The results demonstrated that the GFRP met the design requirements of ITER.
NASA Astrophysics Data System (ADS)
Kobayashi, K.; Isobe, K.; Iwai, Y.; Hayashi, T.; Shu, W.; Nakamura, H.; Kawamura, Y.; Yamada, M.; Suzuki, T.; Miura, H.; Uzawa, M.; Nishikawa, M.; Yamanishi, T.
2007-12-01
Confinement and the removal of tritium are key subjects for the safety of ITER. The ITER buildings are confinement barriers for tritium. In a hot cell, tritium is often released as vapour and is in contact with the inner walls. The inner walls of the ITER tritium plant building will also be exposed to tritium in an accident. The tritium released in the buildings is removed by the atmosphere detritiation systems (ADS), where the tritium is oxidized by catalysts and is removed as water. A special gas, SF6, is used in ITER and is expected to be released in an accident such as a fire. Although SF6 gas is a potential catalyst poison, the performance of the ADS in the presence of SF6 has not yet been confirmed. Tritiated water is produced in the regeneration process of the ADS and is subsequently processed by the ITER water detritiation system (WDS). One of the key components of the WDS is an electrolysis cell. To address the issues in global tritium confinement, a series of experimental studies have been carried out as an ITER R&D task: (1) tritium behaviour in concrete; (2) the effect of SF6 on the performance of the ADS and (3) tritium durability of the electrolysis cell of the ITER-WDS. (1) The tritiated water vapour penetrated up to 50 mm into the concrete from the surface in six months' exposure, so the penetration rate of tritium in the concrete is appreciable. The isotope exchange capacity of the cement paste plays an important role in tritium trapping and penetration into concrete materials when concrete is exposed to tritiated water vapour. The effect of coating on the penetration rate needs to be evaluated quantitatively from the actual tritium tests. (2) SF6 gas decreased the detritiation factor of the ADS. Since the effect of SF6 depends closely on its concentration, the amount of SF6 released into the tritium handling area in an accident should be reduced by careful arrangement of components in the buildings. (3) The electrolysis cell of the ITER-WDS is expected to endure three years of operation under the ITER design conditions. Measuring the concentration of fluorine ions could be a promising technique for monitoring the damage to the electrolysis cell.
Application of the perturbation iteration method to boundary layer type problems.
Pakdemirli, Mehmet
2016-01-01
The recently developed perturbation iteration method is applied to boundary layer type singular problems for the first time. As a preliminary work on the topic, the simplest algorithm of PIA(1,1) is employed in the calculations. Linear and nonlinear problems are solved to outline the basic ideas of the new solution technique. The inner and outer solutions are determined with the iteration algorithm and matched to construct a composite expansion valid within all parts of the domain. The solutions are contrasted with the available exact or numerical solutions. It is shown that the perturbation-iteration algorithm can be effectively used for solving boundary layer type problems.
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/2√2. This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
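A minimal linear-algebra sketch of Gauss-Seidel and SOR sweeps is given below to illustrate why their per-iteration cost matches a local (Jacobi-like) update while converging much faster for a good relaxation parameter; the radiative transfer formal solution and approximate operators of the paper are not reproduced. The test matrix and the near-optimal omega formula apply to the 1D Laplacian only.

    import numpy as np

    def sor(A, b, omega=1.5, tol=1e-10, max_iter=5000):
        """Successive over-relaxation; omega = 1 reduces to Gauss-Seidel (Jacobi would use only old values)."""
        n = len(b)
        x = np.zeros(n)
        for k in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:
                return x, k + 1
        return x, max_iter

    # 1D Laplacian test problem: SOR with a near-optimal omega converges in far fewer sweeps.
    n = 30
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x_gs, it_gs = sor(A, b, omega=1.0)
    x_sor, it_sor = sor(A, b, omega=2.0 / (1.0 + np.sin(np.pi / (n + 1))))
    print("Gauss-Seidel sweeps:", it_gs, " near-optimal SOR sweeps:", it_sor)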
Conjecture Mapping to Optimize the Educational Design Research Process
ERIC Educational Resources Information Center
Wozniak, Helen
2015-01-01
While educational design research promotes closer links between practice and theory, reporting its outcomes from iterations across multiple contexts is often constrained by the volumes of data generated, and the context bound nature of the research outcomes. Reports tend to focus on a single iteration of implementation without further research to…
RELAP5 Model of the First Wall/Blanket Primary Heat Transfer System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popov, Emilian L; Yoder Jr, Graydon L; Kim, Seokho H
2010-06-01
ITER inductive power operation is modeled and simulated using a system level computer code to evaluate the behavior of the Primary Heat Transfer System (PHTS) and predict parameter operational ranges. The control algorithm strategy and derivation are summarized in this report as well. A major feature of ITER is pulsed operation. The plasma does not burn continuously; rather, the power is pulsed, with large periods of zero power between pulses. This feature requires active temperature control to maintain a constant blanket inlet temperature and requires accommodation of coolant thermal expansion during the pulse. In view of the transient nature of the power (plasma) operation state, a transient system thermal-hydraulics code was selected: RELAP5. The code has a well-documented history for nuclear reactor transient analyses, it has been benchmarked against numerous experiments, and a large user database of commonly accepted modeling practices exists. The process of heat deposition and transfer in the blanket modules is multi-dimensional and cannot be accurately captured by a one-dimensional code such as RELAP5. To resolve this, a separate CFD calculation of blanket thermal power evolution was performed using the 3-D SC/Tetra thermofluid code. A 1D-3D co-simulation more realistically models the FW/blanket internal time-dependent thermal inertia while eliminating uncertainties in the time constant assumed in a 1-D system code. Blanket water outlet temperature and heat release histories for any given ITER pulse operation scenario are calculated. These results provide the basis for developing time dependent power forcing functions which are used as input in the RELAP5 calculations.
An adaptive segment method for smoothing lidar signal based on noise estimation
NASA Astrophysics Data System (ADS)
Wang, Yuzhao; Luo, Pingping
2014-10-01
An adaptive segmentation smoothing method (ASSM) is introduced in this paper to smooth the signal and suppress the noise. In the ASSM, the noise is defined as 3σ of the background signal. An integer N is defined for finding the changing positions in the signal curve. If the difference between two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows are fixed. The ASSM instead finds different end points in different signals, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and the average smoothing method is then applied within the segments. An iterative process is required to reduce the end-point aberration effect of the average smoothing, and two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which means frequency-domain disturbance is avoided. A lidar echo was simulated in the experimental work. The echo was assumed to be created by a space-borne lidar (e.g. CALIOP), and white Gaussian noise was added to the echo to represent the random noise resulting from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and two iterations were used. The results show that the signal can be smoothed adaptively by the ASSM, but N and the number of iterations might need to be optimized when the ASSM is applied to a different lidar.
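A rough sketch of the ASSM procedure described above: estimate sigma from a background region, split the signal wherever adjacent samples differ by more than 3Nσ, and apply a moving average with a window of half the segment length, repeated for two passes. The synthetic echo, the choice of background region, and the moving-average implementation are illustrative assumptions.

    import numpy as np

    def assm_smooth(signal, background, N=3, n_passes=2):
        """Adaptive segment smoothing: split at jumps larger than 3*N*sigma, average within segments."""
        sigma = np.std(background)                # noise level from a signal-free background region
        threshold = 3.0 * N * sigma
        # Segment end points where adjacent samples differ by more than the threshold.
        breaks = np.where(np.abs(np.diff(signal)) > threshold)[0] + 1
        edges = np.concatenate(([0], breaks, [len(signal)]))
        out = signal.astype(float)
        for _ in range(n_passes):                 # two or three passes reduce end-point aberration
            for a, b in zip(edges[:-1], edges[1:]):
                seg = out[a:b]
                win = max(1, len(seg) // 2)       # window set to half the segment length
                kernel = np.ones(win) / win
                out[a:b] = np.convolve(seg, kernel, mode="same")
        return out

    # Synthetic lidar-like echo: decaying profile with a sharp layer plus Gaussian noise.
    rng = np.random.default_rng(0)
    z = np.arange(1000)
    echo = np.exp(-z / 400.0) + 0.5 * (np.abs(z - 600) < 10) + rng.normal(0, 0.02, z.size)
    smoothed = assm_smooth(echo, background=echo[-200:])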
Iterated reaction graphs: simulating complex Maillard reaction pathways.
Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W
2001-01-01
This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
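A toy sketch of the iterated-reaction-graph loop follows: a soup of molecules, a reaction base with probabilities standing in for rate kinetics, and an iteration that consumes reactants, feeds products back and records graph arcs. The species and probabilities are purely illustrative placeholders, not actual Maillard chemistry.

    import random
    from collections import Counter

    # Reaction base: (reactants, products, probability). Names are illustrative placeholders only.
    REACTIONS = [
        (("glucose", "glycine"), ("amadori",), 0.6),
        (("amadori",), ("deoxyosone", "water"), 0.4),
        (("deoxyosone",), ("furfural", "water"), 0.3),
    ]

    def iterate_reaction_graph(soup, reactions, n_iter=200, seed=0):
        """Loop through the reaction base, drawing reactions with their probabilities (rate surrogate),
        consuming reactants from the soup and feeding products back; record arcs of the reaction graph."""
        rng = random.Random(seed)
        soup = Counter(soup)
        arcs = []                                  # (reactants, products) edges of the reaction graph
        for _ in range(n_iter):
            for reactants, products, prob in reactions:
                if all(soup[r] > 0 for r in reactants) and rng.random() < prob:
                    for r in reactants:
                        soup[r] -= 1
                    for p in products:
                        soup[p] += 1
                    arcs.append((reactants, products))
        return soup, arcs

    final_soup, graph_arcs = iterate_reaction_graph({"glucose": 50, "glycine": 50}, REACTIONS)
    print(dict(final_soup))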
Objective performance assessment of five computed tomography iterative reconstruction algorithms.
Omotayo, Azeez; Elbakri, Idris
2016-11-22
Iterative algorithms are gaining clinical acceptance in CT. We performed objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on GE DiscoveryCT750HD, 10%-52% on Siemens Somatom Definition AS+, 49%-62% on Toshiba Aquilion64, and 13%-44% on Philips Ingenuity iCT256. The corresponding CNR increase was in the range 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips respectively. Most algorithms did not affect the MTF, except for VEO™ which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase while spatial resolution improvements were obtained only with VEO™. This study is useful in that it provides performance assessment of the iterative algorithms available from several mainstream CT manufacturers.
Scientific and technical challenges on the road towards fusion electricity
NASA Astrophysics Data System (ADS)
Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.
2017-10-01
The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the needs to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of the DT operation in ITER and reaching the full performance at which the thermal fusion power is 10 times the power put in to the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.
A fast method to emulate an iterative POCS image reconstruction algorithm.
Zeng, Gengsheng L
2017-10-01
Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection. This paper derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising. Each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient. It contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
Li, Nan; Zarepisheh, Masoud; Uribe-Sanchez, Andres; Moore, Kevin; Tian, Zhen; Zhen, Xin; Graves, Yan Jiang; Gautier, Quentin; Mell, Loren; Zhou, Linghong; Jia, Xun; Jiang, Steve
2013-12-21
Adaptive radiation therapy (ART) can reduce normal tissue toxicity and/or improve tumor control through treatment adaptations based on the current patient anatomy. Developing an efficient and effective re-planning algorithm is an important step toward the clinical realization of ART. For the re-planning process, a manual trial-and-error approach to fine-tune planning parameters is time-consuming and is usually considered impractical, especially for online ART. It is desirable to automate this step to yield a plan of acceptable quality with minimal interventions. In ART, prior information in the original plan is available, such as the dose-volume histogram (DVH), which can be employed to facilitate the automatic re-planning process. The goal of this work is to develop an automatic re-planning algorithm to generate a plan with similar, or possibly better, DVH curves compared with the clinically delivered original plan. Specifically, our algorithm iterates the following two loops. The inner loop is the traditional fluence map optimization, in which we optimize a quadratic objective function penalizing the deviation of the dose received by each voxel from its prescribed or threshold dose with a set of fixed voxel weighting factors. In the outer loop, the voxel weighting factors in the objective function are adjusted according to the deviation of the current DVH curves from those in the original plan. The process is repeated until the DVH curves are acceptable or the maximum number of iterations is reached. The whole algorithm is implemented on a GPU for high efficiency. The feasibility of our algorithm has been demonstrated with three head-and-neck cancer IMRT cases, each having an initial planning CT scan and another treatment CT scan acquired in the middle of the treatment course. Compared with the DVH curves in the original plan, the DVH curves in the resulting plan using our algorithm with 30 iterations are better for almost all structures. The re-optimization process takes about 30 s using our in-house optimization engine.
Flutter optimization in fighter aircraft design
NASA Technical Reports Server (NTRS)
Triplett, W. E.
1984-01-01
The efficient design of aircraft structure involves a series of compromises among various engineering disciplines. These compromises are necessary to ensure the best overall design. To effectively reconcile the various technical constraints requires a number of design iterations, with the accompanying long elapsed time. Automated procedures can reduce the elapsed time, improve productivity and hold the promise of optimum designs which may be missed by batch processing. Several examples are given of optimization applications including aeroelastic constraints. Particular attention is given to the success or failure of each example and the lessons learned. The specific applications are shown. The final two applications were made recently.
NASA Technical Reports Server (NTRS)
Joncas, K. P.
1972-01-01
Concepts and techniques for identifying and simulating both the steady state and dynamic characteristics of electrical loads for use during integrated system test and evaluation are discussed. The investigations showed that it is feasible to design and develop interrogation and simulation equipment to perform the desired functions. During the evaluation, actual spacecraft loads were interrogated by stimulating the loads with their normal input voltage and measuring the resultant voltage and current time histories. Elements of the circuits were optimized by an iterative process of selecting element values and comparing the time-domain response of the model with those obtained from the real equipment during interrogation.
Development of the intoxicated personality scale.
Ward, Rose Marie; Brinkman, Craig S; Miller, Ashlin; Doolittle, James J
2015-01-01
To develop the Intoxicated Personality Scale (IPS). Data were collected from 436 college students via an online survey. Through an iterative measurement development process, the resulting IPS was created. The 5 subscales (Good Time, Risky Choices, Risky Sex, Emotional, and Introvert) of the IPS positively related to alcohol consumption, alcohol problems, drinking motives, alcohol expectancies, and personality. The results suggest that the Intoxicated Personality Scale may be a useful tool for predicting problematic alcohol consumption, alcohol expectancies, and drinking motives.
2016-08-05
technique which used unobserved "intermediate" variables to break a high-dimensional estimation problem such as least-squares (LS) optimization of a large...Least Squares (GEM-LS). The estimator is iterative and the work in this time period focused on characterizing the convergence properties of this...approach by relaxing the statistical assumptions, which is termed the Relaxed Approximate Graph-Structured Recursive Least Squares (RAGS-RLS). This
Reducing the latency of the Fractal Iterative Method to half an iteration
NASA Astrophysics Data System (ADS)
Béchet, Clémentine; Tallon, Michel
2013-12-01
The fractal iterative method for atmospheric tomography (FRiM-3D) has been introduced to solve the wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported the requirement of only 3 iterations of the algorithm in order to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach to avoid iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the "warm-start" strategy in adaptive optics. To our knowledge, this particular way to use the "warm-start" has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computational cost of the reconstruction with FRiM-3D can be simplified and reduced to at most half the computational cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the ESO Octopus simulator, we demonstrate the benefit of this approach. We finally enhance the robustness of this new implementation with respect to increasing measurement noise, wind speed and even modeling errors.
APRN Usability Testing of a Tailored Computer-Mediated Health Communication Program
Lin, Carolyn A.; Neafsey, Patricia J.; Anderson, Elizabeth
2010-01-01
This study tested the usability of a touch-screen enabled “Personal Education Program” (PEP) with Advanced Practice Registered Nurses (APRN). The PEP is designed to enhance medication adherence and reduce adverse self-medication behaviors in older adults with hypertension. An iterative research process was employed, which involved the use of: (1) pre-trial focus groups to guide the design of system information architecture, (2) two different cycles of think-aloud trials to test the software interface, and (3) post-trial focus groups to gather feedback on the think-aloud studies. Results from this iterative usability testing process were utilized to systematically modify and improve the three PEP prototype versions—the pilot, Prototype-1 and Prototype-2. Findings contrasting the two separate think-aloud trials showed that APRN users rated the PEP system usability, system information and system-use satisfaction at a moderately high level between trials. In addition, errors using the interface were reduced by 76 percent and the interface time was reduced by 18.5 percent between the two trials. The usability testing processes employed in this study ensured an interface design adapted to APRNs' needs and preferences to allow them to effectively utilize the computer-mediated health-communication technology in a clinical setting. PMID:19940619
Extending substructure based iterative solvers to multiple load and repeated analyses
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1993-01-01
Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine granularity regime targeted by emerging parallel processors. Substructure based iterative solvers--often called also domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers. As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time-step beyond time-step 15 is solved in a single iteration and consumes 1.0 second on a 32 processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
Weiß, Jakob; Schabel, Christoph; Bongers, Malte; Raupach, Rainer; Clasen, Stephan; Notohamiprodjo, Mike; Nikolaou, Konstantin; Bamberg, Fabian
2017-03-01
Background Metal artifacts often impair diagnostic accuracy in computed tomography (CT) imaging. Therefore, effective and workflow implemented metal artifact reduction algorithms are crucial to gain higher diagnostic image quality in patients with metallic hardware. Purpose To assess the clinical performance of a novel iterative metal artifact reduction (iMAR) algorithm for CT in patients with dental fillings. Material and Methods Thirty consecutive patients scheduled for CT imaging and dental fillings were included in the analysis. All patients underwent CT imaging using a second generation dual-source CT scanner (120 kV single-energy; 100/Sn140 kV in dual-energy, 219 mAs, gantry rotation time 0.28-1/s, collimation 0.6 mm) as part of their clinical work-up. Post-processing included standard kernel (B49) and an iterative MAR algorithm. Image quality and diagnostic value were assessed qualitatively (Likert scale) and quantitatively (HU ± SD) by two reviewers independently. Results All 30 patients were included in the analysis, with equal reconstruction times for iMAR and standard reconstruction (17 s ± 0.5 vs. 19 s ± 0.5; P > 0.05). Visual image quality was significantly higher for iMAR as compared with standard reconstruction (3.8 ± 0.5 vs. 2.6 ± 0.5; P < 0.0001, respectively) and showed improved evaluation of adjacent anatomical structures. Similarly, HU-based measurements of degree of artifacts were significantly lower in the iMAR reconstructions as compared with the standard reconstruction (0.9 ± 1.6 vs. -20 ± 47; P < 0.05, respectively). Conclusion The tested iterative, raw-data based reconstruction MAR algorithm allows for a significant reduction of metal artifacts and improved evaluation of adjacent anatomical structures in the head and neck area in patients with dental hardware.
Alam, M S; Bognar, J G; Cain, S; Yasuda, B J
1998-03-10
During microscanning, a controlled vibrating mirror is typically used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques, such as the expectation-maximization (EM) approach to maximum-likelihood estimation, can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications with currently available high-performance processors. In that approach, the image shifts are iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of the translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to significantly reduce the computational burden compared with the original EM algorithm, making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
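As a rough illustration of one-step shift estimation, the sketch below uses phase correlation to recover the integer translation between two frames. Phase correlation is a stand-in registration technique chosen here for brevity; the function name and frame variables are hypothetical and this is not the estimator developed in the paper.

```python
# Hedged sketch: estimating the translational shift between two frames in a
# single step via phase correlation, as an illustration of non-iterative
# registration. Subpixel refinement and the EM super-resolution step are omitted.
import numpy as np

def estimate_shift(frame_ref, frame):
    """Return the integer (dy, dx) shift that best aligns `frame` to `frame_ref`."""
    F1 = np.fft.fft2(frame_ref)
    F2 = np.fft.fft2(frame)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12        # keep phase information only
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # map indices above the midpoint to negative shifts
    if dy > frame.shape[0] // 2:
        dy -= frame.shape[0]
    if dx > frame.shape[1] // 2:
        dx -= frame.shape[1]
    return dy, dx
```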
NASA Astrophysics Data System (ADS)
Xiao, Guorui; Mayer, Michael; Heck, Bernhard; Sui, Lifen; Cong, Mingri
2017-04-01
Integer ambiguity resolution (AR) can significantly shorten the convergence time and improve the accuracy of Precise Point Positioning (PPP). Phase fractional cycle biases (FCB) originating from satellites destroy the integer nature of carrier phase ambiguities. To isolate the satellite FCB, observations from a global reference network are required. Firstly, float ambiguities containing FCBs are obtained by PPP processing. Secondly, the least squares method (LSM) is adopted to recover FCBs from all the float ambiguities. Finally, the estimated FCB products can be applied by the user to achieve PPP-AR. During the estimation of FCB, the LSM step can be very time-consuming, considering the large number of observations from hundreds of stations and thousands of epochs. In addition, iterations are required to deal with the one-cycle inconsistency among observations. Since the integer ambiguities are derived by directly rounding float ambiguities, the one-cycle inconsistency arises whenever the fractional parts of float ambiguities exceed the rounding boundary (e.g., 0.5 and -0.5). The iterations of LSM and the large number of observations require a long time to finish the estimation. Consequently, only a sparse global network containing a limited number of stations was processed in previous research. In this paper, we propose to isolate the FCB based on a Kalman filter. The large number of observations is handled epoch-by-epoch, which significantly reduces the dimension of the involved matrix and accelerates the computation. In addition, it is also suitable for real-time applications. As for the one-cycle inconsistency, a pre-elimination method is developed to avoid iterating over the whole process. According to the analysis of the derived satellite FCB products, we find that both wide-lane (WL) and narrow-lane (NL) FCB are very stable over time (e.g., the WL FCB over several days and the NL FCB over tens of minutes, respectively). This stability implies that the satellite FCB can be removed using previously estimated values. After subtraction of the satellite FCB, the receiver FCB can be determined. Theoretically, the receiver FCBs derived from different satellite observations should be the same for a single station. Thus, the one-cycle inconsistency among satellites can be detected and eliminated by adjusting the corresponding receiver FCB. Here, stations can be handled individually to obtain "clean" FCB observations. In an experiment, 24 h observations from 200 stations are processed to estimate GPS FCB. The process finishes in one hour using a personal computer. The estimated WL FCB shows good consistency with existing WL FCB products (e.g., CNES, WHU-SGG). All differences are within ± 0.1 cycles, which indicates the correctness of the proposed approach. For NL FCB, all differences are within ± 0.2 cycles. Concerning the NL wavelength (10.7 cm), the slightly worse NL FCB may be ascribed to different PPP processing strategies. The state-based approach of the Kalman filter also allows for a more realistic modeling of stochastic parameters, which will be investigated in future research.
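To illustrate how a per-satellite FCB can be pulled out of float ambiguities without iterating over one-cycle jumps, the sketch below takes the fractional parts of the ambiguities and averages them on the unit circle. The circular mean, the function name and the sample values are illustrative assumptions and do not reproduce the paper's LSM or Kalman-filter estimator.

```python
# Hedged sketch: extracting a per-satellite fractional cycle bias (FCB) as the
# consensus fractional part of float ambiguities from many stations. A circular
# mean is used so that values straddling the rounding boundary (e.g. +0.49 and
# -0.49 cycles) do not bias the estimate; this is a simple stand-in for the
# paper's pre-elimination of the one-cycle inconsistency.
import numpy as np

def satellite_fcb(float_ambiguities):
    """float_ambiguities: float ambiguities (cycles) for one satellite, collected
    over stations/epochs after removing station-dependent contributions."""
    frac = float_ambiguities - np.round(float_ambiguities)   # in (-0.5, 0.5]
    angles = 2.0 * np.pi * frac
    mean_angle = np.angle(np.mean(np.exp(1j * angles)))      # circular mean
    return mean_angle / (2.0 * np.pi)                        # FCB in cycles

print(satellite_fcb(np.array([0.48, -0.49, 0.47, 0.46])))    # about 0.48, no wrap bias
```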
A real-time n/γ digital pulse shape discriminator based on FPGA.
Li, Shiping; Xu, Xiufeng; Cao, Hongrui; Yuan, Guoliang; Yang, Qingwei; Yin, Zejie
2013-02-01
An FPGA-based real-time digital pulse shape discriminator has been employed to distinguish between neutrons (n) and gammas (γ) in the Neutron Flux Monitor (NFM) for the International Thermonuclear Experimental Reactor (ITER). The discriminator takes advantage of the parallel and pipelined processing capabilities of the Field Programmable Gate Array (FPGA) to carry out real-time sifting of neutrons in n/γ mixed radiation fields, and uses rise time and amplitude inspection techniques simultaneously as the discrimination algorithm to obtain good n/γ separation. Experimental results are presented which show that this discriminator can fully realize the anticipated goals of the NFM, with excellent discrimination quality and zero dead time. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wilson, J. R.; Bonoli, P. T.
2015-02-01
Ion cyclotron range of frequency (ICRF) heating is foreseen as an integral component of the initial ITER operation. The status of ICRF preparations for ITER and supporting research were updated in the 2007 [Gormezano et al., Nucl. Fusion 47, S285 (2007)] report on the ITER physics basis. In this report, we summarize progress made toward the successful application of ICRF power on ITER since that time. Significant advances have been made in support of the technical design by developing new techniques for arc protection and new algorithms for tuning and matching, carrying out experimental tests of more ITER-like antennas, and demonstrating on mockups that the design assumptions are correct. In addition, new applications of the ICRF system, beyond bulk heating, have been proposed and explored.
Liu, Kai; Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo
2016-01-01
On urban arterials, travel time estimation is challenging, especially when several data sources are involved. Fusing loop detector data and probe vehicle data to estimate travel time is particularly troublesome because the data can be uncertain, imprecise and even conflicting. In this paper, we propose an improved data fusion methodology for link travel time estimation. Link travel times are first pre-estimated separately from loop detector data and probe vehicle data, and Bayesian fusion is then applied to combine the two estimates. Next, iterative Bayesian estimation is proposed to improve the fusion by incorporating two strategies: 1) a substitution strategy that replaces the less accurate travel time estimate from one sensor with the current fused travel time; and 2) specially designed convergence conditions that restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe-vehicle-based method, the loop-detector-based method and single Bayesian fusion, reducing the mean absolute percentage error to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when the variability of travel time is higher than in other periods.
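A minimal sketch of the fusion idea, under a simplifying Gaussian assumption: each source provides a travel-time estimate with a variance, the two are fused by precision weighting, and the substitution and range-restriction strategies are then applied iteratively. The variances, bounds and function names are hypothetical, and the scheme is only an illustration, not the paper's estimator.

```python
# Hedged sketch: fusing a loop-detector estimate and a probe-vehicle estimate of
# link travel time under a Gaussian assumption, then applying the substitution
# idea (replace the less accurate source with the current fused value and
# re-fuse) until the change is negligible.
import numpy as np

def gaussian_fuse(mu1, var1, mu2, var2):
    """Precision-weighted (Bayesian) fusion of two independent estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * mu1 + w2 * mu2) / (w1 + w2), 1.0 / (w1 + w2)

def iterative_fusion(t_loop, var_loop, t_probe, var_probe,
                     bounds=(10.0, 600.0), tol=1e-3, max_iter=20):
    fused, var = gaussian_fuse(t_loop, var_loop, t_probe, var_probe)
    for _ in range(max_iter):
        # substitute the fused value for the less accurate source, then re-fuse
        if var_loop > var_probe:
            t_loop, var_loop = fused, var
        else:
            t_probe, var_probe = fused, var
        new_fused, var = gaussian_fuse(t_loop, var_loop, t_probe, var_probe)
        new_fused = np.clip(new_fused, *bounds)   # keep estimate in a plausible range
        if abs(new_fused - fused) < tol:
            break
        fused = new_fused
    return fused

print(iterative_fusion(t_loop=95.0, var_loop=400.0, t_probe=80.0, var_probe=100.0))
```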
Development of Vertical Cable Seismic System (3)
NASA Astrophysics Data System (ADS)
Asakawa, E.; Murakami, F.; Tsukahara, H.; Mizohata, S.; Ishikawa, K.
2013-12-01
VCS (Vertical Cable Seismic) is a reflection seismic method. It uses hydrophone arrays vertically moored from the seafloor to record acoustic waves generated by surface, deep-towed or ocean bottom sources. By analyzing the reflections from the sub-seabed, we can image the subsurface structure. Because VCS is an efficient high-resolution 3D seismic survey method for a spatially bounded area, we proposed it for the hydrothermal deposit survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We are now developing a VCS system that includes not only data acquisition hardware but also data processing and analysis techniques. We carried out several VCS surveys using surface-towed, deep-towed and ocean bottom sources, in water depths from 100 m to 2100 m. The survey targets include not only hydrothermal deposits but also oil and gas exploration. Through these experiments, our VCS data acquisition system has been completed, but the data processing techniques are still under development. One of the most critical issues is positioning in the water: uncertainty in the positions of the source and of the hydrophones degrades the quality of the subsurface image. GPS navigation is available at the sea surface, but for a deep-towed or ocean bottom source the accuracy of the shot positions obtained with SSBL/USBL is not sufficient for very high-resolution imaging. We have therefore developed another approach that determines the positions in the water from the travel times between the source and the VCS hydrophones. During data acquisition, the VCS location is estimated by slant ranging from the sea surface, the deep-towed or ocean bottom source position is estimated by SSBL/USBL, and the water velocity profile is measured by XCTD. After acquisition, we pick the first-break times of the VCS records. The field estimates of the shot and receiver positions contain errors; using them as initial guesses, we iteratively invert the shot and receiver positions to match the travel time data, and after several iterations we obtain the most probable positions. Incorporating constraints on the VCS hydrophone positions, such as the fixed 10 m spacing, accelerates the convergence of the iterative inversion and improves the results. The accuracy of the positions estimated from the travel time data is sufficient for the VCS data processing.
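The travel-time positioning step can be illustrated with a reduced problem: refining a single hydrophone position from picked first-break times, assuming known shot positions and a constant water velocity. The Gauss-Newton loop below is a sketch under those assumptions and omits the joint shot/receiver inversion and the measured velocity profile used in practice.

```python
# Hedged sketch: refining one hydrophone position from first-break travel times
# by Gauss-Newton least squares, with known shot positions and constant velocity.
import numpy as np

def invert_receiver_position(shots, t_obs, x0, v=1500.0, n_iter=10):
    """shots: (N,3) shot coordinates [m]; t_obs: (N,) picked travel times [s];
    x0: (3,) initial receiver position guess; v: water velocity [m/s]."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(shots - x, axis=1)          # predicted ray lengths
        residual = t_obs - d / v                        # travel-time misfit
        J = (x - shots) / (v * d[:, None])              # d(t_pred)/dx
        dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-3:
            break
    return x

# Synthetic check: true receiver at (10, -20, 1500) m
true_x = np.array([10.0, -20.0, 1500.0])
shots = np.random.uniform([-500, -500, 0], [500, 500, 5], size=(30, 3))
t_obs = np.linalg.norm(shots - true_x, axis=1) / 1500.0
print(invert_receiver_position(shots, t_obs, x0=[0.0, 0.0, 1400.0]))
```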
Process Improvement for Interinstitutional Research Contracting
Logan, Jennifer; Bjorklund, Todd; Whitfield, Jesse; Reed, Peggy; Lesher, Laurie; Sikalis, Amy; Brown, Brent; Drollinger, Sandy; Larrabee, Kristine; Thompson, Kristie; Clark, Erin; Workman, Michael; Boi, Luca
2015-01-01
Introduction: Sponsored research increasingly requires multiinstitutional collaboration. However, research contracting procedures have become more complicated and time consuming. The perinatal research units of two colocated healthcare systems sought to improve their research contracting processes. Methods: The Lean Process, a management practice that iteratively involves team members in root cause analyses and process improvement, was applied to the research contracting process, initially using Process Mapping and then developing Problem Solving Reports. Results: Root cause analyses revealed that the longest delays were the individual contract legal negotiations. In addition, the “business entity” was the research support personnel of both healthcare systems whose “customers” were investigators attempting to conduct interinstitutional research. Development of mutually acceptable research contract templates and language, chain of custody templates, and process development and refinement formats decreased the Notice of Grant Award to Purchase Order time from a mean of 103.5 days in the year prior to Lean Process implementation to 45.8 days in the year after implementation (p = 0.004). Conclusions: The Lean Process can be applied to interinstitutional research contracting with significant improvement in contract implementation. PMID:26083433
Process Improvement for Interinstitutional Research Contracting.
Varner, Michael; Logan, Jennifer; Bjorklund, Todd; Whitfield, Jesse; Reed, Peggy; Lesher, Laurie; Sikalis, Amy; Brown, Brent; Drollinger, Sandy; Larrabee, Kristine; Thompson, Kristie; Clark, Erin; Workman, Michael; Boi, Luca
2015-08-01
Sponsored research increasingly requires multiinstitutional collaboration. However, research contracting procedures have become more complicated and time consuming. The perinatal research units of two colocated healthcare systems sought to improve their research contracting processes. The Lean Process, a management practice that iteratively involves team members in root cause analyses and process improvement, was applied to the research contracting process, initially using Process Mapping and then developing Problem Solving Reports. Root cause analyses revealed that the longest delays were the individual contract legal negotiations. In addition, the "business entity" was the research support personnel of both healthcare systems whose "customers" were investigators attempting to conduct interinstitutional research. Development of mutually acceptable research contract templates and language, chain of custody templates, and process development and refinement formats decreased the Notice of Grant Award to Purchase Order time from a mean of 103.5 days in the year prior to Lean Process implementation to 45.8 days in the year after implementation (p = 0.004). The Lean Process can be applied to interinstitutional research contracting with significant improvement in contract implementation. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Guillemaut, C.; Lennholm, M.; Harrison, J.; Carvalho, I.; Valcarcel, D.; Felton, R.; Griph, S.; Hogben, C.; Lucock, R.; Matthews, G. F.; Perez Von Thun, C.; Pitts, R. A.; Wiesen, S.; contributors, JET
2017-04-01
Burning plasmas with 500 MW of fusion power on ITER will rely on partially detached divertor operation to keep target heat loads at manageable levels. Such divertor regimes will be maintained by a real-time control system using the seeding of radiative impurities like nitrogen (N), neon or argon as actuator and one or more diagnostic signals as sensors. Recently, real-time control of divertor detachment has been successfully achieved in Type I ELMy H-mode JET ITER-like wall discharges by using saturation current (I_sat) measurements from divertor Langmuir probes as feedback signals to control the level of N seeding. The degree of divertor detachment is calculated in real time by comparing the outer target peak I_sat measurements to the peak I_sat value at the roll-over in order to control the opening of the N injection valve. Real-time control of detachment has been achieved in both fixed and swept strike point experiments. The system has been progressively improved and can now automatically drive the divertor conditions from attached through high recycling and roll-over down to a user-defined level of detachment. Such a demonstration is a successful proof of principle in the context of future operation on ITER, which will be extensively equipped with divertor target probes.
Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Moorthy, H. T.
1997-01-01
This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
Kalman Filter for Calibrating a Telescope Focal Plane
NASA Technical Reports Server (NTRS)
Kang, Bryan; Bayard, David
2006-01-01
The instrument-pointing frame (IPF) Kalman filter, and an algorithm that implements this filter, have been devised for calibrating the focal plane of a telescope. As used here, calibration signifies, more specifically, a combination of measurements and calculations directed toward ensuring accuracy in aiming the telescope and determining the locations of objects imaged in various arrays of photodetectors in instruments located on the focal plane. The IPF Kalman filter was originally intended for application to a spaceborne infrared astronomical telescope, but can also be applied to other spaceborne and ground-based telescopes. In the traditional approach to calibration of a telescope, (1) one team of experts concentrates on estimating parameters (e.g., pointing alignments and gyroscope drifts) that are classified as being primarily of an engineering nature, (2) another team of experts concentrates on estimating calibration parameters (e.g., plate scales and optical distortions) that are classified as being primarily of a scientific nature, and (3) the two teams repeatedly exchange data in an iterative process in which each team refines its estimates with the help of the data provided by the other team. This iterative process is inefficient and uneconomical because it is time-consuming and entails the maintenance of two survey teams and the development of computer programs specific to the requirements of each team. Moreover, theoretical analysis reveals that the engineering/science iterative approach is not optimal in that it does not yield the best estimates of focal-plane parameters and, depending on the application, may not even enable convergence toward a set of estimates.
Iterative methods for tomography problems: implementation to a cross-well tomography problem
NASA Astrophysics Data System (ADS)
Karadeniz, M. F.; Weber, G. W.
2018-01-01
The velocity distribution between two boreholes is reconstructed by cross-well tomography, which is commonly used in geology. In this paper, three iterative methods, Kaczmarz’s algorithm, the algebraic reconstruction technique (ART), and the simultaneous iterative reconstruction technique (SIRT), are applied to a specific cross-well tomography problem. The convergence of these methods and their CPU times for the cross-well tomography problem are compared. Furthermore, the three methods are compared for different tolerance values on this problem.
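For reference, the core of Kaczmarz's algorithm for a tomographic system A x = b is a cyclic projection onto the hyperplane defined by each ray equation. The sketch below shows plain Kaczmarz only; ART and SIRT modify the update (relaxation, simultaneous averaging), and the small matrix is a toy example rather than the cross-well geometry of the paper.

```python
# Hedged sketch: the Kaczmarz iteration for a tomographic system A x = b, where
# each row of A would hold the ray lengths of one source-receiver path through
# the slowness grid x, and b would hold the picked travel times.
import numpy as np

def kaczmarz(A, b, n_sweeps=50, x0=None):
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    row_norms = np.einsum('ij,ij->i', A, A)          # squared row norms
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms[i] == 0.0:
                continue
            # project the current iterate onto the hyperplane of equation i
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]])
x_true = np.array([0.5, 1.5])
print(kaczmarz(A, A @ x_true))                       # converges toward x_true
```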
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous iterative coordinate descent (NH-ICD) uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
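A minimal sketch of the non-homogeneous update idea, using a generic linear least-squares problem in place of the CT system model: after each homogeneous sweep, the coordinates whose updates were largest receive extra visits. The selection rule, the fraction of revisited coordinates and the variable names are illustrative assumptions, not the NH-ICD schedule of the paper.

```python
# Hedged sketch: coordinate descent for a least-squares objective with a simple
# non-homogeneous update schedule; extra visits go where the image is still changing.
import numpy as np

def nh_icd(A, y, n_sweeps=10, extra_fraction=0.2):
    m, n = A.shape
    x = np.zeros(n)
    r = y.astype(float).copy()                       # residual y - A x
    col_norms = np.einsum('ij,ij->j', A, A)

    def update(j):
        step = A[:, j] @ r / col_norms[j]
        x[j] += step                                 # exact 1-D minimization
        r[:] -= step * A[:, j]                       # keep residual consistent
        return abs(step)

    for _ in range(n_sweeps):
        last_change = np.array([update(j) for j in range(n)])   # homogeneous sweep
        hot = np.argsort(last_change)[-max(1, int(extra_fraction * n)):]
        for j in hot:                                           # focused extra visits
            update(j)
    return x

A = np.random.randn(100, 20)
x_true = np.random.randn(20)
x_hat = nh_icd(A, A @ x_true, n_sweeps=50)
print(np.linalg.norm(x_hat - x_true))                # should be small after enough sweeps
```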
NASA Astrophysics Data System (ADS)
Parvathi, S. P.; Ramanan, R. V.
2018-06-01
An iterative analytical trajectory design technique that includes perturbations in the departure phase of the interplanetary orbiter missions is proposed. The perturbations such as non-spherical gravity of Earth and the third body perturbations due to Sun and Moon are included in the analytical design process. In the design process, first the design is obtained using the iterative patched conic technique without including the perturbations and then modified to include the perturbations. The modification is based on, (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence by including the perturbations, and (ii) quantification of deviations in the orbital elements at periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named as biased iterative patched conic technique, does not depend upon numerical integration and all computations are carried out using closed form expressions. The improved design is very close to the numerical design. The design analysis using the proposed technique provides a realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.
ERIC Educational Resources Information Center
McClain, Arianna D.; Hekler, Eric B.; Gardner, Christopher D.
2013-01-01
Background: Previous research from the fields of computer science and engineering highlight the importance of an iterative design process (IDP) to create more creative and effective solutions. Objective: This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall--based…
ERIC Educational Resources Information Center
Mavrikis, Manolis; Gutierrez-Santos, Sergio
2010-01-01
This paper presents a methodology for the design of intelligent learning environments. We recognise that in the educational technology field, theory development and system-design should be integrated and rely on an iterative process that addresses: (a) the difficulty to elicit precise, concise, and operationalized knowledge from "experts" and (b)…
Item Purification Does Not Always Improve DIF Detection: A Counterexample with Angoff's Delta Plot
ERIC Educational Resources Information Center
Magis, David; Facon, Bruno
2013-01-01
Item purification is an iterative process that is often advocated as improving the identification of items affected by differential item functioning (DIF). With test-score-based DIF detection methods, item purification iteratively removes the items currently flagged as DIF from the test scores to get purified sets of items, unaffected by DIF. The…
Iterative Overlap FDE for Multicode DS-CDMA
NASA Astrophysics Data System (ADS)
Takeda, Kazuaki; Tomeba, Hiromichi; Adachi, Fumiyuki
Recently, a new frequency-domain equalization (FDE) technique, called overlap FDE, that requires no GI insertion was proposed. However, the residual inter/intra-block interference (IBI) cannot completely be removed. In addition to this, for multicode direct sequence code division multiple access (DS-CDMA), the presence of residual interchip interference (ICI) after FDE distorts orthogonality among the spreading codes. In this paper, we propose an iterative overlap FDE for multicode DS-CDMA to suppress both the residual IBI and the residual ICI. In the iterative overlap FDE, joint minimum mean square error (MMSE)-FDE and ICI cancellation is repeated a sufficient number of times. The bit error rate (BER) performance with the iterative overlap FDE is evaluated by computer simulation.
NASA Astrophysics Data System (ADS)
Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias
2015-07-01
Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation provides a feedback system and enables safe ablation towards anatomical structures that would usually be at high risk of damage. This study is based on the combined working volumes of optical coherence tomography (OCT) and an Er:YAG cutting laser. A high level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems, a cross-like incision is ablated with the Er:YAG laser and segmented with OCT at three distances, and the resulting Er:YAG coordinate system is reconstructed. A parameter list defines multiple sets of laser parameters, including discrete and specific ablation rates, as the ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle, consisting of image processing, path planning, ablation, and moistening of tissue, the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone have been conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline, without visual signs of thermal damage, verify the feasibility of automated, OCT-controlled laser bone ablation with minimal process time.
GLobal Integrated Design Environment
NASA Technical Reports Server (NTRS)
Kunkel, Matthew; McGuire, Melissa; Smith, David A.; Gefert, Leon P.
2011-01-01
The GLobal Integrated Design Environment (GLIDE) is a collaborative engineering application built to resolve the design session issues of real-time passing of data between multiple discipline experts in a collaborative environment. Utilizing Web protocols and multiple programming languages, GLIDE allows engineers to use the applications to which they are accustomed (in this case, Excel) to send and receive datasets via the Internet to a database-driven Web server. Traditionally, a collaborative design session consists of one or more engineers representing each discipline meeting together in a single location. The discipline leads exchange parameters and iterate through their respective processes to converge on an acceptable dataset. In cases in which the engineers are unable to meet, their parameters are passed via e-mail, telephone, facsimile, or even postal mail. This slow process of data exchange could stretch a design session to weeks or even months. While the iterative process remains in place, software can now exchange parameters securely and efficiently, while at the same time allowing much more information about a design session to be made available. GLIDE is written in a combination of several programming languages, including REALbasic, PHP, and Microsoft Visual Basic. GLIDE client installers are available to download for both Microsoft Windows and Macintosh systems. The GLIDE client software is compatible with Microsoft Excel 2000 or later on Windows systems, and with Microsoft Excel X or later on Macintosh systems. GLIDE follows the Client-Server paradigm, transferring encrypted and compressed data via standard Web protocols. Currently, the engineers use Excel as a front end to the GLIDE Client, as many of their custom tools run in Excel.
Outlier detection for particle image velocimetry data using a locally estimated noise variance
NASA Astrophysics Data System (ADS)
Lee, Yong; Yang, Hua; Yin, ZhouPing
2017-03-01
This work describes an adaptive, spatially variable threshold outlier detection algorithm for raw gridded particle image velocimetry data using a locally estimated noise variance. The method is an iterative procedure, and each iteration is composed of a reference vector field reconstruction step and an outlier detection step. We construct the reference vector field using a weighted adaptive smoothing method (Garcia 2010 Comput. Stat. Data Anal. 54 1167-78), and the weights are determined in the outlier detection step using a modified outlier detector (Ma et al 2014 IEEE Trans. Image Process. 23 1706-21). A hard decision on the final weights of the iteration produces outlier labels for the field. The technical contribution is that, for the first time, a spatially variable threshold is embedded in the modified outlier detector, with a locally estimated noise variance, in an iterative framework. It turns out that a spatially variable threshold is preferable to a single, spatially constant threshold in complicated flows such as vortex flows or turbulent flows. Synthetic cellular vortical flows with simulated scattered or clustered outliers are adopted to evaluate the performance of the proposed method in comparison with popular validation approaches. The method also turns out to be beneficial in a real PIV measurement of turbulent flow. The experimental results demonstrate that the proposed method yields competitive performance in terms of outlier under-detection and over-detection counts. In addition, the outlier detection method is computationally efficient and adaptive, requires no user-defined parameters, and corresponding implementations are provided in the supplementary materials.
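A simplified version of the iterative, spatially variable-threshold idea is sketched below for one velocity component: a median-filtered field serves as the reference, the noise level is estimated locally from residuals, and flagged vectors are replaced before the next pass. The median filter, the MAD-based noise estimate and the threshold parameters are stand-ins for the weighted smoother and detector cited in the abstract.

```python
# Hedged sketch: iterative outlier detection for a gridded PIV velocity component
# with a spatially variable threshold derived from a locally estimated noise level.
import numpy as np
from scipy.ndimage import median_filter

def detect_outliers(u, k=3.0, window=5, n_iter=3, eps=0.1):
    u = u.astype(float).copy()
    outliers = np.zeros(u.shape, dtype=bool)
    for _ in range(n_iter):
        reference = median_filter(u, size=window)              # smooth reference field
        residual = u - reference
        local_sigma = 1.4826 * median_filter(np.abs(residual), size=window)
        flagged = np.abs(residual) > k * local_sigma + eps      # spatially variable threshold
        outliers |= flagged
        u[flagged] = reference[flagged]                         # replace before next pass
    return outliers

u = np.random.randn(64, 64) * 0.1 + 1.0
u[10, 20] = 8.0                                                 # inject a spurious vector
print(detect_outliers(u)[10, 20])                               # True
```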
The role of simulation in the design of a neural network chip
NASA Technical Reports Server (NTRS)
Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.
1993-01-01
An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.
Contribution of Tore Supra in preparation of ITER
NASA Astrophysics Data System (ADS)
Saoutic, B.; Abiteboul, J.; Allegretti, L.; Allfrey, S.; Ané, J. M.; Aniel, T.; Argouarch, A.; Artaud, J. F.; Aumenier, M. H.; Balme, S.; Basiuk, V.; Baulaigue, O.; Bayetti, P.; Bécoulet, A.; Bécoulet, M.; Benkadda, M. S.; Benoit, F.; Berger-by, G.; Bernard, J. M.; Bertrand, B.; Beyer, P.; Bigand, A.; Blum, J.; Boilson, D.; Bonhomme, G.; Bottollier-Curtet, H.; Bouchand, C.; Bouquey, F.; Bourdelle, C.; Bourmaud, S.; Brault, C.; Brémond, S.; Brosset, C.; Bucalossi, J.; Buravand, Y.; Cara, P.; Catherine-Dumont, V.; Casati, A.; Chantant, M.; Chatelier, M.; Chevet, G.; Ciazynski, D.; Ciraolo, G.; Clairet, F.; Coatanea-Gouachet, M.; Colas, L.; Commin, L.; Corbel, E.; Corre, Y.; Courtois, X.; Dachicourt, R.; Dapena Febrer, M.; Davi Joanny, M.; Daviot, R.; De Esch, H.; Decker, J.; Decool, P.; Delaporte, P.; Delchambre, E.; Delmas, E.; Delpech, L.; Desgranges, C.; Devynck, P.; Dittmar, T.; Doceul, L.; Douai, D.; Dougnac, H.; Duchateau, J. L.; Dugué, B.; Dumas, N.; Dumont, R.; Durocher, A.; Duthoit, F. X.; Ekedahl, A.; Elbeze, D.; El Khaldi, M.; Escourbiac, F.; Faisse, F.; Falchetto, G.; Farge, M.; Farjon, J. L.; Faury, M.; Fedorczak, N.; Fenzi-Bonizec, C.; Firdaouss, M.; Frauel, Y.; Garbet, X.; Garcia, J.; Gardarein, J. L.; Gargiulo, L.; Garibaldi, P.; Gauthier, E.; Gaye, O.; Géraud, A.; Geynet, M.; Ghendrih, P.; Giacalone, I.; Gibert, S.; Gil, C.; Giruzzi, G.; Goniche, M.; Grandgirard, V.; Grisolia, C.; Gros, G.; Grosman, A.; Guigon, R.; Guilhem, D.; Guillerminet, B.; Guirlet, R.; Gunn, J.; Gurcan, O.; Hacquin, S.; Hatchressian, J. C.; Hennequin, P.; Hernandez, C.; Hertout, P.; Heuraux, S.; Hillairet, J.; Hoang, G. T.; Honore, C.; Houry, M.; Hutter, T.; Huynh, P.; Huysmans, G.; Imbeaux, F.; Joffrin, E.; Johner, J.; Jourd'Heuil, L.; Katharria, Y. S.; Keller, D.; Kim, S. H.; Kocan, M.; Kubic, M.; Lacroix, B.; Lamaison, V.; Latu, G.; Lausenaz, Y.; Laviron, C.; Leroux, F.; Letellier, L.; Lipa, M.; Litaudon, X.; Loarer, T.; Lotte, P.; Madeleine, S.; Magaud, P.; Maget, P.; Magne, R.; Manenc, L.; Marandet, Y.; Marbach, G.; Maréchal, J. L.; Marfisi, L.; Martin, C.; Martin, G.; Martin, V.; Martinez, A.; Martins, J. P.; Masset, R.; Mazon, D.; Mellet, N.; Mercadier, L.; Merle, A.; Meshcheriakov, D.; Meyer, O.; Million, L.; Missirlian, M.; Mollard, P.; Moncada, V.; Monier-Garbet, P.; Moreau, D.; Moreau, P.; Morini, L.; Nannini, M.; Naiim Habib, M.; Nardon, E.; Nehme, H.; Nguyen, C.; Nicollet, S.; Nouilletas, R.; Ohsako, T.; Ottaviani, M.; Pamela, S.; Parrat, H.; Pastor, P.; Pecquet, A. L.; Pégourié, B.; Peysson, Y.; Porchy, I.; Portafaix, C.; Preynas, M.; Prou, M.; Raharijaona, J. M.; Ravenel, N.; Reux, C.; Reynaud, P.; Richou, M.; Roche, H.; Roubin, P.; Sabot, R.; Saint-Laurent, F.; Salasca, S.; Samaille, F.; Santagiustina, A.; Sarazin, Y.; Semerok, A.; Schlosser, J.; Schneider, M.; Schubert, M.; Schwander, F.; Ségui, J. L.; Selig, G.; Sharma, P.; Signoret, J.; Simonin, A.; Song, S.; Sonnendruker, E.; Sourbier, F.; Spuig, P.; Tamain, P.; Tena, M.; Theis, J. M.; Thouvenin, D.; Torre, A.; Travère, J. M.; Tsitrone, E.; Vallet, J. C.; Van Der Plas, E.; Vatry, A.; Verger, J. M.; Vermare, L.; Villecroze, F.; Villegas, D.; Volpe, R.; Vulliez, K.; Wagrez, J.; Wauters, T.; Zani, L.; Zarzoso, D.; Zou, X. L.
2011-09-01
Tore Supra routinely addresses the physics and technology of very long-duration plasma discharges, thus bringing precious information on critical issues of long pulse operation of ITER. A new ITER relevant lower hybrid current drive (LHCD) launcher has allowed coupling to the plasma a power level of 2.7 MW for 78 s, corresponding to a power density close to the design value foreseen for an ITER LHCD system. In accordance with the expectations, long distance (10 cm) power coupling has been obtained. Successive stationary states of the plasma current profile have been controlled in real-time featuring (i) control of sawteeth with varying plasma parameters, (ii) obtaining and sustaining a 'hot core' plasma regime, (iii) recovery from a voluntarily triggered deleterious magnetohydrodynamic regime. The scrape-off layer (SOL) parameters and power deposition have been documented during L-mode ramp-up phase, a crucial point for ITER before the X-point formation. Disruption mitigation studies have been conducted with massive gas injection, evidencing the difference between He and Ar and the possible role of the q = 2 surface in limiting the gas penetration. ICRF assisted wall conditioning in the presence of magnetic field has been investigated, culminating in the demonstration that this conditioning scheme allows one to recover normal operation after disruptions. The effect of the magnetic field ripple on the intrinsic plasma rotation has been studied, showing the competition between turbulent transport processes and ripple toroidal friction. During dedicated dimensionless experiments, the effect of varying the collisionality on turbulence wavenumber spectra has been documented, giving new insight into the turbulence mechanism. Turbulence measurements have also allowed quantitatively comparing experimental results with predictions by 5D gyrokinetic codes: numerical results simultaneously match the magnitude of effective heat diffusivity, rms values of density fluctuations and wavenumber spectra. A clear correlation between electron temperature gradient and impurity transport in the very core of the plasma has been observed, strongly suggesting the existence of a threshold above which transport is dominated by turbulent electron modes. Dynamics of edge turbulent fluctuations has been studied by correlating data from fast imaging cameras and Langmuir probes, yielding a coherent picture of transport processes involved in the SOL. Corrections were made to this article on 6 January 2012. Some of the letters in the text were missing.
NASA Astrophysics Data System (ADS)
Chen, Xiaowang; Feng, Zhipeng
2016-12-01
Planetary gearboxes are widely used in many sorts of machinery, for its large transmission ratio and high load bearing capacity in a compact structure. Their fault diagnosis relies on effective identification of fault characteristic frequencies. However, in addition to the vibration complexity caused by intricate mechanical kinematics, volatile external conditions result in time-varying running speed and/or load, and therefore nonstationary vibration signals. This usually leads to time-varying complex fault characteristics, and adds difficulty to planetary gearbox fault diagnosis. Time-frequency analysis is an effective approach to extracting the frequency components and their time variation of nonstationary signals. Nevertheless, the commonly used time-frequency analysis methods suffer from poor time-frequency resolution as well as outer and inner interferences, which hinder accurate identification of time-varying fault characteristic frequencies. Although time-frequency reassignment improves the time-frequency readability, it is essentially subject to the constraints of mono-component and symmetric time-frequency distribution about true instantaneous frequency. Hence, it is still susceptible to erroneous energy reallocation or even generates pseudo interferences, particularly for multi-component signals of highly nonlinear instantaneous frequency. In this paper, to overcome the limitations of time-frequency reassignment, we propose an improvement with fine time-frequency resolution and free from interferences for highly nonstationary multi-component signals, by exploiting the merits of iterative generalized demodulation. The signal is firstly decomposed into mono-components of constant frequency by iterative generalized demodulation. Time-frequency reassignment is then applied to each generalized demodulated mono-component, obtaining a fine time-frequency distribution. Finally, the time-frequency distribution of each signal component is restored and superposed to get the time-frequency distribution of original signal. The proposed method is validated using both numerical simulated and lab experimental planetary gearbox vibration signals. The time-varying gear fault symptoms are successfully extracted, showing effectiveness of the proposed iterative generalized time-frequency reassignment method in planetary gearbox fault diagnosis under nonstationary conditions.
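The generalized-demodulation step at the heart of this approach can be sketched as follows: given an estimated instantaneous-frequency law for one component, the analytic signal is multiplied by a phase operator that maps that component onto a constant frequency, after which it can be isolated and analyzed. The chirp example, sampling rate and function name below are illustrative assumptions rather than the paper's full iterative procedure.

```python
# Hedged sketch: one generalized-demodulation step. A band-pass filter around f0
# would then isolate the component before mapping it back; component extraction,
# iteration over components and the reassignment step itself are omitted.
import numpy as np
from scipy.signal import hilbert

def generalized_demodulate(x, fs, f_inst, f0):
    """x: real signal; f_inst: estimated instantaneous frequency of the target
    component [Hz], same length as x; f0: frequency it is mapped onto [Hz]."""
    t = np.arange(x.size) / fs
    phase = 2.0 * np.pi * np.cumsum(f_inst) / fs      # integral of f_inst dt
    operator = np.exp(-1j * (phase - 2.0 * np.pi * f0 * t))
    return hilbert(x) * operator                      # demodulated analytic signal

# Example: a linear chirp sweeping 10 to 60 Hz becomes a nearly constant 10 Hz tone
fs, T = 1000.0, 2.0
t = np.arange(int(fs * T)) / fs
f_inst = 10.0 + 25.0 * t                              # chirp instantaneous frequency
x = np.cos(2 * np.pi * (10.0 * t + 12.5 * t**2))
y = generalized_demodulate(x, fs, f_inst, f0=10.0)
```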
Usability Evaluation of a Clinical Decision Support System for Geriatric ED Pain Treatment.
Genes, Nicholas; Kim, Min Soon; Thum, Frederick L; Rivera, Laura; Beato, Rosemary; Song, Carolyn; Soriano, Jared; Kannry, Joseph; Baumlin, Kevin; Hwang, Ula
2016-01-01
Older adults are at risk for inadequate emergency department (ED) pain care. Unrelieved acute pain is associated with poor outcomes. Clinical decision support systems (CDSS) hold promise to improve patient care, but CDSS quality varies widely, particularly when usability evaluation is not employed. To conduct an iterative usability and redesign process of a novel geriatric abdominal pain care CDSS. We hypothesized this process would result in the creation of more usable and favorable pain care interventions. Thirteen emergency physicians familiar with the Electronic Health Record (EHR) in use at the study site were recruited. Over a 10-week period, 17 1-hour usability test sessions were conducted across 3 rounds of testing. Participants were given 3 patient scenarios and provided simulated clinical care using the EHR, while interacting with the CDSS interventions. Quantitative System Usability Scores (SUS), favorability scores and qualitative narrative feedback were collected for each session. Using a multi-step review process by an interdisciplinary team, positive and negative usability issues in effectiveness, efficiency, and satisfaction were considered, prioritized and incorporated in the iterative redesign process of the CDSS. Video analysis was used to determine the appropriateness of the CDS appearances during simulated clinical care. Over the 3 rounds of usability evaluations and subsequent redesign processes, mean SUS progressively improved from 74.8 to 81.2 to 88.9; mean favorability scores improved from 3.23 to 4.29 (1 worst, 5 best). Video analysis revealed that, in the course of the iterative redesign processes, rates of physicians' acknowledgment of CDS interventions increased, however most rates of desired actions by physicians (such as more frequent pain score updates) decreased. The iterative usability redesign process was instrumental in improving the usability of the CDSS; if implemented in practice, it could improve geriatric pain care. The usability evaluation process led to improved acknowledgement and favorability. Incorporating usability testing when designing CDSS interventions for studies may be effective to enhance clinician use.
Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua
2014-01-01
Dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1 regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance on noise-corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is very suitable for clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been proved to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to augment the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all other algorithms.
Coggins, Brian E.; Werner-Allen, Jonathan W.; Yan, Anthony; Zhou, Pei
2012-01-01
In structural studies of large proteins by NMR, global fold determination plays an increasingly important role in providing a first look at a target’s topology and reducing assignment ambiguity in NOESY spectra of fully-protonated samples. In this work, we demonstrate the use of ultrasparse sampling, a new data processing algorithm, and a 4-D time-shared NOESY experiment (1) to collect all NOEs in 2H/13C/15N-labeled protein samples with selectively-protonated amide and ILV methyl groups at high resolution in only four days, and (2) to calculate global folds from this data using fully automated resonance assignment. The new algorithm, SCRUB, incorporates the CLEAN method for iterative artifact removal, but applies an additional level of iteration, permitting real signals to be distinguished from noise and allowing nearly all artifacts generated by real signals to be eliminated. In simulations with 1.2% of the data required by Nyquist sampling, SCRUB achieves a dynamic range over 10000:1 (250× better artifact suppression than CLEAN) and completely quantitative reproduction of signal intensities, volumes, and lineshapes. Applied to 4-D time-shared NOESY data, SCRUB processing dramatically reduces aliasing noise from strong diagonal signals, enabling the identification of weak NOE crosspeaks with intensities 100× less than diagonal signals. Nearly all of the expected peaks for interproton distances under 5 Å were observed. The practical benefit of this method is demonstrated with structure calculations for 23 kDa and 29 kDa test proteins using the automated assignment protocol of CYANA, in which unassigned 4-D time-shared NOESY peak lists produce accurate and well-converged global fold ensembles, whereas 3-D peak lists either fail to converge or produce significantly less accurate folds. The approach presented here succeeds with an order of magnitude less sampling than required by alternative methods for processing sparse 4-D data. PMID:22946863
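For orientation, the CLEAN loop that SCRUB extends can be sketched in one dimension: the strongest point of the dirty spectrum is found, a fraction of it is recorded as a clean component, and the correspondingly shifted artifact pattern is subtracted from the residual. The loop gain, threshold and beam handling below are illustrative assumptions, and SCRUB's additional noise-separating level of iteration is not shown.

```python
# Hedged sketch: a one-dimensional CLEAN loop of the kind SCRUB builds on.
import numpy as np

def clean_1d(dirty, beam, gain=0.1, threshold=0.05, max_iter=5000):
    """dirty: dirty spectrum; beam: sampling-artifact pattern, same length,
    peak at index 0 (circularly shifted when subtracting)."""
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    for _ in range(max_iter):
        k = np.argmax(np.abs(residual))
        amp = residual[k]
        if np.abs(amp) < threshold:
            break
        components[k] += gain * amp
        residual -= gain * amp * np.roll(beam, k)      # remove that peak's artifacts
    return components, residual
```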
NASA Astrophysics Data System (ADS)
Kadrmas, Dan J.; Frey, Eric C.; Karimi, Seemeen S.; Tsui, Benjamin M. W.
1998-04-01
Accurate scatter compensation in SPECT can be performed by modelling the scatter response function during the reconstruction process. This method is called reconstruction-based scatter compensation (RBSC). It has been shown that RBSC has a number of advantages over other methods of compensating for scatter, but using RBSC for fully 3D compensation has resulted in prohibitively long reconstruction times. In this work we propose two new methods that can be used in conjunction with existing methods to achieve marked reductions in RBSC reconstruction times. The first method, coarse-grid scatter modelling, significantly accelerates the scatter model by exploiting the fact that scatter is dominated by low-frequency information. The second method, intermittent RBSC, further accelerates the reconstruction process by limiting the number of iterations during which scatter is modelled. The fast implementations were evaluated using a Monte Carlo simulated experiment of the 3D MCAT phantom with tracer, and also using experimentally acquired data with tracer. Results indicated that these fast methods can reconstruct, with fully 3D compensation, images very similar to those obtained using standard RBSC methods, and in reconstruction times that are an order of magnitude shorter. Using these methods, fully 3D iterative reconstruction with RBSC can be performed well within the realm of clinically realistic times (under 10 minutes for image reconstruction).
NASA Astrophysics Data System (ADS)
Baránek, M.; Běhal, J.; Bouchal, Z.
2018-01-01
In phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used because of its simplicity of implementation. This iterative process can advantageously be deployed in combination with a spatial light modulator (SLM), enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of aberration correction using the GS algorithm can be significantly enhanced by using a vortex image spot as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
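A bare-bones Gerchberg-Saxton loop is sketched below, with the far field modeled by a single FFT: the focal-plane amplitude is repeatedly replaced by the target while the retrieved phases are kept. The uniform pupil, the single-spot target and the iteration count are illustrative assumptions; the vortex target spot and aberration terms discussed in the abstract are not modeled.

```python
# Hedged sketch: a plain Gerchberg-Saxton loop that retrieves the SLM phase
# producing a desired focal-plane intensity pattern.
import numpy as np

def gerchberg_saxton(pupil_amplitude, target_amplitude, n_iter=50):
    phase = np.random.uniform(0, 2 * np.pi, pupil_amplitude.shape)   # initial guess
    for _ in range(n_iter):
        pupil_field = pupil_amplitude * np.exp(1j * phase)
        focal_field = np.fft.fft2(pupil_field)
        # keep the focal-plane phase, impose the target amplitude
        focal_field = target_amplitude * np.exp(1j * np.angle(focal_field))
        back = np.fft.ifft2(focal_field)
        phase = np.angle(back)                       # keep phase, impose pupil amplitude
    return phase

N = 128
pupil = np.ones((N, N))
target = np.zeros((N, N))
target[0, 3] = 1.0                                   # a single off-axis spot as target
slm_phase = gerchberg_saxton(pupil, target)
```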
Mission of ITER and Challenges for the Young
NASA Astrophysics Data System (ADS)
Ikeda, Kaname
2009-02-01
It is recognized that the ongoing effort to provide sufficient energy for the wellbeing of the globe's population and to power the world economy is of the greatest importance. ITER is a joint international research and development project that aims to demonstrate the scientific and technical feasibility of fusion power. It represents the responsible actions of governments whose countries comprise over half the world's population, to create fusion power as a source of clean, economic, carbon dioxide-free energy. This is the most important science initiative of our time. The partners in the Project—the ITER Parties—are the European Union, Japan, the People's Republic of China, India, the Republic of Korea, the Russian Federation and the USA. ITER will be constructed in Europe, at Cadarache in the South of France. The talk will illustrate the genesis of the ITER Organization, the ongoing work at the Cadarache site and the planned schedule for construction. There will also be an explanation of the unique aspects of international collaboration that have been developed for ITER. Although the present focus of the project is construction activities, ITER is also a major scientific and technological research program, for which the best of the world's intellectual resources is needed. Challenges for the young, imperative for fulfillment of the objective of ITER will be identified. It is important that young students and researchers worldwide recognize the rapid development of the project, and the fundamental issues that must be overcome in ITER. The talk will also cover the exciting career and fellowship opportunities for young people at the ITER Organization.
Mission of ITER and Challenges for the Young
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikeda, Kaname
2009-02-19
It is recognized that the ongoing effort to provide sufficient energy for the wellbeing of the globe's population and to power the world economy is of the greatest importance. ITER is a joint international research and development project that aims to demonstrate the scientific and technical feasibility of fusion power. It represents the responsible actions of governments whose countries comprise over half the world's population, to create fusion power as a source of clean, economic, carbon dioxide-free energy. This is the most important science initiative of our time. The partners in the Project--the ITER Parties--are the European Union, Japan, the People's Republic of China, India, the Republic of Korea, the Russian Federation and the USA. ITER will be constructed in Europe, at Cadarache in the South of France. The talk will illustrate the genesis of the ITER Organization, the ongoing work at the Cadarache site and the planned schedule for construction. There will also be an explanation of the unique aspects of international collaboration that have been developed for ITER. Although the present focus of the project is construction activities, ITER is also a major scientific and technological research program, for which the best of the world's intellectual resources is needed. Challenges for the young, imperative for fulfillment of the objective of ITER, will be identified. It is important that young students and researchers worldwide recognize the rapid development of the project, and the fundamental issues that must be overcome in ITER. The talk will also cover the exciting career and fellowship opportunities for young people at the ITER Organization.
Parareal algorithms with local time-integrators for time fractional differential equations
NASA Astrophysics Data System (ADS)
Wu, Shu-Lin; Zhou, Tao
2018-04-01
Designing parareal algorithms for time-fractional differential equations is challenging because of the history effect of the fractional operator. A direct extension of the classical parareal method to such equations leads to unbalanced computational time in each process. In this work, we present an efficient parareal iteration scheme to overcome this issue, by adopting two recently developed local time-integrators for time-fractional operators. In both approaches, one introduces auxiliary variables to localize the fractional operator. To this end, we propose a new strategy to perform the coarse-grid correction so that the auxiliary variables and the solution variable are corrected separately in a mixed pattern. It is shown that the proposed parareal algorithm admits a robust rate of convergence. Numerical examples are presented to support our conclusions.
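The classical parareal iteration that the paper extends can be sketched as follows, assuming a simple ODE right-hand side and explicit-Euler coarse/fine propagators G and F (the fractional-operator localization and the mixed coarse-grid correction of the paper are not shown):

```python
# Sketch of the classical parareal iteration for du/dt = f(t, u).
import numpy as np

def parareal(f, u0, t, n_fine=20, n_iter=5):
    """f: right-hand side; t: coarse time grid; u0: initial condition."""
    def euler(u, t0, t1, steps):           # generic explicit-Euler propagator
        dt = (t1 - t0) / steps
        for i in range(steps):
            u = u + dt * f(t0 + i * dt, u)
        return u
    G = lambda u, t0, t1: euler(u, t0, t1, 1)        # cheap coarse propagator
    F = lambda u, t0, t1: euler(u, t0, t1, n_fine)   # expensive fine propagator

    N = len(t) - 1
    U = [u0] * (N + 1)
    for n in range(N):                     # initial coarse sweep
        U[n + 1] = G(U[n], t[n], t[n + 1])
    for _ in range(n_iter):
        F_old = [F(U[n], t[n], t[n + 1]) for n in range(N)]  # parallel in practice
        G_old = [G(U[n], t[n], t[n + 1]) for n in range(N)]
        for n in range(N):                 # sequential coarse correction
            U[n + 1] = G(U[n], t[n], t[n + 1]) + F_old[n] - G_old[n]
    return U

# Example: du/dt = -u on [0, 5]
U = parareal(lambda t, u: -u, 1.0, np.linspace(0.0, 5.0, 11))
```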
NASA Astrophysics Data System (ADS)
Feng, Zhipeng; Chu, Fulei; Zuo, Ming J.
2011-03-01
The energy separation algorithm is good at tracking instantaneous changes in the frequency and amplitude of modulated signals, but it is subject to the constraints of mono-component, narrow-band signals. In most cases, time-varying modulated vibration signals of machinery consist of multiple components and have instantaneous frequency trajectories on the time-frequency plane so complicated that they overlap in the frequency domain. For such signals, conventional filters fail to obtain narrow-band mono-components, and their rectangular decomposition of the time-frequency plane may split instantaneous frequency trajectories, resulting in information loss. Given the advantage of the generalized demodulation method in decomposing multi-component signals into mono-components, an iterative generalized demodulation method is used as a preprocessing tool to separate signals into mono-components, so as to satisfy the requirements of the energy separation algorithm. With this improvement, the energy separation algorithm can be generalized to a broad range of signals, as long as the instantaneous frequency trajectories of the signal components do not intersect on the time-frequency plane. Owing to the good adaptability of the energy separation algorithm to instantaneous changes in signals and the mono-component decomposition nature of generalized demodulation, the derived time-frequency energy distribution has fine resolution and is free from cross-term interference. The good performance of the proposed time-frequency analysis is illustrated by analyses of a simulated signal and an on-site recorded nonstationary vibration signal of a hydroturbine rotor during a shut-down transient, showing its potential for analyzing time-varying, multi-component modulated signals.
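As a point of reference, a discrete energy separation algorithm (here the standard Teager-Kaiser-based DESA-2 form, applied to a single mono-component signal such as one produced by generalized demodulation) might look like this; the function names and the toy signal are illustrative, not from the paper:

```python
# Discrete energy separation sketch (DESA-2) for a mono-component signal.
import numpy as np

def tkeo(x):
    """Teager-Kaiser energy operator, psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa2(x, fs):
    """Estimate instantaneous amplitude and frequency (Hz) of a narrow-band x."""
    z = x[2:] - x[:-2]                     # central difference
    psi_x = tkeo(x)[1:-1]                  # trimmed to align with psi_z
    psi_z = tkeo(z)
    arg = np.clip(1 - psi_z / (2 * np.maximum(psi_x, 1e-12)), -1.0, 1.0)
    omega = 0.5 * np.arccos(arg)           # digital frequency (rad/sample)
    amp = 2 * psi_x / np.maximum(np.sqrt(psi_z), 1e-12)
    return amp, omega * fs / (2 * np.pi)

# Example: a 50 Hz tone sampled at 1 kHz gives amp ~ 1 and freq ~ 50 Hz
t = np.arange(0, 1, 1e-3)
amp, freq = desa2(np.cos(2 * np.pi * 50 * t), fs=1000.0)
```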
Coherent and Noncoherent Joint Processing of Sonar for Detection of Small Targets in Shallow Water.
Pan, Xiang; Jiang, Jingning; Li, Si; Ding, Zhenping; Pan, Chen; Gong, Xianyi
2018-04-10
A coherent-noncoherent joint processing framework is proposed for active sonar to combine diversity gain and beamforming gain for the detection of a small target in shallow water environments. The sonar utilizes widely spaced arrays to sense the environment and illuminate a target of interest from multiple angles. Meanwhile, it exploits spatial diversity for time-reversal focusing to suppress reverberation, mainly strong bottom reverberation. To enhance the robustness of time-reversal focusing, an adaptive iterative strategy is employed in the processing framework: a probing signal is first transmitted, and echoes from a likely target are used as steering vectors for the second transmission. With spatial diversity, target bearing and range are estimated using a broadband signal model. Numerical simulations show that the novel sonar outperforms traditional phased-array sonar thanks to the benefits of spatial diversity. The effectiveness of the proposed framework has been validated by localization of a small target in lake experiments.
Mapping CMMI Level 2 to Scrum Practices: An Experience Report
NASA Astrophysics Data System (ADS)
Diaz, Jessica; Garbajosa, Juan; Calvo-Manzano, Jose A.
CMMI has been adopted advantageously in large companies for improvements in software quality, budget adherence, and customer satisfaction. However, SPI strategies based on CMMI-DEV require heavyweight software development processes and large investments of cost and time that small and medium-sized companies cannot afford. The so-called lightweight software development processes, such as Agile Software Development (ASD), address these challenges. ASD welcomes changing requirements and stresses the importance of adaptive planning, simplicity, and continuous delivery of valuable software in short, time-framed iterations. ASD is becoming attractive in an increasingly global and changing software market. It would therefore be greatly useful to be able to introduce agile methods such as Scrum in compliance with the CMMI process model. This paper aims to increase understanding of the relationship between ASD and CMMI-DEV by reporting empirical results that confirm theoretical comparisons between ASD practices and CMMI level 2.
A Biopsychosocial Model of the Development of Chronic Conduct Problems in Adolescence
Dodge, Kenneth A.; Pettit, Gregory S.
2009-01-01
A biopsychosocial model of the development of adolescent chronic conduct problems is presented and supported through a review of empirical findings. This model posits that biological dispositions and sociocultural contexts place certain children at risk in early life but that life experiences with parents, peers, and social institutions increment and mediate this risk. A transactional developmental model is best equipped to describe the emergence of chronic antisocial behavior across time. Reciprocal influences among dispositions, contexts, and life experiences lead to recursive iterations across time that exacerbate or diminish antisocial development. Cognitive and emotional processes within the child, including the acquisition of knowledge and social-information-processing patterns, mediate the relation between life experiences and conduct problem outcomes. Implications for prevention research and public policy are noted. PMID:12661890
First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems
2014-03-01
The scheme achieves design accuracy, with rapid convergence over each physical time step, typically fewer than five Newton iterations. For the discretization arising from the hyperbolic advection-diffusion system, a Gauss-Seidel (GS) relaxation is employed, which is also an O(N) method. Convergence for the boundary layer test problem is reported against a residual criterion of 10^-8 over a range of Reynolds numbers.
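A generic Gauss-Seidel relaxation of the kind mentioned above can be sketched as follows; the tridiagonal test matrix is a toy stand-in for the discretized hyperbolic advection-diffusion system, and the 1e-8 residual tolerance mirrors the convergence criterion quoted in the report:

```python
# Generic Gauss-Seidel relaxation sketch for A x = b.
import numpy as np

def gauss_seidel(A, b, tol=1e-8, max_sweeps=10_000):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_sweeps):
        for i in range(n):                 # sweep, using the freshest values
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x

# 1-D advection-diffusion toy system: tridiagonal, diagonally dominant
A = (np.diag(np.full(50, 2.5))
     + np.diag(np.full(49, -1.0), 1)
     + np.diag(np.full(49, -1.5), -1))
x = gauss_seidel(A, np.ones(50))
```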
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbons, Steven J.; Kvaerna, Tormod; Harris, David B.
We report that aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of underlying, fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams that are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located, using a separate, specially targeted semiautomatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm that may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove about half of the original detections that could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Lastly, further reductions in the number of detections in the parametric data streams are likely, using correlation and subspace detectors and/or empirical matched field processing.
Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines
Gibbons, Steven J.; Kvaerna, Tormod; Harris, David B.; ...
2016-06-08
We report that aftershock sequences following very large earthquakes present enormous challenges to near-real-time generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase-association algorithms and a significant deterioration in the quality of underlying, fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams that are then scanned by a phase-association algorithm to form event hypotheses. We consider the scenario in which a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located, using a separate, specially targeted semiautomatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid-search algorithm that may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase-association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove about half of the original detections that could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Lastly, further reductions in the number of detections in the parametric data streams are likely, using correlation and subspace detectors and/or empirical matched field processing.
Systematic development of technical textiles
NASA Astrophysics Data System (ADS)
Beer, M.; Schrank, V.; Gloy, Y.-S.; Gries, T.
2016-07-01
Technical textiles are used in various fields of application, ranging from small-scale products (e.g. medical applications) to large-scale products (e.g. aerospace applications). The development of new products is often complex and time-consuming due to multiple interacting parameters. These parameters are related to the production process and also result from the textile structure and the material used. A huge number of iteration steps is necessary to adjust the process parameters and finalize the new fabric structure. A design method is developed to support the systematic development of technical textiles and to reduce the number of iteration steps. The design method is subdivided into six steps, starting from the identification of the requirements. The fabric characteristics vary depending on the field of application. If possible, benchmarks are tested. A suitable fabric production technology then needs to be selected. The aim of the method is to support a development team in the technology selection without restricting the textile developer. After a suitable technology is selected, the transformation and correlation between input and output parameters follows. This generates the information needed for production of the structure. Afterwards, the first prototype can be produced and tested, and the resulting characteristics are compared with the initial product requirements.
Ratmansky, Motti; Minerbi, Amir; Kalichman, Leonid; Kent, John; Wende, Osnat; Finestone, Aharon S; Vulfsons, Simon
2017-04-01
To develop consensus on a position paper on the use of intramuscular stimulation (IMS) for the treatment of myofascial pain syndrome (MPS) by physicians in Israel. The Israeli Society of Musculoskeletal Medicine ran a modified Delphi process to gather opinions from a multidisciplinary expert panel. Eight experts in the treatment of MPS were chosen and asked to participate, and six participated. The position paper was iterated three times. After three iterations, general consensus was reached by all six experts. The general statement that was agreed on was: "IMS is one of the preferred treatments for myofascial pain syndrome. The treatment is evidence-based, effective, safe, and inexpensive. The position of the Israeli Society of Musculoskeletal Medicine is that the treatment should be taught and used by all primary care physicians and those physicians in other areas of medicine who deal with pain in their work." The position paper is a basis for clinical work and education programs for physicians interested in a better understanding and ability to treat patients with a musculoskeletal complaint or manifestation of disease. © 2016 World Institute of Pain.
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks of the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localization. The present work utilizes the LSEK to further stabilize the IFD and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
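The basic iterative-filtering idea, extracting a mode by repeatedly subtracting a low-pass local mean, can be sketched as below; a plain moving average stands in for the LSEK-based filter of the paper, and all parameter values are illustrative:

```python
# Minimal iterative filtering sketch: extract one intrinsic mode by repeatedly
# subtracting a low-pass "moving mean" from the signal.
import numpy as np

def moving_average(x, width):
    return np.convolve(x, np.ones(width) / width, mode="same")

def extract_mode(x, width=25, n_sift=30, tol=1e-6):
    h = x.copy()
    for _ in range(n_sift):
        m = moving_average(h, width)       # slowly-varying local mean
        h_new = h - m                      # candidate intrinsic mode
        if np.linalg.norm(h_new - h) < tol * np.linalg.norm(h):
            h = h_new
            break
        h = h_new
    return h, x - h                        # (mode, residual)

# Example: separate a fast oscillation from a slow trend
t = np.linspace(0, 1, 2000)
mode, residual = extract_mode(np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 2 * t))
```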
Reconstruction of coded aperture images
NASA Technical Reports Server (NTRS)
Bielefeld, Michael J.; Yin, Lo I.
1987-01-01
The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. Massively parallel processing, with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM in future coded-aperture experiments with the help of the MPP.
Silicon Based Mid Infrared SiGeSn Heterostructure Emitters and Detectors
2016-05-16
have investigated the surface plasmon enhancement of the GeSn p-i-n photodiode using gold metal nanostructures. We have conducted numerical...simulation of the plasmonic structure of 2D nano-hole array to tune the surface plasmon resonance into the absorption range of the GeSn active layer. Such a...diode can indeed be enhanced with the plasmonic structure on top. Within the time span of this project, we have completed one iteration of the process
An improved parallel fuzzy connected image segmentation method based on CUDA.
Wang, Liansheng; Li, Dong; Huang, Shaohui
2016-05-12
The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes prohibitively long. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed by adding a correction step on the edge points, which greatly enhances the calculation accuracy. The improved method proceeds iteratively: in the first iteration, the affinity computation strategy is changed and a look-up table is employed for memory reduction; in the second iteration, the voxels miscalculated because of asynchronism are updated again. Three hepatic vascular CT sequences of different sizes were used in the experiments, each with three different seeds. An NVIDIA Tesla C2075 was used to evaluate the improved method on these three data sets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that the method corrects the edge-point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and fewer errors than the original CUDA-kFOE, as demonstrated in the experimental results. In the future, we will focus on automatic acquisition methods and automatic processing.
NASA Technical Reports Server (NTRS)
Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.
1998-01-01
This report describes theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves vectorization of the code on uni-processor hardware and enables straightforward parallel-vector processing of element blocks on multi-processor hardware.
Wolfs, Vincent; Villazon, Mauricio Florencio; Willems, Patrick
2013-01-01
Applications such as real-time control, uncertainty analysis and optimization require an extensive number of model iterations. Full hydrodynamic sewer models are not sufficient for these applications due to the excessive computation time. Simplifications are consequently required. A lumped conceptual modelling approach results in a much faster calculation. The process of identifying and calibrating the conceptual model structure could, however, be time-consuming. Moreover, many conceptual models lack accuracy, or do not account for backwater effects. To overcome these problems, a modelling methodology was developed which is suited for semi-automatic calibration. The methodology is tested for the sewer system of the city of Geel in the Grote Nete river basin in Belgium, using both synthetic design storm events and long time series of rainfall input. A MATLAB/Simulink(®) tool was developed to guide the modeller through the step-wise model construction, reducing significantly the time required for the conceptual modelling process.
The role of graphics super-workstations in a supercomputing environment
NASA Technical Reports Server (NTRS)
Levin, E.
1989-01-01
A new class of very powerful workstations has recently become available which integrate near supercomputer computational performance with very powerful and high quality graphics capability. These graphics super-workstations are expected to play an increasingly important role in providing an enhanced environment for supercomputer users. Their potential uses include: off-loading the supercomputer (by serving as stand-alone processors, by post-processing of the output of supercomputer calculations, and by distributed or shared processing), scientific visualization (understanding of results, communication of results), and by real time interaction with the supercomputer (to steer an iterative computation, to abort a bad run, or to explore and develop new algorithms).
Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi
2015-09-01
Due to limited spatial resolution, the partial volume effect has been a major factor degrading quantitative accuracy in emission tomography systems. This study aims to investigate the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM), pGTM followed by multi-target correction (MTC), pGTM with known concentration in the blood pool, the former followed by MTC, and our newly proposed methods, which perform the MTC method iteratively, with the mean values in all regions estimated and updated from the MTC-corrected images at each step of the iterative process. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of PVC methods at both high and low count levels for low-dose applications. We performed two large-animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed our proposed iterative methods provide performance superior to other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity even at low count levels. The animal study results indicated the effectiveness of PVC in correcting the overestimation of IMBV due to blood pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low-count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis: Taylor series expansions of response variables in terms of design variables, and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle, thereby extending the range of applicability of the reanalysis technique. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques for a wide range of variations in the design variables.
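One way to picture the proposed modification, a Taylor-like first estimate refined by a fixed-point iteration that reuses the factorization of the original stiffness matrix, is sketched below; the matrices are toy examples and the scheme is a generic illustration rather than the paper's exact formulation:

```python
# Sketch of iterative structural reanalysis with a first-order (Taylor-like)
# initial estimate: solve (K0 + dK) u = f by reusing the factorization of K0.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def reanalyze(K0, dK, f, n_iter=5):
    lu = lu_factor(K0)                     # factor the original design once
    u0 = lu_solve(lu, f)                   # original response
    u = u0 - lu_solve(lu, dK @ u0)         # first-order (Taylor-like) estimate
    for _ in range(n_iter):                # fixed-point refinement
        u = lu_solve(lu, f - dK @ u)
    return u

# Toy example: a small SPD "stiffness" matrix and a 10% diagonal perturbation
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
K0 = A @ A.T + 30 * np.eye(30)
dK = 0.1 * np.diag(np.diag(K0))
u = reanalyze(K0, dK, rng.standard_normal(30))
```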
On iterative processes in the Krylov-Sonneveld subspaces
NASA Astrophysics Data System (ADS)
Ilin, Valery P.
2016-10-01
The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key idea of the IDR algorithms is the construction of embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization to a fixed subspace. Other independent approaches for the study and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces admits an interpretation as a modified algorithm in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.
Hardware for dynamic quantum computing.
Ryan, Colm A; Johnson, Blake R; Ristè, Diego; Donovan, Brian; Ohki, Thomas A
2017-10-01
We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow in a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of field programmable gate arrays to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.
Reducing neural network training time with parallel processing
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1995-01-01
Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.
ERIC Educational Resources Information Center
Rodriguez, Gabriel R.
2017-01-01
A growing number of schools are implementing PLCs to address school improvement; staff engage with data to identify student needs and determine instructional interventions. This is a starting point for engaging in the iterative process of learning for the teacher in order to increase student learning (Hord & Sommers, 2008). The iterative process…
Evaluating the iterative development of VR/AR human factors tools for manual work.
Liston, Paul M; Kay, Alison; Cromie, Sam; Leva, Chiara; D'Cruz, Mirabelle; Patel, Harshada; Langley, Alyson; Sharples, Sarah; Aromaa, Susanna
2012-01-01
This paper outlines the approach taken to iteratively evaluate a set of VR/AR (virtual reality / augmented reality) applications for five different manual-work applications - terrestrial spacecraft assembly, assembly-line design, remote maintenance of trains, maintenance of nuclear reactors, and large-machine assembly process design - and examines the evaluation data for evidence of the effectiveness of the evaluation framework as well as the benefits to the development process of feedback from iterative evaluation. ManuVAR is an EU-funded research project that is working to develop an innovative technology platform and a framework to support high-value, high-knowledge manual work throughout the product lifecycle. The results of this study demonstrate the iterative improvements reached throughout the design cycles, observable through the trending of the quantitative results from three successive trials of the applications and the investigation of the qualitative interview findings. The paper discusses the limitations of evaluation in complex, multi-disciplinary development projects and finds evidence of the effectiveness of the use of the particular set of complementary evaluation methods incorporating a common inquiry structure used for the evaluation - particularly in facilitating triangulation of the data.
An adaptive Gaussian process-based iterative ensemble smoother for data assimilation
NASA Astrophysics Data System (ADS)
Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao
2018-05-01
Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
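A plain (non-surrogate) iterative ensemble-smoother update, of the kind GPIES accelerates by replacing the forward-model calls with a GP surrogate, might be sketched as follows; the forward model g, the noise level R_std and the iteration count are placeholders, and the GP refinement itself is not shown:

```python
# Minimal iterative ensemble-smoother-style update (without the GP surrogate).
import numpy as np

def ies_update(M, g, d_obs, R_std, n_iter=4, rng=np.random.default_rng(1)):
    """M: (n_param, n_ens) parameter ensemble; g: forward model m -> data."""
    for _ in range(n_iter):
        D = np.column_stack([g(m) for m in M.T])          # predicted data
        pert = rng.normal(0, R_std, D.shape)              # perturbed observations
        Am = M - M.mean(axis=1, keepdims=True)
        Ad = D - D.mean(axis=1, keepdims=True)
        n_e = M.shape[1]
        C_md = Am @ Ad.T / (n_e - 1)                      # param-data covariance
        C_dd = Ad @ Ad.T / (n_e - 1) + np.eye(len(d_obs)) * R_std**2
        K = C_md @ np.linalg.inv(C_dd)                    # Kalman-like gain
        M = M + K @ (d_obs[:, None] + pert - D)
    return M
```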
Investigation of Physical Processes Limiting Plasma Density in DIII--D
NASA Astrophysics Data System (ADS)
Maingi, R.
1996-11-01
Understanding the physical processes which limit operating density is crucial in achieving peak performance in confined plasmas. Studies from many of the world's tokamaks have indicated the existence(M. Greenwald, et al., Nucl. Fusion 28) (1988) 2199 of an operational density limit (Greenwald limit, n^GW_max) which is proportional to the plasma current and independent of heating power. Several theories have reproduced the current dependence, but the lack of a heating power dependence in the data has presented an enigma. This limit impacts the International Thermonuclear Experimental Reactor (ITER) because the nominal operating density for ITER is 1.5 × n^GW_max. In DIII-D, experiments are being conducted to understand the physical processes which limit operating density in H-mode discharges; these processes include X-point MARFE formation, high core recycling and neutral pressure, resistive MHD stability, and core radiative collapse. These processes affect plasma properties, i.e. edge/scrape-off layer conduction and radiation, edge pressure gradient and plasma current density profile, and core radiation, which in turn restrict the accessible density regime. With divertor pumping and D2 pellet fueling, core neutral pressure is reduced and X-point MARFE formation is effectively eliminated. Injection of the largest-sized pellets does cause transient formation of divertor MARFEs which occasionally migrate to the X-point, but these are rapidly extinguished in pumped discharges in the time between pellets. In contrast to Greenwald et al., it is found that the density relaxation time after pellets is largely independent of the density relative to the Greenwald limit. Fourier analysis of Mirnov oscillations indicates the de-stabilization and growth of rotating, tearing-type modes (m/n= 2/1) when the injected pellets cause large density perturbations, and these modes often reduce energy confinement back to L-mode levels. We are examining the mechanisms for de-stabilization of the mode, the primary ones being neo-classical pressure gradient drivers. Discharges with a gradual density increase are often free of large amplitude tearing modes, allowing access to the highest density regimes in which off-axis beam deposition can lead to core radiative collapse, i.e. a central power balance limit. The highest achieved barne was 1.5 × n^GW_max with τ_E/τ_E^JET-DIII-D >= 0.9. The highest density obtained in L-mode discharges was 3 × n^GW_max. Implications of these results for ITER will be discussed.
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high-resolution X-ray computed tomography, where reconstruction volumes contain a large number of volume elements (several gigavoxels), this computational burden has prevented their breakthrough in practice. Besides the large amount of calculation, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphics processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
Stålberg, Anna; Sandberg, Anette; Söderbäck, Maja; Larsson, Thomas
2016-06-01
During the last decade, interactive technology has entered mainstream society. Its many users include children, even the youngest ones, who use the technology in different situations for both fun and learning. When designing technology for children, it is crucial to involve children in the process in order to arrive at an age-appropriate end product. In this study we describe the specific iterative process by which an interactive application was developed. The application is intended to facilitate the participation of young children, three to five years old, in healthcare situations. We also describe the specific contributions of the children, who tested the prototypes in a preschool, a primary health care clinic and an outpatient unit at a hospital during the development process. The iterative phases enabled the children to be involved at different stages of the process and to evaluate modifications and improvements made after each prior iteration. The children contributed their own perspectives (the child's perspective) on the usability, content and graphic design of the application, substantially improving the software and resulting in an age-appropriate product. Copyright © 2016 Elsevier Inc. All rights reserved.
Iterative dip-steering median filter
NASA Astrophysics Data System (ADS)
Huo, Shoudong; Zhu, Weihong; Shi, Taikun
2017-09-01
Seismic data are always contaminated with strong noise, which presents processing challenges, especially for signal preservation and true-amplitude response. This paper deals with an extension of the conventional median filter, which is widely used in random noise attenuation. The standard median filter works well with laterally aligned coherent events but cannot handle steep events, especially events with conflicting dips. In this paper, an iterative dip-steering median filter is proposed for the attenuation of random noise in the presence of multiple dips. The filter first identifies the dominant dips inside an optimized processing window by a Fourier-radial transform in the frequency-wavenumber domain. The optimum size of the processing window depends on the intensity of random noise that needs to be attenuated and the amount of signal to be preserved. The filter then applies a median filter along the dominant dip and retains the signals. Iterations process the residual signals along the remaining dominant dips in descending order, until all signals have been retained. The method is tested on both synthetic and field data gathers and compared with the commonly used f-k least-squares de-noising and f-x deconvolution.
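One pass of dip-steered median filtering on a 2-D gather can be sketched as below: traces are shifted to flatten a given dip, a lateral median is taken, and the shift is undone; the dip estimation via the Fourier-radial (f-k) scan and the iteration over remaining dips are omitted, and the function name is illustrative:

```python
# Sketch of one pass of dip-steered median filtering on a (time x trace) gather.
import numpy as np

def dip_median(gather, dip, width=7):
    """gather: (nt, nx) array; dip: time shift per trace, in samples."""
    nt, nx = gather.shape
    flattened = np.empty_like(gather)
    for ix in range(nx):                   # flatten the event along the dip
        flattened[:, ix] = np.roll(gather[:, ix], -int(round(dip * ix)))
    half = width // 2
    filtered = np.empty_like(gather)
    for ix in range(nx):                   # lateral median around each trace
        lo, hi = max(0, ix - half), min(nx, ix + half + 1)
        filtered[:, ix] = np.median(flattened[:, lo:hi], axis=1)
    out = np.empty_like(gather)
    for ix in range(nx):                   # undo the flattening shift
        out[:, ix] = np.roll(filtered[:, ix], int(round(dip * ix)))
    return out                             # signal retained along this dip
```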
Holland, Alexander; Aboy, Mateo
2009-07-01
We present a novel method to iteratively calculate discrete Fourier transforms for discrete time signals with sample time intervals that may be widely nonuniform. The proposed recursive Fourier transform (RFT) does not require interpolation of the samples to uniform time intervals, and each iterative transform update of N frequencies has computational order N. Because of the inherent non-uniformity in the time between successive heart beats, an application particularly well suited for this transform is power spectral density (PSD) estimation for heart rate variability. We compare RFT based spectrum estimation with Lomb-Scargle Transform (LST) based estimation. PSD estimation based on the LST also does not require uniform time samples, but the LST has a computational order greater than Nlog(N). We conducted an assessment study involving the analysis of quasi-stationary signals with various levels of randomly missing heart beats. Our results indicate that the RFT leads to comparable estimation performance to the LST with significantly less computational overhead and complexity for applications requiring iterative spectrum estimations.
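The flavour of an order-N-per-sample spectral update on nonuniformly sampled data can be illustrated as below; this is a generic running nonuniform DFT rather than the paper's exact RFT recursion, and the frequency grid and test signal are placeholders:

```python
# Running Fourier estimate on nonuniformly sampled data (e.g. RR intervals):
# each new sample updates all N frequency bins directly, no interpolation.
import numpy as np

class RunningNonuniformDFT:
    def __init__(self, freqs):
        self.freqs = np.asarray(freqs)      # frequencies of interest (Hz)
        self.coeffs = np.zeros(len(freqs), dtype=complex)
        self.n = 0

    def update(self, t, x):
        """Fold one sample x taken at time t into every bin: O(N) work."""
        self.coeffs += x * np.exp(-2j * np.pi * self.freqs * t)
        self.n += 1

    def psd(self):
        return np.abs(self.coeffs) ** 2 / max(self.n, 1)

# Example: heart-beat-like irregular sampling of a 0.1 Hz oscillation
rng = np.random.default_rng(2)
times = np.cumsum(0.8 + 0.2 * rng.random(300))
dft = RunningNonuniformDFT(np.linspace(0.01, 0.5, 50))
for t in times:
    dft.update(t, np.sin(2 * np.pi * 0.1 * t))
spectrum = dft.psd()
```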
Mousa Bacha, Rasha; Abdelaziz, Somaia
2017-01-01
Objectives: To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. Methods: This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Results: Document analysis found all programs' ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar. Power distance was recognized by all coordinators, who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Conclusions: Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training, which may be addressed through simple measures to accommodate communication preferences. PMID:28315858
Wilbur, Kerry; Mousa Bacha, Rasha; Abdelaziz, Somaia
2017-03-17
To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Document analysis found all programs' ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar. Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences.
Schlosser, Danielle; Campellone, Timothy; Kim, Daniel; Truong, Brandy; Vergani, Silvia; Ward, Charlie; Vinogradov, Sophia
2016-04-28
Despite improvements in treating psychosis, schizophrenia remains a chronic and debilitating disorder that affects approximately 1% of the US population and costs society more than depression, dementia, and other medical illnesses across most of the lifespan. Improving functioning early in the course of illness could have significant implications for long-term outcome of individuals with schizophrenia. Yet, current gold-standard treatments do not lead to clinically meaningful improvements in outcome, partly due to the inherent challenges of treating a population with significant cognitive and motivational impairments. The rise of technology presents an opportunity to develop novel treatments that may circumvent the motivational and cognitive challenges observed in schizophrenia. The purpose of this study was two-fold: (1) to evaluate the feasibility and acceptability of implementing a Personalized Real-Time Intervention for Motivation Enhancement (PRIME), a mobile app intervention designed to target reward-processing impairments, enhance motivation, and thereby improve quality of life in recent onset schizophrenia, and (2) to evaluate the empirical benefits of using an iterative, user-centered design (UCD) process. We conducted two design workshops with 15 key stakeholders, followed by a series of in-depth interviews in collaboration with IDEO, a design and innovation firm. The UCD approach ultimately resulted in the first iteration of PRIME, which was evaluated by 10 RO participants. Results from the Stage 1 participants were then used to guide the next iteration that is currently being evaluated in an ongoing RCT. Participants in both phases were encouraged to use the app daily with a minimum frequency of 1/week over a 12-week period. The UCD process resulted in the following feature set: (1) delivery of text message (short message service, SMS)-based motivational coaching from trained therapists, (2) individualized goal setting in prognostically important psychosocial domains, (3) social networking via direct peer-to-peer messaging, and (4) a community "moments feed" to capture and reinforce rewarding experiences and goal achievements. Users preferred an experience that highlighted several of the principles of self-determination theory, including the desire for more control of their future (autonomy and competence) and an approach that helps them improve existing relationships (relatedness). IDEO also recommended an approach that was casual, friendly, and nonstigmatizing, which is in line with the recovery model of psychosis. After 12 weeks of using PRIME, participants used the app, on average, every other day, were actively engaged with its various features each time they logged in, and retention and satisfaction were high (20/20, 100% retention, high satisfaction ratings). The iterative design process led to a 2- to 3-fold increase in engagement from Stage 1 to Stage 2 in almost every aspect of the platform. These results indicate that the neuroscience-informed mobile app, PRIME, is a feasible and acceptable intervention for young people with schizophrenia.
Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography
NASA Astrophysics Data System (ADS)
Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.
2016-10-01
With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT, that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.
Preconditioned conjugate-gradient methods for low-speed flow calculations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1993-01-01
An investigation is conducted into the viability of using a generalized Conjugate Gradient-like method as an iterative solver to obtain steady-state solutions of very low-speed fluid flow problems. Low-speed flow at Mach 0.1 over a backward-facing step is chosen as a representative test problem. The unsteady form of the two dimensional, compressible Navier-Stokes equations is integrated in time using discrete time-steps. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux split formulation. The new iterative solver is used to solve a linear system of equations at each step of the time-integration. Preconditioning techniques are used with the new solver to enhance the stability and convergence rate of the solver and are found to be critical to the overall success of the solver. A study of various preconditioners reveals that a preconditioner based on the Lower-Upper Successive Symmetric Over-Relaxation iterative scheme is more efficient than a preconditioner based on Incomplete L-U factorizations of the iteration matrix. The performance of the new preconditioned solver is compared with a conventional Line Gauss-Seidel Relaxation (LGSR) solver. Overall speed-up factors of 28 (in terms of global time-steps required to converge to a steady-state solution) and 20 (in terms of total CPU time on one processor of a CRAY-YMP) are found in favor of the new preconditioned solver, when compared with the LGSR solver.
Preconditioned Conjugate Gradient methods for low speed flow calculations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1993-01-01
An investigation is conducted into the viability of using a generalized Conjugate Gradient-like method as an iterative solver to obtain steady-state solutions of very low-speed fluid flow problems. Low-speed flow at Mach 0.1 over a backward-facing step is chosen as a representative test problem. The unsteady form of the two dimensional, compressible Navier-Stokes equations are integrated in time using discrete time-steps. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux split formulation. The new iterative solver is used to solve a linear system of equations at each step of the time-integration. Preconditioning techniques are used with the new solver to enhance the stability and the convergence rate of the solver and are found to be critical to the overall success of the solver. A study of various preconditioners reveals that a preconditioner based on the lower-upper (L-U)-successive symmetric over-relaxation iterative scheme is more efficient than a preconditioner based on incomplete L-U factorizations of the iteration matrix. The performance of the new preconditioned solver is compared with a conventional line Gauss-Seidel relaxation (LGSR) solver. Overall speed-up factors of 28 (in terms of global time-steps required to converge to a steady-state solution) and 20 (in terms of total CPU time on one processor of a CRAY-YMP) are found in favor of the new preconditioned solver, when compared with the LGSR solver.
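A generic preconditioned conjugate-gradient loop (for a symmetric positive-definite system, with simple Jacobi preconditioning) is sketched below for orientation; the flow solver described above uses a CG-like method for nonsymmetric systems with an SSOR-type preconditioner, which is not reproduced here:

```python
# Generic preconditioned conjugate-gradient sketch for an SPD system A x = b.
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    """M_inv(r) applies an approximation of M^{-1} to the residual r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Jacobi (diagonal) preconditioning of a toy SPD matrix
rng = np.random.default_rng(3)
B = rng.standard_normal((100, 100))
A = B @ B.T + 100 * np.eye(100)
b = rng.standard_normal(100)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
```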
Dealing with gene expression missing data.
Brás, L P; Menezes, J C
2006-05-01
A comparative evaluation of different methods is presented for estimating missing values in microarray data: weighted K-nearest neighbours imputation (KNNimpute), regression-based methods such as local least squares imputation (LLSimpute) and partial least squares imputation (PLSimpute), and Bayesian principal component analysis (BPCA). The influence on prediction accuracy of several factors, such as the methods' parameters, the type of data relationships used in the estimation process (i.e. row-wise, column-wise or both), the missing rate and pattern, and the type of experiment [time series (TS), non-time series (NTS) or mixed (MIX) experiments], is elucidated. Improvements based on the iterative use of data (iterative LLS and PLS imputation--ILLSimpute and IPLSimpute), the need to perform initial imputations (modified PLS and Helland PLS imputation--MPLSimpute and HPLSimpute) and the type of relationships employed (KNNarray, LLSarray, HPLSarray and alternating PLS--APLSimpute) are proposed. Overall, it is shown that data set properties (type of experiment, missing rate and pattern) affect the data similarity structure, therefore influencing the methods' performance. LLSimpute and ILLSimpute are preferable in the presence of data with a stronger similarity structure (TS and MIX experiments), whereas PLS-based methods (MPLSimpute, IPLSimpute and APLSimpute) are preferable when estimating NTS missing data.
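A minimal row-wise KNN imputation in the spirit of KNNimpute is sketched below; the k value, the inverse-distance weighting and the restriction to fully observed neighbour rows are simplifying assumptions, and the regression- and PLS-based variants compared in the abstract are not shown:

```python
# Minimal row-wise KNN imputation sketch for a matrix with NaNs.
import numpy as np

def knn_impute(X, k=5):
    X = X.copy()
    missing_rows = np.where(np.isnan(X).any(axis=1))[0]
    complete = X[~np.isnan(X).any(axis=1)]          # fully observed rows only
    for i in missing_rows:
        row = X[i]
        obs = ~np.isnan(row)
        # distance to complete rows over the observed columns only
        d = np.sqrt(((complete[:, obs] - row[obs]) ** 2).mean(axis=1))
        idx = np.argsort(d)[:k]
        nn, w = complete[idx], 1.0 / (d[idx] + 1e-12)   # inverse-distance weights
        row[~obs] = (w[:, None] * nn[:, ~obs]).sum(axis=0) / w.sum()
        X[i] = row
    return X
```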
Inferring the Presence of Reverse Proxies Through Timing Analysis
2015-06-01
Figure 3.2: the three different instances of timing measurement configurations. Figure 3.3: permutation of a web request iteration. Their data showed that they could detect at least 6 bits of entropy between unlike devices and that it was enough to determine that they are in fact ... depending on the permutation being executed, so that every iteration was conducted under the same distance.
Varying face occlusion detection and iterative recovery for face recognition
NASA Astrophysics Data System (ADS)
Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei
2017-05-01
In most sparse representation methods for face recognition (FR), occlusion problems are usually handled by removing the occluded part of both query samples and training samples before performing the recognition process. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitations of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, an image-processing and intersection-based clustering combination method is used for occlusion FR; (2) according to the accurate occlusion map, new integrated facial images are recovered iteratively and put into the recognition process; and (3) the effectiveness of our method in terms of recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
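As an illustration of replacing an iterative nonlinear fit with linear steps, the sketch below estimates B from the slope of log|dy/dt| obtained with a central difference, then recovers A and C by ordinary least squares. This is a hedged, generic variant of the idea for clean, monotone, uniformly sampled data, not the paper's exact discrete-calculus and Mean-Value-Theorem construction.

```python
import numpy as np

def fit_exponential(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C for monotone, uniformly
    sampled data: B from the slope of log|dy/dt| vs t, then A and C by
    ordinary linear least squares."""
    dt = t[1] - t[0]
    dy = (y[2:] - y[:-2]) / (2 * dt)          # central differences
    tm = t[1:-1]
    B = np.polyfit(tm, np.log(np.abs(dy)), 1)[0]
    # With B known, y is linear in [exp(B t), 1]
    G = np.column_stack([np.exp(B * t), np.ones_like(t)])
    A, C = np.linalg.lstsq(G, y, rcond=None)[0]
    return A, B, C

t = np.linspace(0, 2, 50)
y = 3.0 * np.exp(-1.5 * t) + 0.5
print(fit_exponential(t, y))   # close to (3.0, -1.5, 0.5) on noise-free data
```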
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computation- and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of the common Poisson methods and to achieve a high convergence rate in most cases. The approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption and a broad range of applications.
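For reference, a plain Jacobi iteration for the discrete Poisson equation used in seamless cloning is sketched below; it illustrates why a naive iterative solver is slow compared with the hybrid GPU approach described above, and it is not the MDGS solver itself. The function and its assumptions (grayscale float arrays, mask away from the image border) are illustrative.

```python
import numpy as np

def poisson_blend_jacobi(target, source, mask, iters=2000):
    """Seamless cloning via Jacobi iteration of the discrete Poisson equation:
    inside `mask` the result follows the source gradients (guidance field),
    while boundary values come from the target image.
    Assumes the mask does not touch the image border (np.roll wraps around)."""
    f = target.astype(float).copy()
    s = source.astype(float)
    lap = (4 * s - np.roll(s, 1, 0) - np.roll(s, -1, 0)
                 - np.roll(s, 1, 1) - np.roll(s, -1, 1))   # source Laplacian
    inside = mask.astype(bool)
    for _ in range(iters):
        nb = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
              + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[inside] = ((nb + lap) / 4.0)[inside]             # Jacobi update
    return f
```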
Defining conservation targets on a landscape-scale
Benscoter, A.M.; Romañach, Stephanie; Brandt, Laura A.
2015-01-01
Conservation planning, the process of deciding how to protect, conserve, enhance and(or) minimize loss of natural and cultural resources, is a fundamental process to achieve conservation success in a time of rapid environmental change. Conservation targets, the measurable expressions of desired resource conditions, are an important tool in biological planning to achieve effective outcomes. Conservation targets provide a focus for planning, design, conservation action, and collaborative monitoring of environmental trends to guide landscape-scale conservation to improve the quality and quantity of key ecological and cultural resources. It is essential to have an iterative and inclusive method to define conservation targets that is replicable and allows for the evaluation of the effectiveness of conservation targets over time. In this document, we describe a process that can be implemented to achieve landscape-scale conservation, which includes defining conservation targets. We also describe what has been accomplished to date (September 2015) through this process for the Peninsular Florida Landscape Conservation Cooperative (PFLCC).
Method of up-front load balancing for local memory parallel processors
NASA Technical Reports Server (NTRS)
Baffes, Paul Thomas (Inventor)
1990-01-01
In a parallel processing computer system with multiple processing units and shared memory, a method is disclosed for uniformly balancing the aggregate computational load in, and utilizing minimal memory by, a network having identical computations to be executed at each connection therein. Read-only and read-write memory are subdivided into a plurality of process sets, which function like artificial processing units. Said plurality of process sets is iteratively merged and reduced to the number of processing units without exceeding the balance load. Said merger is based upon the value of a partition threshold, which is a measure of the memory utilization. The turnaround time and memory savings of the instant method are functions of the number of processing units available and the number of partitions into which the memory is subdivided. Typical results of the preferred embodiment yielded memory savings of from sixty to seventy five percent.
Scheduling and rescheduling with iterative repair
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael
1992-01-01
This paper describes the GERRY scheduling and rescheduling system being applied to coordinate Space Shuttle Ground Processing. The system uses constraint-based iterative repair, a technique that starts with a complete but possibly flawed schedule and iteratively improves it by using constraint knowledge within repair heuristics. In this paper we explore the tradeoff between the informedness and the computational cost of several repair heuristics. We show empirically that some knowledge can greatly improve the convergence speed of a repair-based system, but that too much knowledge, such as the knowledge embodied within the MIN-CONFLICTS lookahead heuristic, can overwhelm a system and result in degraded performance.
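The repair loop of constraint-based iterative repair can be illustrated, in the MIN-CONFLICTS style, on a toy n-queens problem: start from a complete but flawed assignment and repeatedly move a conflicted variable to its least-conflicting value. This is a generic sketch, not the GERRY scheduler or its repair heuristics.

```python
import random

def conflicts(cols, row, col):
    """Number of queens attacking a queen placed at (row, col)."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts_nqueens(n=50, max_steps=10000):
    # Start from a complete (but probably flawed) assignment, then repair it.
    cols = [random.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols                     # complete, conflict-free assignment
        r = random.choice(conflicted)
        # move the chosen variable to the value with the fewest conflicts
        scores = [conflicts(cols, r, c) for c in range(n)]
        best = min(scores)
        cols[r] = random.choice([c for c, s in enumerate(scores) if s == best])
    return None

print(min_conflicts_nqueens(20) is not None)   # usually True well within the step budget
```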
Optimization of tomographic reconstruction workflows on geographically distributed resources
Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...
2016-01-01
New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on the resources tested). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
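The three-stage performance model lends itself to a back-of-the-envelope estimator: predicted end-to-end time is the sum of transfer, wait/queue, and compute terms. The function and parameter names below are illustrative placeholders, not the fitted models from the paper.

```python
def estimate_workflow_time(data_gb, bandwidth_gbps, queue_wait_s,
                           work_units, throughput_units_per_s, n_nodes):
    """Rough end-to-end time for an iterative reconstruction workflow:
    transfer + wait/queue + computation (computation assumed to scale
    linearly with the number of compute nodes)."""
    transfer = data_gb * 8.0 / bandwidth_gbps            # seconds
    compute = work_units / (throughput_units_per_s * n_nodes)
    return transfer + queue_wait_s + compute

# e.g. 200 GB over 10 Gb/s, 5 min queue, 1e6 work units at 50 units/s/node on 32 nodes
print(estimate_workflow_time(200, 10, 300, 1e6, 50, 32))
```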
Practical Use of Operation Data in the Process Industry
NASA Astrophysics Data System (ADS)
Kano, Manabu
This paper aims to reveal real problems in the process industry and introduce recent developments to solve such problems from the viewpoint of effective use of operation data. Two topics are discussed: virtual sensors and process control. First, in order to clarify the present state and problems, a part of our recent questionnaire survey of process control is quoted. It is emphasized that maintenance is a key issue not only for soft-sensors but also for controllers. Then, new techniques are explained. The first one is correlation-based just-in-time modeling (CoJIT), which can realize higher prediction performance than conventional methods and simplify model maintenance. The second is extended fictitious reference iterative tuning (E-FRIT), which can realize data-driven PID control parameter tuning without process modeling. The great usefulness of these techniques is demonstrated through their industrial applications.
A prototype of an automated high resolution InSAR volcano-monitoring system in the MED-SUV project
NASA Astrophysics Data System (ADS)
Chowdhury, Tanvir A.; Minet, Christian; Fritz, Thomas
2016-04-01
Volcanic processes, which produce a variety of geological and hydrological hazards, are difficult to predict and capable of triggering natural disasters on regional to global scales. It is therefore important to monitor volcanoes continuously and with a high spatial and temporal sampling rate. The monitoring of active volcanoes requires reliable measurement of surface deformation before, during and after volcanic activity, and it supports better understanding and modelling of the geophysical processes involved. Space-borne synthetic aperture radar (SAR) interferometry (InSAR), persistent scatterer interferometry (PSI) and the small baseline subset algorithm (SBAS) provide powerful tools for observing eruptive activity and measuring surface changes with millimetre accuracy. All of these techniques, together with deformation time-series extraction, address the challenges by exploiting medium to large SAR image stacks. The process of selecting, ordering, downloading, storing, logging, extracting and preparing the data for processing is very time consuming and has to be done manually for every single data stack. In many cases it is even an iterative process that has to be repeated regularly and continuously. Data processing therefore becomes slow, which causes significant delays in data delivery. The SAR Satellite based High Resolution Data Acquisition System, which will be developed at DLR, will automate these time-consuming tasks and enable an operational volcano-monitoring system. Every 24 hours the system searches for newly acquired scenes over the volcanoes, keeps track of the data orders, logs the status and downloads the provided data via FTP transfer, including e-mail alerts. Furthermore, the system will deliver specified reports and maps to a database for review and use by specialists. User interaction will be minimized and iterative processes will be avoided entirely. In this presentation, a prototype of the SAR Satellite based High Resolution Data Acquisition System, developed and operated by DLR, is described in detail. The workflow of the developed system is described, which allows a meaningful contribution of SAR to the monitoring of volcanic eruptive activity. A more robust and efficient InSAR data processing chain in the IWAP processor will be introduced in the framework of a remote sensing task of the MED-SUV project. An application of the developed prototype system to historic eruptions of Mount Etna and Piton de la Fournaise is presented in the last part.
Progressing in cable-in-conduit for fusion magnets: from ITER to low cost, high performance DEMO
NASA Astrophysics Data System (ADS)
Uglietti, D.; Sedlak, K.; Wesche, R.; Bruzzone, P.; Muzzi, L.; della Corte, A.
2018-05-01
The performance of ITER toroidal field (TF) conductors still has a significant margin for improvement because the effective strain, between ‑0.62% and ‑0.95%, limits the strands' critical current to between 15% and 45% of the maximum achievable. Prototype Nb3Sn cable-in-conduit conductors have been designed, manufactured and tested in the framework of the EUROfusion DEMO activities. In these conductors the effective strain shows a clear improvement with respect to the ITER conductors, reaching values between ‑0.55% and ‑0.28% and resulting in a strand critical current two to three times higher than in ITER conductors. In terms of the amount of Nb3Sn strand required for the construction of the DEMO TF magnet system, such an improvement may lead to a reduction of at least a factor of two with respect to a similar magnet built with ITER-type conductors; a further saving of Nb3Sn is possible if graded conductors/windings are employed. In the best case the DEMO TF magnet could require fewer Nb3Sn strands than the ITER one, despite the larger size of DEMO. Moreover, high-performance conductors could be operated at higher fields than ITER TF conductors, enabling the construction of low-cost, compact, high-field tokamaks.
Brownian motion with adaptive drift for remaining useful life prediction: Revisited
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed the Gaussian distribution, and it was iteratively estimated by using Kalman filtering once a new measurement was available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption used in the state space modelling was that in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which caused a contradiction with the predicted drift coefficient evolution driven by an additive Gaussian process noise. In this paper, to alleviate such an underlying assumption, a new state space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion that theoretically explains the main reasons why the constructed state space model can result in high remaining useful life prediction accuracies is provided. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
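A minimal sketch of the adaptive-drift idea: a scalar Kalman filter that re-estimates the drift coefficient of a Brownian degradation path whenever a new measurement arrives. The state-space form used here (random-walk drift, increment measurement) is a simplified assumption, not the specific model derived in the paper.

```python
import numpy as np

def update_drift(b_prev, P_prev, dx, dt, q, sigma_b):
    """One Kalman update of the drift coefficient b.
    State:       b_k = b_{k-1} + w,    w ~ N(0, q)
    Measurement: dx  = b_k * dt + v,   v ~ N(0, sigma_b**2 * dt)"""
    b_pred, P_pred = b_prev, P_prev + q           # predict
    S = dt * P_pred * dt + sigma_b ** 2 * dt      # innovation variance
    K = P_pred * dt / S                           # Kalman gain
    b_post = b_pred + K * (dx - b_pred * dt)      # update
    P_post = (1 - K * dt) * P_pred
    return b_post, P_post

# Simulate a degradation path with true drift 0.8 and track it on-line.
rng = np.random.default_rng(0)
dt, sigma_b, true_drift = 0.1, 0.05, 0.8
b, P, x_prev = 0.0, 1.0, 0.0
for _ in range(200):
    x = x_prev + true_drift * dt + sigma_b * np.sqrt(dt) * rng.standard_normal()
    b, P = update_drift(b, P, x - x_prev, dt, q=1e-5, sigma_b=sigma_b)
    x_prev = x
print(b)   # should approach 0.8
```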
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanford, M.
1997-12-31
Most commercially available quasistatic finite element programs assemble element stiffnesses into a global stiffness matrix, then use a direct linear equation solver to obtain nodal displacements. However, for large problems (greater than a few hundred thousand degrees of freedom), the memory size and computation time required for this approach become prohibitive. Moreover, direct solution does not lend itself to the parallel processing needed for today's multiprocessor systems. This talk gives an overview of the iterative solution strategy of JAS3D, the nonlinear large-deformation quasistatic finite element program. Because its architecture is derived from an explicit transient-dynamics code, it never assembles a global stiffness matrix. The author describes the approach he used to implement the solver on multiprocessor computers, and shows examples of problems run on hundreds of processors and more than a million degrees of freedom. Finally, he describes some of the work he is presently doing to address the challenges of iterative convergence for ill-conditioned problems.
Stepwise Iterative Fourier Transform: The SIFT
NASA Technical Reports Server (NTRS)
Benignus, V. A.; Benignus, G.
1975-01-01
A program, designed specifically to study the respective effects of some common data problems on results obtained through stepwise iterative Fourier transformation of synthetic data with known waveform composition, was outlined. Included in this group were the problems of gaps in the data, different time-series lengths, periodic but nonsinusoidal waveforms, and noisy (low signal-to-noise) data. Results on sinusoidal data were also compared with results obtained on narrow band noise with similar characteristics. The findings showed that the analytic procedure under study can reliably reduce data in the nature of (1) sinusoids in noise, (2) asymmetric but periodic waves in noise, and (3) sinusoids in noise with substantial gaps in the data. The program was also able to analyze narrow-band noise well, but with increased interpretational problems. The procedure was shown to be a powerful technique for analysis of periodicities, in comparison with classical spectrum analysis techniques. However, informed use of the stepwise procedure nevertheless requires some background of knowledge concerning characteristics of the biological processes under study.
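A generic stepwise procedure in the same spirit can be sketched as follows: at each iteration locate the dominant periodogram peak, least-squares fit a sinusoid at that frequency, subtract it, and repeat on the residual. This is an assumed illustration of stepwise iterative Fourier analysis, not necessarily identical to the SIFT program.

```python
import numpy as np

def stepwise_fourier(t, y, n_components=3):
    """Iteratively extract dominant sinusoids from uniformly sampled data."""
    resid = y - y.mean()
    comps = []
    n, dt = len(t), t[1] - t[0]
    freqs = np.fft.rfftfreq(n, dt)
    for _ in range(n_components):
        spec = np.abs(np.fft.rfft(resid))
        spec[0] = 0.0                       # ignore the DC term
        f = freqs[np.argmax(spec)]
        # least-squares fit of a*cos + b*sin at the dominant frequency
        G = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
        (a, b), *_ = np.linalg.lstsq(G, resid, rcond=None)
        comps.append((f, np.hypot(a, b)))   # frequency and amplitude
        resid = resid - G @ np.array([a, b])
    return comps, resid

t = np.arange(0, 20, 0.05)
y = 2 * np.sin(2 * np.pi * 0.5 * t) + 0.8 * np.sin(2 * np.pi * 1.3 * t)
print(stepwise_fourier(t, y)[0])   # recovers the 0.5 Hz and 1.3 Hz components
```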
NASA Astrophysics Data System (ADS)
Landman, I. S.; Bazylev, B. N.; Garkusha, I. E.; Loarte, A.; Pestchanyi, S. E.; Safronov, V. M.
2005-03-01
For ITER, the potential material damage of plasma-facing tungsten, CFC, or beryllium components during transient processes such as ELMs or mitigated disruptions is simulated numerically using the MHD code FOREV-2D and the melt motion code MEMOS-1.5D for a heat deposition in the range of 0.5-3 MJ/m² on the time scale of 0.1-1 ms. Such loads can cause significant evaporation at the target surface and a contamination of the SOL by the ions of evaporated material. Results are presented on carbon plasma dynamics in toroidal geometry and on radiation fluxes from the SOL carbon ions obtained with FOREV-2D. The validation of MEMOS-1.5D against the plasma gun tokamak simulators MK-200UG and QSPA-Kh50, based on the tungsten melting threshold, is described. Simulations with MEMOS-1.5D for a beryllium first wall that provide important details about the melt motion dynamics and typical features of the damage are reported.
Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.
Dastmalchi, Pouya; Veronis, Georgios
2013-12-30
We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
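The structure of the space mapping loop can be shown on a scalar toy problem: optimize the cheap coarse model once, then at each iteration run the fine model, extract the coarse parameter that reproduces the fine response, and update the design (aggressive space mapping with an identity mapping Jacobian). The fine and coarse functions below merely stand in for the FDFD and transmission-line models; they are not taken from the paper.

```python
import numpy as np

# "Fine" model (expensive, accurate) and "coarse" surrogate (cheap, roughly right).
fine   = lambda x: (x - 0.15) ** 2 + 1.0      # stand-in for a full-wave FDFD run
coarse = lambda x: x ** 2 + 1.0               # stand-in for the transmission-line model

target = 1.25
x_coarse_opt = np.sqrt(target - 1.0)          # cheap coarse-model optimum (= 0.5)

x = x_coarse_opt
for k in range(10):
    r = fine(x)                               # one expensive fine-model evaluation
    if abs(r - target) < 1e-9:
        break
    z = np.sqrt(r - 1.0)                      # parameter extraction: coarse(z) = fine(x)
    x += x_coarse_opt - z                     # aggressive space-mapping update
print(k, x)                                   # converges to the fine optimum 0.65 in a few steps
```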
NASA Astrophysics Data System (ADS)
Di Noia, Antonio; Hasekamp, Otto P.; Wu, Lianghai; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John E.
2017-11-01
In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements - combining neural networks and an iterative scheme based on Phillips-Tikhonov regularization - is described. The algorithm - which is an extension of a scheme previously designed for ground-based retrievals - is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips-Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.
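The retrieval structure, a fast first guess feeding a regularized iterative inversion, can be sketched for a linear toy forward model as below. The neural network is replaced by an arbitrary perturbed first guess and the forward model by a random matrix, so this is only a schematic of Tikhonov-type regularization toward a prior, not the RSP retrieval algorithm.

```python
import numpy as np

def tikhonov_retrieval(K, y, x_first_guess, reg=1e-2, n_iter=20):
    """Iterative Tikhonov-regularized retrieval. Here the forward model is
    linear (y = K x), so the loop converges in one step; for a nonlinear
    model K would be re-linearized around x at every iteration."""
    x = x_first_guess.copy()
    I = np.eye(len(x))
    for _ in range(n_iter):
        # minimize ||K x - y||^2 + reg * ||x - x_first_guess||^2
        lhs = K.T @ K + reg * I
        rhs = K.T @ y + reg * x_first_guess
        x = np.linalg.solve(lhs, rhs)
    return x

rng = np.random.default_rng(1)
K = rng.standard_normal((50, 5))
x_true = np.array([0.3, 1.2, -0.5, 0.8, 0.1])
y = K @ x_true + 0.01 * rng.standard_normal(50)
x0 = x_true + 0.2 * rng.standard_normal(5)     # stands in for the neural-network first guess
print(tikhonov_retrieval(K, y, x0))
```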
Traffic Aware Planner for Cockpit-Based Trajectory Optimization
NASA Technical Reports Server (NTRS)
Woods, Sharon E.; Vivona, Robert A.; Henderson, Jeffrey; Wing, David J.; Burke, Kelly A.
2016-01-01
The Traffic Aware Planner (TAP) software application is a cockpit-based advisory tool designed to be hosted on an Electronic Flight Bag and to enable and test the NASA concept of Traffic Aware Strategic Aircrew Requests (TASAR). The TASAR concept provides pilots with optimized route changes (including altitude) that reduce fuel burn and/or flight time, avoid interactions with known traffic, weather and restricted airspace, and may be used by the pilots to request a route and/or altitude change from Air Traffic Control. Developed using an iterative process, TAP's latest improvements include human-machine interface design upgrades and added functionality based on the results of human-in-the-loop simulation experiments and flight trials. Architectural improvements have been implemented to prepare the system for operational-use trials with partner commercial airlines. Future iterations will enhance coordination with airline dispatch and add functionality to improve the acceptability of TAP-generated route-change requests to pilots, dispatchers, and air traffic controllers.
Neutron spectroscopy as a fuel ion ratio diagnostic: lessons from JET and prospects for ITER.
Ericsson, G; Conroy, S; Gatu Johnson, M; Andersson Sundén, E; Cecconello, M; Eriksson, J; Hellesen, C; Sangaroon, S; Weiszflog, M
2010-10-01
The determination of the fuel ion ratio n(t)/n(d) in ITER is required at a precision of 20%, time resolution of 100 ms, spatial resolution of a/10, and over a range of 0.01
NASA Astrophysics Data System (ADS)
Terando, A. J.; Lascurain, A.; Aldridge, H. D.; Davis, C.
2016-12-01
Climate Voyager provides an innovative way to visualize both large-scale and local climate change projections using a three-map layout and time series plot. This product includes a suite of tools designed to assist with climate risk and opportunity assessments, including changes in average seasonal conditions and the capability to evaluate a variety of different decision-relevant thresholds (e.g. changes in extreme temperature occurrence). Each tool summarizes output from 20 downscaled global climate models and contains a historical average for comparison with the spread of projected future outcomes. The Climate Voyager website is interactive, allowing users to explore both regional and location-specific guidance for two Representative Concentration Pathways (RCPs) and four future 20-year time periods. By presenting climate model projections and measures of uncertainty of specific parameters beyond just annual temperatures and precipitation, Climate Voyager can help a wide variety of decision makers plan for climate changes that may affect them. We present a case study in which a new module was developed within Climate Voyager for use by Tribes and native communities in the eastern U.S. to help make informed resource decisions. In this first attempt, Ramps (Allium tricoccum), a plant species of great cultural significance, was incorporated through consultation with the tribal organization. We will also discuss the process of engagement employed with end-users and the potential to make the Climate Voyager interface an iterative, co-produced process to enhance the usability of climate model information for adaptation planning.
NASA Astrophysics Data System (ADS)
van Gent, P. L.; Michaelis, D.; van Oudheusden, B. W.; Weiss, P.-É.; de Kat, R.; Laskari, A.; Jeon, Y. J.; David, L.; Schanz, D.; Huhn, F.; Gesemann, S.; Novara, M.; McPhaden, C.; Neeteson, N. J.; Rival, D. E.; Schneiders, J. F. G.; Schrijer, F. F. J.
2017-04-01
A test case for pressure field reconstruction from particle image velocimetry (PIV) and Lagrangian particle tracking (LPT) has been developed by constructing a simulated experiment from a zonal detached eddy simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as continuous time-resolved data which can realistically only be obtained for low-speed flows. Particle images were processed using tomographic PIV processing as well as the LPT algorithm `Shake-The-Box' (STB). Multiple pressure field reconstruction techniques have subsequently been applied to the PIV results (Eulerian approach, iterative least-square pseudo-tracking, Taylor's hypothesis approach, and instantaneous Vortex-in-Cell) and LPT results (FlowFit, Vortex-in-Cell-plus, Voronoi-based pressure evaluation, and iterative least-square pseudo-tracking). All methods were able to reconstruct the main features of the instantaneous pressure fields, including methods that reconstruct pressure from a single PIV velocity snapshot. Highly accurate reconstructed pressure fields could be obtained using LPT approaches in combination with more advanced techniques. In general, the use of longer series of time-resolved input data, when available, allows more accurate pressure field reconstruction. Noise in the input data typically reduces the accuracy of the reconstructed pressure fields, but none of the techniques proved to be critically sensitive to the amount of noise added in the present test case.
2010-02-24
[Only fragments of this abstract were extracted:] ... electronic Schrödinger equation. In previous grant cycles, we implemented the NEO approach at the Hartree-Fock (NEO-HF), configuration interaction ... electronic and nuclear molecular orbitals. The resulting electronic and nuclear Hartree-Fock-Roothaan equations are solved iteratively until self-consistency ... directly into the standard Hartree-Fock-Roothaan equations, which are solved iteratively to self-consistency. The density matrix representation ...
Novel aspects of plasma control in ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, D.; Jackson, G.; Walker, M.
2015-02-15
ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER including various crucial integration issues are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.
Improving cluster-based missing value estimation of DNA microarray data.
Brás, Lígia P; Menezes, José C
2007-06-01
We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
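A compact sketch of the iterative-reuse idea behind IKNNimpute: fill missing entries roughly, then repeatedly re-estimate them with KNN computed on the previously completed matrix, so earlier estimates inform later rounds. Distances, weights, and the initial fill are simplifying assumptions, not the paper's exact implementation.

```python
import numpy as np

def iterative_knn_impute(X, k=10, n_iter=5):
    """IKNN-style imputation sketch: start from a rough column-mean fill, then
    repeatedly re-estimate the missing entries with KNN on the previously
    completed matrix, so already-estimated values feed the next round."""
    miss = np.isnan(X)
    filled = np.where(miss, np.nanmean(X, axis=0, keepdims=True), X)  # initial fill
    for _ in range(n_iter):
        new = filled.copy()
        for i in np.where(miss.any(axis=1))[0]:
            d = np.sqrt(((filled - filled[i]) ** 2).mean(axis=1))
            d[i] = np.inf                         # exclude the row itself
            nn = np.argsort(d)[:k]
            w = 1.0 / (d[nn] + 1e-12)
            est = (w[:, None] * filled[nn]).sum(axis=0) / w.sum()
            new[i, miss[i]] = est[miss[i]]
        filled = new
    return filled
```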
Empirical OPC rule inference for rapid RET application
NASA Astrophysics Data System (ADS)
Kulkarni, Anand P.
2006-10-01
A given technological node (45 nm, 65 nm) can be expected to process thousands of individual designs. Iterative methods applied at the node consume valuable days in determining proper placement of OPC features, and in manufacturing and testing mask correspondence to wafer patterns in a trial-and-error fashion for each design. Repeating this fabrication process for each individual design is time-consuming and expensive. We present a novel technique which sidesteps the requirement to iterate through the model-based OPC analysis and pattern verification cycle on subsequent designs at the same node. Our approach relies on the inference of rules from a correct pattern at the wafer surface as it relates to the OPC and pre-OPC pattern layout files. We begin with an offline phase where we obtain a "gold standard" design file that has been fab-tested at the node with a prepared, post-OPC layout file that corresponds to the intended on-wafer pattern. We then run an offline analysis to infer rules to be used in this method. During the analysis, our method implicitly identifies contextual OPC strategies for optimal placement of RET features on any design at that node. Using these strategies, we can apply OPC to subsequent designs at the same node with accuracy comparable to the original design file but significantly smaller expected runtimes. The technique promises to offer a rapid and accurate complement to existing RET application strategies.
A numerical scheme to solve unstable boundary value problems
NASA Technical Reports Server (NTRS)
Kalnay Derivas, E.
1975-01-01
A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Explicit forward integrations in time are then followed by explicit integrations backwards in time. The method converges under much more general conditions than schemes based on forward time integrations (false transient schemes). In particular, it can attain a steady-state solution of an elliptic system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. The simplicity of its use makes it attractive for solving large systems of nonlinear equations.
O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor
2012-08-01
Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manoli, Gabriele, E-mail: manoli@dmsa.unipd.it; Nicholas School of the Environment, Duke University, Durham, NC 27708; Rossi, Matteo
The modeling of unsaturated groundwater flow is affected by a high degree of uncertainty related to both measurement and model errors. Geophysical methods such as Electrical Resistivity Tomography (ERT) can provide useful indirect information on the hydrological processes occurring in the vadose zone. In this paper, we propose and test an iterated particle filter method to solve the coupled hydrogeophysical inverse problem. We focus on an infiltration test monitored by time-lapse ERT and modeled using Richards equation. The goal is to identify hydrological model parameters from ERT electrical potential measurements. Traditional uncoupled inversion relies on the solution of two sequential inverse problems, the first one applied to the ERT measurements, the second one to Richards equation. This approach does not ensure an accurate quantitative description of the physical state, typically violating mass balance. To avoid one of these two inversions and incorporate in the process more physical simulation constraints, we cast the problem within the framework of a SIR (Sequential Importance Resampling) data assimilation approach that uses a Richards equation solver to model the hydrological dynamics and a forward ERT simulator combined with Archie's law to serve as measurement model. ERT observations are then used to update the state of the system as well as to estimate the model parameters and their posterior distribution. The limitations of the traditional sequential Bayesian approach are investigated and an innovative iterative approach is proposed to estimate the model parameters with high accuracy. The numerical properties of the developed algorithm are verified on both homogeneous and heterogeneous synthetic test cases based on a real-world field experiment.
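A generic Sequential Importance Resampling step for parameter estimation is sketched below: propose particles of the unknown parameter, weight them by the likelihood of the new observation under a forward model, resample, and jitter. The forward model here is an arbitrary scalar placeholder; in the paper it chains a Richards equation solver with an ERT simulator and Archie's law.

```python
import numpy as np

def sir_step(particles, observation, forward_model, obs_sigma, rng):
    """One Sequential Importance Resampling update of a parameter ensemble."""
    predicted = np.array([forward_model(p) for p in particles])
    # importance weights: Gaussian likelihood of the new observation
    w = np.exp(-0.5 * ((observation - predicted) / obs_sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)   # resample
    # small jitter keeps the ensemble from collapsing (iterated-filtering flavour)
    return particles[idx] + 0.01 * rng.standard_normal(len(particles))

# Toy example: estimate a scalar parameter theta from noisy observations y = 2*theta.
rng = np.random.default_rng(0)
forward = lambda theta: 2.0 * theta
true_theta = 1.5
particles = rng.uniform(0.0, 5.0, size=500)
for _ in range(20):
    y_obs = forward(true_theta) + 0.1 * rng.standard_normal()
    particles = sir_step(particles, y_obs, forward, obs_sigma=0.1, rng=rng)
print(particles.mean())   # should be close to 1.5
```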
The SOFIA Mission Control System Software
NASA Astrophysics Data System (ADS)
Heiligman, G. M.; Brock, D. R.; Culp, S. D.; Decker, P. H.; Estrada, J. C.; Graybeal, J. B.; Nichols, D. M.; Paluzzi, P. R.; Sharer, P. J.; Pampell, R. J.; Papke, B. L.; Salovich, R. D.; Schlappe, S. B.; Spriestersbach, K. K.; Webb, G. L.
1999-05-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) will be delivered with a computerized mission control system (MCS). The MCS communicates with the aircraft's flight management system and coordinates the operations of the telescope assembly, mission-specific subsystems, and the science instruments. The software for the MCS must be reliable and flexible. It must be easily usable by many teams of observers with widely differing needs, and it must support non-intrusive access for education and public outreach. The technology must be appropriate for SOFIA's 20-year lifetime. The MCS software development process is an object-oriented, use case driven approach. The process is iterative: delivery will be phased over four "builds"; each build will be the result of many iterations; and each iteration will include analysis, design, implementation, and test activities. The team is geographically distributed, coordinating its work via Web pages, teleconferences, T.120 remote collaboration, and CVS (for Internet-enabled configuration management). The MCS software architectural design is derived in part from other observatories' experience. Some important features of the MCS are: * distributed computing over several UNIX and VxWorks computers * fast throughput of time-critical data * use of third-party components, such as the Adaptive Communications Environment (ACE) and the Common Object Request Broker Architecture (CORBA) * extensive configurability via stored, editable configuration files * use of several computer languages so developers have "the right tool for the job". C++, Java, scripting languages, Interactive Data Language (from Research Systems, Int'l.), XML, and HTML will all be used in the final deliverables. This paper reports on work in progress, with the final product scheduled for delivery in 2001. This work was performed for Universities Space Research Association for NASA under contract NAS2-97001.
Iterative outlier removal: A method for identifying outliers in laboratory recalibration studies
Parrinello, Christina M.; Grams, Morgan E.; Sang, Yingying; Couper, David; Wruck, Lisa M.; Li, Danni; Eckfeldt, John H.; Selvin, Elizabeth; Coresh, Josef
2016-01-01
Background Extreme values that arise for any reason, including through non-laboratory measurement procedure-related processes (inadequate mixing, evaporation, mislabeling), lead to outliers and inflate errors in recalibration studies. We present an approach termed iterative outlier removal (IOR) for identifying such outliers. Methods We previously identified substantial laboratory drift in uric acid measurements in the Atherosclerosis Risk in Communities (ARIC) Study over time. Serum uric acid was originally measured in 1990–92 on a Coulter DACOS instrument using an uricase-based measurement procedure. To recalibrate previous measured concentrations to a newer enzymatic colorimetric measurement procedure, uric acid was re-measured in 200 participants from stored plasma in 2011–13 on a Beckman Olympus 480 autoanalyzer. To conduct IOR, we excluded data points >3 standard deviations (SDs) from the mean difference. We continued this process using the resulting data until no outliers remained. Results IOR detected more outliers and yielded greater precision in simulation. The original mean difference (SD) in uric acid was 1.25 (0.62) mg/dL. After four iterations, 9 outliers were excluded, and the mean difference (SD) was 1.23 (0.45) mg/dL. Conducting only one round of outlier removal (standard approach) would have excluded 4 outliers (mean difference [SD] = 1.22 [0.51] mg/dL). Applying the recalibration (derived from Deming regression) from each approach to the original measurements, the prevalence of hyperuricemia (>7 mg/dL) was 28.5% before IOR and 8.5% after IOR. Conclusion IOR is a useful method for removal of extreme outliers irrelevant to recalibrating laboratory measurements, and identifies more extraneous outliers than the standard approach. PMID:27197675
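The IOR procedure itself is short enough to sketch directly: repeatedly drop values more than three standard deviations from the mean of the remaining data until none are left. The simulated data below are illustrative, loosely mimicking the reported difference distribution, and are not the ARIC measurements.

```python
import numpy as np

def iterative_outlier_removal(diffs, n_sd=3.0):
    """Iteratively drop values more than n_sd standard deviations from the
    mean of the remaining data until no such values are left."""
    kept = np.asarray(diffs, dtype=float)
    n_iter = 0
    while True:
        m, s = kept.mean(), kept.std(ddof=1)
        inliers = np.abs(kept - m) <= n_sd * s
        if inliers.all():
            return kept, n_iter
        kept = kept[inliers]
        n_iter += 1

rng = np.random.default_rng(0)
diffs = np.concatenate([rng.normal(1.25, 0.45, 191), rng.normal(5.0, 1.0, 9)])
clean, iters = iterative_outlier_removal(diffs)
print(iters, clean.mean(), clean.std(ddof=1))
```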
Mehl, Steffen W.; Hill, Mary C.
2013-01-01
This report documents the addition of ghost node Local Grid Refinement (LGR2) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference groundwater flow model. LGR2 provides the capability to simulate groundwater flow using multiple block-shaped higher-resolution local grids (a child model) within a coarser-grid parent model. LGR2 accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the grid-refinement interface boundary. LGR2 can be used in two-and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems. Traditional one-way coupled telescopic mesh refinement methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled ghost-node method of LGR2 provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions and require an increase in computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR2, evaluates accuracy and performance for two-and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH2) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR2.
Mehl, Steffen W.; Hill, Mary C.
2006-01-01
This report documents the addition of shared node Local Grid Refinement (LGR) to MODFLOW-2005, the U.S. Geological Survey modular, transient, three-dimensional, finite-difference ground-water flow model. LGR provides the capability to simulate ground-water flow using one block-shaped higher-resolution local grid (a child model) within a coarser-grid parent model. LGR accomplishes this by iteratively coupling two separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundary. LGR can be used in two-and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined ground-water systems. Traditional one-way coupled telescopic mesh refinement (TMR) methods can have large, often undetected, inconsistencies in heads and fluxes across the interface between two model grids. The iteratively coupled shared-node method of LGR provides a more rigorous coupling in which the solution accuracy is controlled by convergence criteria defined by the user. In realistic problems, this can result in substantially more accurate solutions and require an increase in computer processing time. The rigorous coupling enables sensitivity analysis, parameter estimation, and uncertainty analysis that reflects conditions in both model grids. This report describes the method used by LGR, evaluates LGR accuracy and performance for two- and three-dimensional test cases, provides input instructions, and lists selected input and output files for an example problem. It also presents the Boundary Flow and Head (BFH) Package, which allows the child and parent models to be simulated independently using the boundary conditions obtained through the iterative process of LGR.
Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming
2018-01-01
There are many problems in existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization in order to reduce the time phase difference of the image data and enhance the complementarity of information. The multi-scale image information is then decomposed using the L0 gradient minimization model, and the non-redundant information is processed by difference calculation and by expanding the non-redundant layers and the redundant layer with the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in the enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than existing super-resolution reconstruction methods in terms of visual and accuracy improvements. PMID:29414893
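The iterative back-projection (IBP) step mentioned above can be illustrated in one dimension: repeatedly downsample the current high-resolution estimate, compare with the low-resolution observation, and back-project the residual. This is the generic IBP scheme under a simple block-averaging degradation model, not the full AMDE-SR pipeline.

```python
import numpy as np

def ibp_1d(lr, scale=2, n_iter=50, step=0.5):
    """Generic iterative back-projection for 1-D super-resolution: refine a
    high-resolution estimate so that block-averaging it reproduces the
    low-resolution observation."""
    hr = np.zeros(len(lr) * scale)
    for _ in range(n_iter):
        simulated_lr = hr.reshape(-1, scale).mean(axis=1)   # simulate the LR signal
        error = lr - simulated_lr
        hr += step * np.repeat(error, scale)                # back-project the residual
    return hr

x = np.linspace(0, 2 * np.pi, 64)
lr = np.sin(x).reshape(-1, 2).mean(axis=1)                  # simulated LR observation
hr = ibp_1d(lr)
print(np.abs(hr.reshape(-1, 2).mean(axis=1) - lr).max())    # ≈ 0 after convergence
```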
Improving performances of suboptimal greedy iterative biclustering heuristics via localization.
Erten, Cesim; Sözdinler, Melih
2010-10-15
Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), that randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that the random extraction method based on localization, REAL, performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and results, is available at http://code.google.com/p/biclustering/. Contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Bittner-Rohrhofer, K.; Humer, K.; Weber, H. W.
The windings of the superconducting magnet coils for the ITER-FEAT fusion device are affected by high mechanical stresses at cryogenic temperatures and by a radiation environment, which impose certain constraints especially on the insulating materials. A glass fiber reinforced plastic (GFRP) laminate, which consists of Kapton/R-glass-fiber reinforcement tapes vacuum-impregnated in a DGEBA epoxy system, was used for the European toroidal field model coil turn insulation of ITER. In order to assess its mechanical properties under the actual operating conditions of ITER-FEAT, cryogenic (77 K) static tensile tests and tension-tension fatigue measurements were done before and after irradiation to a fast neutron fluence of 1×10²² m⁻² (E > 0.1 MeV), i.e. the ITER-FEAT design fluence level. We find that the mechanical strength and the fracture behavior of this GFRP are strongly influenced by the winding direction of the tape and by the radiation-induced delamination process. In addition, the composite swells by 3%, forming bubbles inside the laminate, and loses weight (1.4%) at the design fluence.
A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images
NASA Astrophysics Data System (ADS)
Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.
2015-07-01
Radiographic images, like any experimentally acquired images, are affected by spoiling agents which degrade their final quality. The degradation caused by agents of systematic character can be reduced by some kind of treatment, such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations needed to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces the cumbersome visual inspection by a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied into a Fortran program capable of plotting the 1st derivative of G as the processing progresses and of stopping it automatically when this derivative - within the data dispersion - reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
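A minimal Python sketch of the idea follows: Richardson-Lucy iterations that stop when the change of a histogram-based global difference G between successive iterates levels off. The exact definition of G and the dispersion test used in the Fortran program are not given in the abstract, so the histogram metric and tolerance below are assumptions.

import numpy as np
from scipy.signal import fftconvolve

def hist_difference(a, b, bins=256):
    # One possible global difference G between the grey-level histograms of
    # two images (the paper's exact metric is not specified here).
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0))
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0))
    return float(np.abs(ha - hb).sum())

def rl_deconvolve_auto(image, psf, max_iter=100, tol=1e-3):
    # Richardson-Lucy deconvolution that halts when the change of G between
    # successive iterates (a discrete 1st derivative) is close to zero.
    image = np.asarray(image, dtype=float)             # assumed normalized to [0, 1]
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    prev_est, prev_G = image, None
    for _ in range(max_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
        G = hist_difference(estimate, prev_est)
        if prev_G is not None and abs(G - prev_G) < tol * max(prev_G, 1.0):
            break
        prev_est, prev_G = estimate, G
    return estimate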
Differential Characteristics Based Iterative Multiuser Detection for Wireless Sensor Networks
Chen, Xiaoguang; Jiang, Xu; Wu, Zhilu; Zhuang, Shufeng
2017-01-01
High-throughput, low-latency and reliable communication has always been a hot topic for wireless sensor networks (WSNs) in various applications. Multiuser detection is widely used to suppress the adverse effect of multiple access interference in WSNs. In this paper, a novel multiuser detection method based on differential characteristics is proposed to suppress multiple access interference. The proposed iterative receive method consists of three stages. First, a differential characteristics function is presented based on the optimal multiuser detection decision function; then, on the basis of the differential characteristics, a preliminary threshold detection is utilized to find the potentially wrongly received bits; after that, an error-bit corrector is employed to correct the wrong bits. In order to further lower the bit error ratio (BER), the differential characteristics calculation, threshold detection and error-bit correction process described above are iteratively executed. Simulation results show that after only a few iterations the proposed multiuser detection method can achieve satisfactory BER performance. Moreover, the BER and near-far resistance performance are much better than those of traditional suboptimal multiuser detection methods. Furthermore, the proposed iterative multiuser detection method also has a large system capacity. PMID:28212328
NASA Astrophysics Data System (ADS)
Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.
2014-09-01
Several methods exist for integrating the motion in high-order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low-order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary-order time derivatives. We illustrate the idea by restricting attention to the perturbations due to the zonal harmonics J2 through J6. The recursively generated vector-valued time derivatives for the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path-approximation method for solving nonlinear ordinary differential equations. MCPI uses Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of MCPI are as follows: 1) large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration; 2) it can readily handle general gravity perturbations as well as non-conservative forces; 3) parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, MCPI may require a significant number of iterations and function evaluations compared to other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
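The following toy Python sketch shows plain Picard iteration on a grid and how a warm start shortens the iteration count; the Chebyshev basis, the zonal-harmonic force model and the continuation-series warm start of MCPI are omitted, so the exponential-decay example is purely illustrative.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def picard_iterate(f, t, x0, x_init=None, n_iter=20):
    # Picard iteration x_{k+1}(t) = x0 + integral_0^t f(s, x_k(s)) ds on a grid.
    # A good warm start x_init (e.g. from an approximate analytic solution)
    # reduces the number of iterations needed for convergence.
    x = np.full_like(t, x0) if x_init is None else x_init.copy()
    for _ in range(n_iter):
        x = x0 + cumulative_trapezoid(f(t, x), t, initial=0.0)
    return x

# Example: dx/dt = -x, x(0) = 1; warm start with the known exponential decay.
t = np.linspace(0.0, 2.0, 201)
cold = picard_iterate(lambda s, x: -x, t, 1.0, n_iter=20)
warm = picard_iterate(lambda s, x: -x, t, 1.0, x_init=np.exp(-t), n_iter=3)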
Self-consistent hybrid functionals for solids: a fully-automated implementation
NASA Astrophysics Data System (ADS)
Erba, A.
2017-08-01
A fully-automated algorithm for the determination of the system-specific optimal fraction of exact exchange in self-consistent hybrid functionals of density functional theory is illustrated, as implemented into the public Crystal program. The exchange fraction of this new class of functionals is self-consistently updated in proportion to the inverse of the dielectric response of the system within an iterative procedure (Skone et al 2014 Phys. Rev. B 89 195112). Each iteration of the present scheme, in turn, implies convergence of a self-consistent-field (SCF) and a coupled-perturbed-Hartree-Fock/Kohn-Sham (CPHF/KS) procedure. The present implementation, besides improving the user-friendliness of self-consistent hybrids, exploits the unperturbed and electric-field-perturbed density matrices from previous iterations as guesses for subsequent SCF and CPHF/KS iterations, which is documented to reduce the overall computational cost of the whole process by a factor of 2.
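A minimal sketch of the self-consistency loop on the exchange fraction is shown below; the dielectric response is mocked by a simple closure, whereas in the actual implementation each evaluation entails converged SCF and CPHF/KS calculations.

def self_consistent_alpha(dielectric_of_alpha, alpha0=0.25, tol=1e-4, max_iter=50):
    # Iterate alpha_{n+1} = 1 / eps_inf(alpha_n) until self-consistency.
    # `dielectric_of_alpha` stands in for a full SCF + CPHF/KS calculation.
    alpha = alpha0
    for _ in range(max_iter):
        alpha_new = 1.0 / dielectric_of_alpha(alpha)
        if abs(alpha_new - alpha) < tol:
            return alpha_new
        alpha = alpha_new
    return alpha

# Mock dielectric response: eps decreases mildly as more exact exchange is mixed in.
alpha_sc = self_consistent_alpha(lambda a: 5.0 - 2.0 * a)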
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
Purpose: To improve image quality and reduce imaging dose in CBCT for radiation-therapy applications, and to realize near real-time image reconstruction based on a fast-convergence iterative algorithm accelerated by multiple GPUs. Methods: An iterative image reconstruction that minimizes a weighted least-squares cost function with total-variation (TV) regularization was employed to mitigate projection-data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Results: Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively; with 15 iterations, the RMSE was reduced to 10⁻⁴ and 10⁻², respectively. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose while preserving good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. Conclusions: We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, the few-view study showed that the iterative algorithm has great potential for significantly reducing imaging dose. We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
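As a hedged sketch of the kind of objective described (weighted least squares plus TV regularization), the following Python fragment performs plain gradient descent with a smoothed TV penalty; the projector callables, step size and regularization weight are placeholders, and the actual work uses a different, GPU-optimized algorithm.

import numpy as np

def tv_grad(x, eps=1e-6):
    # Gradient of a smoothed (isotropic) total-variation penalty on a 2D image.
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def wls_tv_reconstruct(A, At, b, w, shape, lam=0.05, step=1e-3, n_iter=100):
    # Minimise 0.5 * || A x - b ||_W^2 + lam * TV(x) by plain gradient descent.
    # A / At are callables for the forward projector and its adjoint.
    x = np.zeros(shape)
    for _ in range(n_iter):
        grad = At(w * (A(x) - b)) + lam * tv_grad(x)
        x = np.clip(x - step * grad, 0.0, None)        # enforce non-negativity
    return x

# Toy usage: denoising (forward operator = identity) of a noisy 64x64 image.
rng = np.random.default_rng(0)
noisy = np.clip(rng.normal(0.5, 0.1, (64, 64)), 0, 1)
recon = wls_tv_reconstruct(lambda x: x, lambda r: r, noisy, np.ones_like(noisy),
                           noisy.shape, lam=0.1, step=0.5, n_iter=50)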
Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.
Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo
2017-03-03
Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens owing to the availability of the water-window region. In particular, projection-type microscopy has advantages in its wide viewing area, easy zooming function, and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, the correction was found not to be effective for all images, especially for images with low contrast. In order to improve the effectiveness of image correction by computer processing, we evaluated in this study the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as a substitute for chromosome images, one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two different parameters to evaluate noise effects according to each situation in which the iteration procedure was unsuccessful, and proposed an upper limit on the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that the new simulation and noise-evaluation method is useful for image processing where background noise cannot be ignored relative to the specimen signal.
Iterative algorithm for joint zero diagonalization with application in blind source separation.
Zhang, Wei-Tao; Lou, Shun-Tian
2011-07-01
A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.
NASA Astrophysics Data System (ADS)
Shedge, Sapana V.; Pal, Sourav; Köster, Andreas M.
2011-07-01
Recently, two non-iterative approaches have been proposed to calculate response properties within density functional theory (DFT). These approaches are auxiliary density perturbation theory (ADPT) and the non-iterative approach to the coupled-perturbed Kohn-Sham (NIA-CPKS) method. Though both methods are non-iterative, they use different techniques to obtain the perturbed Kohn-Sham matrix. In this Letter, for the first time, both of these independent methods have been used for the calculation of dipole-quadrupole polarizabilities. To validate these methods, three tetrahedral molecules, viz. P4, CH4 and adamantane (C10H16), have been used as examples. The comparison with MP2 and CCSD proves the reliability of the methodology.
Perceptron Genetic to Recognize Opening Strategy Ruy Lopez
NASA Astrophysics Data System (ADS)
Azmi, Zulfian; Mawengkang, Herman
2018-01-01
The application of the Perceptron method is not effective for coding on hardware-based systems because its learning is not real-time. With a genetic-algorithm approach to calculating and searching for the best weights (fitness value), the system performs learning in only one iteration. The results of this analysis were tested on recognition of the Ruy Lopez chess opening pattern. The analysis uses a Perceptron model with a genetic-algorithm approach, from the artificial neural network family, for the Ruy Lopez opening. The data are drawn from a chess opening base, with the white pawn positions after the eighth move of the opening as input. The perceptron takes many inputs and one output, and processes the weights and bias until the output equals the target. The data were trained and tested with Matlab software, and the system can recognize in real time whether a chess opening is the Ruy Lopez or not.
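A toy Python sketch of the idea (evolving perceptron weights with a genetic algorithm instead of iterative perceptron updates) is given below; the feature encoding of the chess positions, the population size and the fitness definition are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def fitness(w, X, y):
    # Fraction of training patterns classified correctly by the perceptron.
    preds = (X @ w[:-1] + w[-1] > 0).astype(int)
    return (preds == y).mean()

def ga_train_perceptron(X, y, pop=40, gens=30, mut=0.1):
    # Evolve perceptron weights (plus bias) with selection, crossover and mutation.
    n = X.shape[1] + 1
    population = rng.normal(size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(w, X, y) for w in population])
        parents = population[np.argsort(scores)[-pop // 2:]]        # keep the best half
        cut = rng.integers(1, n, size=pop // 2)
        children = np.array([np.concatenate([a[:c], b[c:]])         # one-point crossover
                             for a, b, c in zip(parents, parents[::-1], cut)])
        children += rng.normal(scale=mut, size=children.shape)      # mutation
        population = np.vstack([parents, children])
    return population[np.argmax([fitness(w, X, y) for w in population])]

# Hypothetical encoding: 8 binary features per position, label 1 = Ruy Lopez opening.
X = rng.integers(0, 2, size=(32, 8)).astype(float)
y = rng.integers(0, 2, size=32)
weights = ga_train_perceptron(X, y)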
Parallelized implicit propagators for the finite-difference Schrödinger equation
NASA Astrophysics Data System (ADS)
Parker, Jonathan; Taylor, K. T.
1995-08-01
We describe the application of block Gauss-Seidel and block Jacobi iterative methods to the design of implicit propagators for finite-difference models of the time-dependent Schrödinger equation. The block-wise iterative methods discussed here are mixed direct-iterative methods for solving simultaneous equations, in the sense that direct methods (e.g. LU decomposition) are used to invert certain block sub-matrices, and iterative methods are used to complete the solution. We describe parallel variants of the basic algorithm that are well suited to the medium- to coarse-grained parallelism of workstation clusters and MIMD supercomputers, and we show that, under a wide range of conditions, fine-grained parallelism of the computation can be achieved. Numerical tests are conducted on a typical one-electron atom Hamiltonian. The methods converge robustly to machine precision (15 significant figures), in some cases in as few as 6 or 7 iterations. The rate of convergence is nearly independent of the finite-difference grid-point separations.
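The mixed direct-iterative structure described above can be sketched in Python as follows: each diagonal block is LU-factorized once (the direct part) and the off-block coupling is handled by Jacobi sweeps (the iterative part). The block partitioning, test matrix and convergence test are illustrative, not the paper's propagator.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def block_jacobi(A, b, block_size, n_iter=50, tol=1e-12):
    # Solve A x = b with block Jacobi: LU-factorise each diagonal block once,
    # then iterate on the off-block coupling.
    n = A.shape[0]
    starts = range(0, n, block_size)
    factors = [lu_factor(A[s:s + block_size, s:s + block_size]) for s in starts]
    x = np.zeros(n)
    for _ in range(n_iter):
        x_new = np.empty_like(x)
        for f, s in zip(factors, starts):
            e = s + block_size
            rhs = b[s:e] - A[s:e, :] @ x + A[s:e, s:e] @ x[s:e]   # drop own-block term
            x_new[s:e] = lu_solve(f, rhs)
        if np.linalg.norm(x_new - x) < tol * np.linalg.norm(b):
            return x_new
        x = x_new
    return x

# Toy usage on a diagonally dominant nonsymmetric matrix.
n = 12
A = np.eye(n) * 4.0 + np.random.default_rng(1).normal(scale=0.3, size=(n, n))
b = np.ones(n)
x = block_jacobi(A, b, block_size=4)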
Physics and Engineering Design of the ITER Electron Cyclotron Emission Diagnostic
NASA Astrophysics Data System (ADS)
Rowan, W. L.; Austin, M. E.; Houshmandyar, S.; Phillips, P. E.; Beno, J. H.; Ouroua, A.; Weeks, D. A.; Hubbard, A. E.; Stillerman, J. A.; Feder, R. E.; Khodak, A.; Taylor, G.; Pandya, H. K.; Danani, S.; Kumar, R.
2015-11-01
Electron temperature (Te) measurements and consequent electron thermal transport inferences will be critical to the non-active phases of ITER operation and will take on added importance during the alpha-heating phase. Here, we describe our design for the diagnostic that will measure spatial and temporal profiles of Te using electron cyclotron emission (ECE). Other measurement capabilities include high-frequency instabilities (e.g. ELMs, NTMs, and TAEs). Since results from TFTR and JET suggest that Thomson scattering and ECE differ at high Te due to driven non-Maxwellian distributions, non-thermal features of the ITER electron distribution must be documented. The ITER environment presents other challenges including space limitations, vacuum requirements, and very high neutron fluence. Plasma control in ITER will require real-time Te. The diagnostic design that evolved from these sometimes-conflicting needs and requirements will be described component by component, with special emphasis on the integration to form a single effective diagnostic system. Supported by PPPL/US-DA via subcontract S013464-C to UT Austin.
Active Interaction Mapping as a tool to elucidate hierarchical functions of biological processes.
Farré, Jean-Claude; Kramer, Michael; Ideker, Trey; Subramani, Suresh
2017-07-03
Increasingly, various 'omics data are contributing significantly to our understanding of novel biological processes, but it has not been possible to iteratively elucidate hierarchical functions in complex phenomena. We describe a general systems biology approach called Active Interaction Mapping (AI-MAP), which elucidates the hierarchy of functions for any biological process. Existing and new 'omics data sets can be iteratively added to create and improve hierarchical models which enhance our understanding of particular biological processes. The best data types with which to further improve an AI-MAP model are predicted computationally. We applied this approach to our understanding of general and selective autophagy, which are conserved in most eukaryotes, setting the stage for broader application to other cellular processes of interest. In this particular application to autophagy-related processes, we uncovered and validated new autophagy and autophagy-related processes, expanded known autophagy processes with new components, integrated known non-autophagic processes with autophagy, and predicted other unexplored connections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefferkoetter, Joshua, E-mail: dnrjds@nus.edu.sg; Ouyang, Jinsong; Rakvongthai, Yothin
2014-06-15
Purpose: A study was designed to investigate the impact of time-of-flight (TOF) and point-spread-function (PSF) modeling on the detectability of myocardial defects. Methods: Clinical FDG-PET data were used to generate populations of defect-present and defect-absent images. Defects were incorporated at three contrast levels, and images were reconstructed by ordered-subset expectation maximization (OSEM) iterative methods including ordinary Poisson, alone and with PSF, TOF, and PSF+TOF. Channelized Hotelling observer signal-to-noise ratio (SNR) was the surrogate for human observer performance. Results: For three iterations, 12 subsets, and no postreconstruction smoothing, TOF improved overall defect-detection SNR by 8.6% as compared to its non-TOF counterpart for all the defect contrasts. Due to the slow convergence of PSF reconstruction, PSF yielded 4.4% less SNR than non-PSF. For reconstruction parameters (iteration number and postreconstruction smoothing kernel size) optimizing observer SNR, PSF showed a larger improvement for faint defects. The combination of TOF and PSF improved mean detection SNR as compared to the non-TOF and non-PSF counterparts by 3.0% and 3.2%, respectively. Conclusions: For the typical reconstruction protocol used in clinical practice, i.e., fewer than five iterations, TOF improved defect detectability. In contrast, PSF generally yielded less detectability. For a large number of iterations, TOF+PSF yields the best observer performance.
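For reference, a hedged sketch of the channelized Hotelling observer SNR computation is given below, assuming the channel outputs for defect-present and defect-absent images have already been formed (the channel model itself, e.g. rotationally symmetric frequency channels, is omitted).

import numpy as np

def cho_snr(channels_present, channels_absent):
    # Channelized Hotelling observer detection SNR from channel outputs
    # (rows = images, columns = channels) for defect-present and -absent sets.
    mean_diff = channels_present.mean(axis=0) - channels_absent.mean(axis=0)
    S = 0.5 * (np.cov(channels_present, rowvar=False) +
               np.cov(channels_absent, rowvar=False))      # pooled channel covariance
    w = np.linalg.solve(S, mean_diff)                       # Hotelling template
    t_p, t_a = channels_present @ w, channels_absent @ w
    return (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var(ddof=1) + t_a.var(ddof=1)))

# Synthetic check with 200 samples of 6 channel outputs per class.
rng = np.random.default_rng(0)
snr = cho_snr(rng.normal(1.0, 1.0, (200, 6)), rng.normal(0.0, 1.0, (200, 6)))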
Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM
NASA Astrophysics Data System (ADS)
Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng
2015-07-01
We introduce and investigate the feasibility of a novel iterative blind phase-noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation is performed through the combination of frequency-domain symbol-decision-aided estimation and a time-average approximation of the ICI phase noise. An additional initial decision process with a suitable threshold is introduced in order to suppress decision-error symbols. Our proposed ICI mitigation scheme is shown to be effective in removing the ICI for a simulated CO-OFDM system with 16-QAM modulation format. At slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at a relatively wide laser linewidth and high OSNR.
A highly parallel multigrid-like method for the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Tuminaro, Ray S.
1989-01-01
We consider a highly parallel multigrid-like method for the solution of the two-dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine-grid subproblems are processed concurrently with coarse-grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse-grid problem (as in standard multigrid) while the oscillatory component is used for a fine-grid subproblem. The primary advantage of the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse-grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straightforward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations and presenting numerical results. Finally, a performance evaluation is made based on execution-time models and convergence information obtained from numerical experiments.
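The residual-splitting step at the core of the filtering idea can be sketched in a few lines of Python for a 1D grid: the smooth part is what survives restriction followed by prolongation, and the remainder is the oscillatory component assigned to the fine-grid subproblem. The transfer operators below are generic full-weighting/linear-interpolation choices on a grid with an odd number of points, not necessarily those of the paper.

import numpy as np

def restrict(r):
    # Full-weighting restriction to a grid with half as many interior points.
    return 0.25 * r[:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(rc, n_fine):
    # Linear-interpolation prolongation back to the fine grid.
    xf = np.linspace(0.0, 1.0, n_fine)
    xc = xf[1:-1:2]
    return np.interp(xf, xc, rc)

def split_residual(r):
    # Smooth component = representable on the coarse grid; the remainder is
    # the oscillatory part handled by the fine-grid subproblem.
    smooth = prolong(restrict(r), r.size)
    return smooth, r - smooth

r = np.sin(2 * np.pi * np.linspace(0, 1, 65)) + 0.1 * np.random.default_rng(2).normal(size=65)
smooth, oscillatory = split_residual(r)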
Liu, Chen-Yi; Goertzen, Andrew L
2013-07-21
An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½-crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre-of-gravity method. The width of the Gaussian weighting function was optimized for the different scintillator array setups; the optimal width was found to depend on the amount of light spread. The algorithm required fewer than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow implementation on a front-end detector-processing field-programmable gate array for improved real-time event positioning and identification.
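A minimal Python sketch of such an iterative Gaussian-weighted centre-of-gravity estimate on a 4 × 4 array follows; the pitch, Gaussian width and convergence tolerance are illustrative values.

import numpy as np

def iterative_weighted_cog(signals, pitch=3.17, sigma=2.0, n_iter=20, tol=1e-4):
    # Estimate the interaction position from a 4x4 array of SiPM signals. Each
    # pass weights the channel signals by a Gaussian centred at the current
    # position estimate before recomputing the centre of gravity.
    n = signals.shape[0]
    coords = (np.arange(n) - (n - 1) / 2.0) * pitch          # channel centres (mm)
    xg, yg = np.meshgrid(coords, coords, indexing='ij')
    x = (signals * xg).sum() / signals.sum()                 # start from the plain COG
    y = (signals * yg).sum() / signals.sum()
    for _ in range(n_iter):
        w = np.exp(-((xg - x) ** 2 + (yg - y) ** 2) / (2.0 * sigma ** 2))
        ws = w * signals
        x_new = (ws * xg).sum() / ws.sum()
        y_new = (ws * yg).sum() / ws.sum()
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            break
        x, y = x_new, y_new
    return x, y

# Toy usage: a light flash near one corner of the 4x4 array.
sig = np.exp(-(((np.arange(4)[:, None] - 2.3) ** 2 + (np.arange(4)[None, :] - 1.1) ** 2) / 3.0))
print(iterative_weighted_cog(sig))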
Real-time MRI-guided hyperthermia treatment using a fast adaptive algorithm
NASA Astrophysics Data System (ADS)
Stakhursky, Vadim L.; Arabe, Omar; Cheng, Kung-Shan; MacFall, James; Maccarini, Paolo; Craciunescu, Oana; Dewhirst, Mark; Stauffer, Paul; Das, Shiva K.
2009-04-01
Magnetic resonance (MR) imaging is promising for monitoring and guiding hyperthermia treatments. The goal of this work is to investigate the stability of an algorithm for online MR thermal image guided steering and focusing of heat into the target volume. The control platform comprised a four-antenna mini-annular phased array (MAPA) applicator operating at 140 MHz (used for extremity sarcoma heating) and a GE Signa Excite 1.5 T MR system, both of which were driven by a control workstation. MR proton resonance frequency shift images acquired during heating were used to iteratively update a model of the heated object, starting with an initial finite element computed model estimate. At each iterative step, the current model was used to compute a focusing vector, which was then used to drive the next iteration, until convergence. Perturbation of the driving vector was used to prevent the process from stalling away from the desired focus. Experimental validation of the performance of the automatic treatment platform was conducted with two cylindrical phantom studies, one homogeneous and one muscle equivalent with tumor tissue (conductivity 50% higher) inserted, with initial focal spots being intentionally rotated 90° and 50° away from the desired focus, mimicking initial setup errors in applicator rotation. The integrated MR-HT treatment platform steered the focus of heating into the desired target volume in two quite different phantom tissue loads which model expected patient treatment configurations. For the homogeneous phantom test where the target was intentionally offset by 90° rotation of the applicator, convergence to the proper phase focus in the target occurred after 16 iterations of the algorithm. For the more realistic test with a muscle equivalent phantom with tumor inserted with 50° applicator displacement, only two iterations were necessary to steer the focus into the tumor target. Convergence improved the heating efficacy (the ratio of integral temperature in the tumor to integral temperature in normal tissue) by up to six-fold, compared to the first iteration. The integrated MR-HT treatment algorithm successfully steered the focus of heating into the desired target volume for both the simple homogeneous and the more challenging muscle equivalent phantom with tumor insert models of human extremity sarcomas after 16 and 2 iterations, correspondingly. The adaptive method for MR thermal image guided focal steering shows promise when tested in phantom experiments on a four-antenna phased array applicator.
NASA Astrophysics Data System (ADS)
Lavery, N.; Taylor, C.
1999-07-01
Multigrid and iterative methods are used to reduce the solution time of the matrix equations which arise from the finite element (FE) discretisation of the time-independent equations of motion of an incompressible fluid in turbulent motion. Incompressible flow is solved by using the method of reduced interpolation for the pressure to satisfy the Brezzi-Babuska condition. The k-l model is used to complete the turbulence closure problem. The non-symmetric iterative matrix methods examined are the methods of least-squares conjugate gradient (LSCG), biconjugate gradient (BCG), conjugate gradient squared (CGS), and biconjugate gradient stabilised (BCGSTAB). The multigrid algorithm applied is based on the FAS algorithm of Brandt and uses two and three levels of grids with a V-cycling schedule. These methods are all compared to the non-symmetric frontal solver.
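For illustration, a small Python example applying one of the named Krylov methods (BiCGSTAB, via SciPy) with an incomplete-LU preconditioner to a nonsymmetric sparse system is shown below; the test matrix is a generic convection-diffusion-like stand-in, not the FE system from the paper.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

n = 1000
# Nonsymmetric tridiagonal test matrix: diffusion plus a convection-like skew term.
A = diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Incomplete-LU preconditioner, commonly paired with BiCGSTAB for such systems.
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info = {info}")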
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows switching smoothly or abruptly between different polynomial regression models. The model parameters are estimated by the maximum likelihood method via a dedicated Expectation-Maximization (EM) algorithm. The M-step of the EM algorithm uses a multi-class Iteratively Reweighted Least Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
Wang, G.L.; Chew, W.C.; Cui, T.J.; Aydiner, A.A.; Wright, D.L.; Smith, D.V.
2004-01-01
Three-dimensional (3D) subsurface imaging by inversion of data obtained from the very early time electromagnetic system (VETEM) was discussed. The study was carried out by using the distorted Born iterative method to match the internal nonlinear property of the 3D inversion problem. The forward solver was based on the total-current formulation bi-conjugate gradient-fast Fourier transform (BCCG-FFT) method. It was found that the selection of the regularization parameter follows a heuristic rule, as used in the Levenberg-Marquardt algorithm, so that the iteration is stable.
NASA Astrophysics Data System (ADS)
Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew
2010-06-01
A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of the fast multipole method in which exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools, are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage.
Program summary
Program title: AFMPB: Adaptive fast multipole Poisson-Boltzmann solver
Catalogue identifier: AEGB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL 2.0
No. of lines in distributed program, including test data, etc.: 453 649
No. of bytes in distributed program, including test data, etc.: 8 764 754
Distribution format: tar.gz
Programming language: Fortran
Computer: Any
Operating system: Any
RAM: Depends on the size of the discretized biomolecular system
Classification: 3
External routines: Pre- and post-processing tools are required for generating the boundary elements and for visualization. Users can use MSMS (http://www.scripps.edu/~sanner/html/msms_home.html) for pre-processing, and VMD (http://www.ks.uiuc.edu/Research/vmd/) for visualization.
Sub-programs included: An iterative Krylov subspace solvers package from SPARSKIT by Yousef Saad (http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html), and the fast multipole method subroutines from FMMSuite (http://www.fastmultipole.org/).
Nature of problem: Numerical solution of the linearized Poisson-Boltzmann equation that describes electrostatic interactions of molecular systems in ionic solutions.
Solution method: A novel node-patch scheme is used to discretize the well-conditioned boundary integral equation formulation of the linearized Poisson-Boltzmann equation. Various Krylov subspace solvers can be subsequently applied to solve the resulting linear system, with a bounded number of iterations independent of the number of discretized unknowns. The matrix-vector multiplication at each iteration is accelerated by adaptive new versions of fast multipole methods. The AFMPB solver requires other stand-alone pre-processing tools for boundary mesh generation and post-processing tools for data analysis and visualization, and can be conveniently coupled with different time-stepping methods for dynamics simulation.
Restrictions: Only three- or six-significant-digit options are provided in this version.
Unusual features: Most of the code is in Fortran77 style. Memory allocation functions from Fortran90 and above are used in a few subroutines.
Additional comments: The current version of the code is designed and written for single core/processor desktop machines. Check http://lsec.cc.ac.cn/~lubz/afmpb.html and http://mccammon.ucsd.edu/ for updates and changes.
Running time: The running time varies with the number of discretized elements (N) in the system and their distributions. In most cases, it scales linearly as a function of N.