Implementation of DSC model and application for analysis of field pile tests under cyclic loading
NASA Astrophysics Data System (ADS)
Shao, Changming; Desai, Chandra S.
2000-05-01
The disturbed state concept (DSC) model, together with a new and simplified procedure for unloading and reloading behavior, is implemented in a nonlinear finite element procedure for dynamic analysis of the coupled response of saturated porous materials. The DSC model is used to characterize the cyclic behavior of saturated clays and clay-steel interfaces. In the DSC, the relative intact (RI) behavior is characterized by using the hierarchical single surface (HISS) plasticity model, and the fully adjusted (FA) behavior is modeled by using the critical state concept. The DSC model is validated with respect to laboratory triaxial tests for clay and shear tests for clay-steel interfaces. The computer procedure is used to predict the field behavior of an instrumented pile subjected to cyclic loading. The predictions correlate very well with the field data. They also yield improved results compared to those from a HISS model with anisotropic hardening, partly because the DSC model allows for degradation or softening and interface response.
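As context for the abstract above, the core DSC decomposition can be sketched in a few lines: the observed response is a disturbance-weighted blend of the relative intact (RI) and fully adjusted (FA) responses. The exponential disturbance function and all parameter values below are illustrative assumptions, not the calibrated model from the paper.

```python
import math

def disturbance(xi, Du=1.0, A=1.0, Z=1.0):
    """Disturbance growing with the accumulated (deviatoric plastic)
    strain trajectory xi: D = Du * (1 - exp(-A * xi**Z)).
    Du, A, Z are material parameters (illustrative defaults here)."""
    return Du * (1.0 - math.exp(-A * xi ** Z))

def observed_stress(sigma_ri, sigma_fa, xi, **params):
    """DSC blend: sigma = (1 - D) * sigma_ri + D * sigma_fa, so the
    response migrates from the RI to the FA state as disturbance grows."""
    D = disturbance(xi, **params)
    return (1.0 - D) * sigma_ri + D * sigma_fa
```

With xi = 0 the response is purely RI; as xi grows it degrades toward the FA (critical-state) response, which is how this class of model captures cyclic degradation and softening.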
Verevkin, Sergey P; Zaitsau, Dzmitry H; Emel'yanenko, Vladimir N; Schick, Christoph; Jayaraman, Saivenkataraman; Maginn, Edward J
2012-07-14
We used DSC for determination of the reaction enthalpy of the synthesis of the ionic liquid [C(4)mim][Cl]. A combination of DSC and quantum chemical calculations presents a new, indirect way to study thermodynamics of ionic liquids. The new procedure was validated with two direct experimental measurements and MD simulations.
3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, Yuru, E-mail: peiyuru@cis.pku.edu.cn; Ai, Xin
Purpose: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. Methods: The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to get an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization by using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random walks based segmentation. The soft constraints on voxel labeling are defined by the shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints.
Results: The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. Conclusions: The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.
3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images.
Pei, Yuru; Ai, Xingsheng; Zha, Hongbin; Xu, Tianmin; Ma, Gengyu
2016-09-01
Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. The authors propose a 3D exemplar-based random walk method of tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to get an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization by using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random walks based segmentation. The soft constraints on voxel labeling are defined by the shape-based foreground dentine probability acquired by the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images.
Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.
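For reference, the two overlap metrics reported above are simple set ratios; a minimal sketch over voxel index sets (illustrative, not the authors' evaluation code):

```python
def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard similarity coefficient: |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b)

# Two toy voxel-label sets with 50% Dice overlap.
seg = {1, 2, 3, 4}
ref = {3, 4, 5, 6}
```

The two metrics are interconvertible via J = D / (2 - D), which is why papers often report both alongside a surface-distance measure such as MSD.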
Iterating between lessons on concepts and procedures can improve mathematics knowledge.
Rittle-Johnson, Bethany; Koedinger, Kenneth
2009-09-01
Knowledge of concepts and procedures seems to develop in an iterative fashion, with increases in one type of knowledge leading to increases in the other type of knowledge. This suggests that iterating between lessons on concepts and procedures may improve learning. The purpose of the current study was to evaluate the instructional benefits of an iterative lesson sequence compared to a concepts-before-procedures sequence for students learning decimal place-value concepts and arithmetic procedures. In two classroom experiments, sixth-grade students from two schools participated (N=77 and 26). Students completed six decimal lessons on an intelligent tutoring system. In the iterative condition, lessons cycled between concept and procedure lessons. In the concepts-first condition, all concept lessons were presented before introducing the procedure lessons. In both experiments, students in the iterative condition gained more knowledge of arithmetic procedures, including the ability to transfer the procedures to problems with novel features. Knowledge of concepts was fairly comparable across conditions. Finally, pre-test knowledge of one type predicted gains in knowledge of the other type across experiments. An iterative sequencing of lessons seems to facilitate learning and transfer, particularly of mathematical procedures. The findings support an iterative perspective for the development of knowledge of concepts and procedures.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-19
...: Section 80.103, Digital Selective Calling (DSC) Operating Procedures--Maritime Mobile Identity (MMSI...: Individuals or households; business or other for-profit entities and Federal Government. Number of... Marine VHF radios with Digital Selective Calling (DSC) capability in this collection. The licensee...
Toledo-Núñez, Citlali; Vera-Robles, L Iraís; Arroyo-Maya, Izlia J; Hernández-Arana, Andrés
2016-09-15
A frequent outcome in differential scanning calorimetry (DSC) experiments carried out with large proteins is the irreversibility of the observed endothermic effects. In these cases, DSC profiles are analyzed according to methods developed for temperature-induced denaturation transitions occurring under kinetic control. In the one-step irreversible model (native → denatured), the characteristics of the observed single-peaked endotherm depend on the denaturation enthalpy and the temperature dependence of the reaction rate constant, k. Several procedures have been devised to obtain the parameters that determine the variation of k with temperature. Here, we have elaborated on one of these procedures in order to analyze more complex DSC profiles. Synthetic data for a heat capacity curve were generated according to a model with two sequential reactions; the temperature dependence of each of the two rate constants involved was determined, according to the Eyring equation, by two fixed parameters. It was then shown that our deconvolution procedure, using heat capacity data alone, permits extraction of the parameter values that were initially used. Finally, experimental DSC traces showing two and three maxima were analyzed and reproduced with relative success according to two- and four-step sequential models. Copyright © 2016 Elsevier Inc. All rights reserved.
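The one-step irreversible model described above can be simulated directly: integrate the native fraction x during a temperature scan and read the excess heat capacity off the reaction rate. An Arrhenius rate law stands in for the Eyring form, and every parameter value below is an illustrative assumption, not a fit from the paper.

```python
import math

R = 8.314          # gas constant, J/(mol K)
dH = 300e3         # denaturation enthalpy, J/mol (assumed)
Ea = 250e3         # activation energy, J/mol (assumed)
A = 1e38           # pre-exponential factor, 1/s (assumed)
beta = 1.0 / 60.0  # scan rate, K/s (1 K/min)

def simulate(T0=300.0, T1=360.0, dT=0.01):
    """Integrate dx/dT = -(k(T)/beta) * x for the native fraction x and
    return (T, Cp_excess) with Cp_excess = dH * k(T) * x / beta."""
    T, x = T0, 1.0
    Ts, Cps = [], []
    while T < T1:
        k = A * math.exp(-Ea / (R * T))
        Ts.append(T)
        Cps.append(dH * k * x / beta)
        x -= (k / beta) * x * dT   # explicit Euler step in temperature
        x = max(x, 0.0)
        T += dT
    return Ts, Cps
```

The area under the simulated endotherm recovers dH, and the peak position shifts with scan rate, which is the diagnostic signature of kinetic (rather than equilibrium) control.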
47 CFR 80.103 - Digital selective calling (DSC) operating procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... DSC “Acknowledgment of distress calls” and “Distress relays.” (See subpart W of this part.) (d) Group calls to vessels under the common control of a single entity are authorized. A group call identity may... (ITU), Place des Nations, CH-1211 Geneva 20, Switzerland. [68 FR 46961, Aug. 7, 2003, as amended at 73...
47 CFR 80.103 - Digital selective calling (DSC) operating procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... DSC “Acknowledgment of distress calls” and “Distress relays.” (See subpart W of this part.) (d) Group calls to vessels under the common control of a single entity are authorized. A group call identity may... (ITU), Place des Nations, CH-1211 Geneva 20, Switzerland. [68 FR 46961, Aug. 7, 2003, as amended at 73...
Nelson-McMillan, Kristen; Hornik, Christoph P; He, Xia; Vricella, Luca A; Jacobs, Jeffrey P; Hill, Kevin D; Pasquali, Sara K; Alejo, Diane E; Cameron, Duke E; Jacobs, Marshall L
2016-11-01
Delayed sternal closure (DSC) is commonly used to optimize hemodynamic stability after neonatal and infant heart surgery. We hypothesized that duration of sternum left open (SLO) was associated with rate of infection complications, and that location of sternal closure may mitigate infection risk. Infants (age ≤365 days) undergoing index operations with cardiopulmonary bypass and DSC at STS Congenital Heart Surgery Database centers (from 2007 to 2013) with adequate data quality were included. Primary outcome was occurrence of infection complication, defined as one or more of the following: endocarditis, pneumonia, wound infection, wound dehiscence, sepsis, or mediastinitis. Multivariable regression models were fit to assess association of infection complication with: duration of SLO (days), location of DSC procedure (operating room versus elsewhere), and patient and procedural factors. Of 6,127 index operations with SLO at 100 centers, median age and weight were 8 days (IQR, 5-24) and 3.3 kg (IQR, 2.9-3.8); 66% of operations were STAT morbidity category 4 or 5. At least one infection complication occurred in 18.7%, compared with 6.6% among potentially eligible neonates and infants without SLO. Duration of SLO (median, 3 days; IQR, 2-5) was associated with an increased rate of infection complications (p < 0.001). Location of DSC procedure was operating room (16%), intensive care unit (67%), or other (17%). Location of DSC was not associated with rate of infection complications (p = 0.45). Rate of occurrence of infectious complications is high among infants with sternum left open following cardiac surgery. Longer duration of SLO is associated with increased infection complications. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Developing Conceptual Understanding and Procedural Skill in Mathematics: An Iterative Process.
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Siegler, Robert S.; Alibali, Martha Wagner
2001-01-01
Proposes that conceptual and procedural knowledge develop in an iterative fashion and improved problem representation is one mechanism underlying the relations between them. Two experiments were conducted with 5th and 6th grade students learning about decimal fractions. Results indicate conceptual and procedural knowledge do develop, iteratively,…
Iterating between Lessons on Concepts and Procedures Can Improve Mathematics Knowledge
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Koedinger, Kenneth
2009-01-01
Background: Knowledge of concepts and procedures seems to develop in an iterative fashion, with increases in one type of knowledge leading to increases in the other type of knowledge. This suggests that iterating between lessons on concepts and procedures may improve learning. Aims: The purpose of the current study was to evaluate the…
Marikkar, Jalaldeen Mohammed Nazrim; Rana, Sohel
2014-01-01
A study was conducted to detect and quantify lard stearin (LS) content in canola oil (CaO) using differential scanning calorimetry (DSC). Authentic samples of CaO were obtained from a reliable supplier, and the adulterant LS was obtained through a fractional crystallization procedure as reported previously. Pure CaO samples spiked with LS at levels ranging from 5 to 15% (w/w) were analyzed using DSC to obtain their cooling and heating profiles. The results showed that samples contaminated with LS at the 5% (w/w) level can be detected using characteristic contaminant peaks appearing in the higher temperature regions (0 to 70°C) of the cooling and heating curves. Pearson correlation analysis of LS content against individual DSC parameters of the adulterant peak, namely peak temperature, peak area, and peak onset temperature, indicated that there were strong correlations between these parameters and the LS content of the CaO admixtures. When these three parameters were engaged as variables in the execution of the stepwise regression procedure, predictive models for determination of LS content in CaO were obtained. The predictive models obtained with a single DSC parameter had a relatively lower coefficient of determination (R(2) value) and a higher standard error than the models obtained using two DSC parameters in combination. This study concluded that the predictive models obtained with the peak area and peak onset temperature of the adulteration peak would be more accurate for prediction of LS content in CaO, based on the highest coefficient of determination (R(2) value) and smallest standard error.
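The single-parameter calibration described above amounts to an ordinary least-squares fit of LS content against one DSC peak parameter. The data points below are synthetic placeholders chosen to mimic the 5-15% (w/w) spiking range, not values from the study:

```python
# Hypothetical calibration data: LS content (% w/w) vs. adulterant-peak
# area (J/g). Both columns are invented for illustration.
ls_content = [5.0, 7.5, 10.0, 12.5, 15.0]
peak_area  = [1.1, 1.7, 2.2, 2.9, 3.4]

# Ordinary least-squares fit ls = slope * area + intercept.
n = len(ls_content)
mx = sum(peak_area) / n
my = sum(ls_content) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(peak_area, ls_content))
sxx = sum((x - mx) ** 2 for x in peak_area)
slope = sxy / sxx
intercept = my - slope * mx

# Coefficient of determination R^2 for the fitted model.
ss_res = sum((y - (slope * x + intercept)) ** 2
             for x, y in zip(peak_area, ls_content))
ss_tot = sum((y - my) ** 2 for y in ls_content)
r2 = 1.0 - ss_res / ss_tot
```

Adding a second regressor (e.g. peak onset temperature) to this fit is exactly what raised R(2) and lowered the standard error in the two-parameter models of the study.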
NASA Astrophysics Data System (ADS)
Zeng, Lu-Chuan; Yao, Jen-Chih
2006-09-01
Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.
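The flavor of iteration studied above can be illustrated on the real line: a Mann-type scheme with a summable error term still converges to the fixed point of a contraction. The map, the step sizes, and the error sequence below are toy stand-ins for the Banach-space setting of the paper.

```python
# Mann-type iteration with errors: x_{n+1} = (1 - a_n) x_n + a_n T(x_n) + e_n.
def T(x):
    return 0.5 * x + 1.0        # contraction on R with fixed point x* = 2

x = 10.0
for n in range(1, 200):
    a_n = 0.5                   # constant step size in (0, 1)
    e_n = 1.0 / n ** 2          # summable error term
    x = (1.0 - a_n) * x + a_n * T(x) + e_n
```

Because the errors are summable, the iterates approach the common fixed point despite the perturbation at every step; stability results of the kind cited above make that robustness precise.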
Sensitivity calculations for iteratively solved problems
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1985-01-01
The calculation of sensitivity derivatives of solutions of iteratively solved systems of algebraic equations is investigated. A modified finite difference procedure is presented which improves the accuracy of the calculated derivatives. The procedure is demonstrated for a simple algebraic example as well as an element-by-element preconditioned conjugate gradient iterative solution technique applied to truss examples.
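The difficulty the paper addresses can be reproduced on a one-line example: when the "analysis" is itself an iterative solver, finite-difference sensitivities inherit its convergence error, and warm-starting the perturbed solve from the converged base solution is one simple accuracy device. The example equation and tolerances are illustrative, not the paper's truss problems.

```python
def solve(p, x0, tol=1e-12):
    """Solve x**2 = p by Newton iteration, stopping on the residual."""
    x = x0
    while abs(x * x - p) > tol:
        x = 0.5 * (x + p / x)    # Newton step for x**2 - p = 0
    return x

p, dp = 2.0, 1e-6
x_base = solve(p, 1.0)
x_pert = solve(p + dp, x_base)   # warm start from the converged base solution
dxdp_fd = (x_pert - x_base) / dp
dxdp_exact = 1.0 / (2.0 * x_base)  # analytic derivative of sqrt(p)
```

If the two solves were stopped at a loose tolerance from independent starting points, the incomplete-convergence error (divided by the small step dp) could swamp the derivative; tightening the tolerance and warm-starting keeps the finite difference accurate.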
Rodríguez Chialanza, Mauricio; Sierra, Ignacio; Pérez Parada, Andrés; Fornaro, Laura
2018-06-01
There are several techniques used to analyze microplastics. These are often based on a combination of visual and spectroscopic techniques. Here we introduce an alternative workflow for identification and mass quantitation through a combination of optical microscopy with image analysis (IA) and differential scanning calorimetry (DSC). We studied four synthetic polymers of environmental concern: low- and high-density polyethylene (LDPE and HDPE, respectively), polypropylene (PP), and polyethylene terephthalate (PET). Selected experiments were conducted to investigate (i) particle characterization and counting procedures based on image analysis with open-source software, (ii) chemical identification of microplastics based on DSC signal processing, (iii) dependence of the DSC signal on particle size, and (iv) quantitation of microplastic mass based on the DSC signal. We describe the potential and limitations of these techniques to increase reliability for microplastic analysis. Particle size was shown to have a particular influence on the qualitative and quantitative performance of the DSC signals. Both identification (based on characteristic onset temperature) and mass quantitation (based on heat flow) were affected by particle size. As a result, a proper sample treatment, which includes sieving of suspended particles, is particularly required for this analytical approach.
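A sketch of the identification-plus-quantitation step described above: match the measured melting onset to a polymer, then convert the peak enthalpy to mass via a specific melting enthalpy. The reference onsets and enthalpies below are rough assumed calibration values (both are crystallinity-dependent in practice), not the paper's calibrations.

```python
# Assumed per-polymer reference data: melting onset (deg C) and a
# calibrated specific melting enthalpy (J/g). Illustrative values only.
POLYMERS = {
    "LDPE": {"onset": 105.0, "dh_fus": 90.0},
    "HDPE": {"onset": 127.0, "dh_fus": 180.0},
    "PP":   {"onset": 160.0, "dh_fus": 90.0},
    "PET":  {"onset": 247.0, "dh_fus": 40.0},
}

def identify(onset_measured, window=10.0):
    """Assign the polymer with the closest reference onset, within a window."""
    name, ref = min(POLYMERS.items(),
                    key=lambda kv: abs(kv[1]["onset"] - onset_measured))
    if abs(ref["onset"] - onset_measured) > window:
        return None
    return name

def quantify(onset_measured, peak_heat_mJ):
    """Estimate particle mass in mg from the peak enthalpy in mJ
    (mJ divided by J/g gives 1e-3 g, i.e. mg)."""
    name = identify(onset_measured)
    if name is None:
        return None, None
    return name, peak_heat_mJ / POLYMERS[name]["dh_fus"]
```

The window and the enthalpy calibration are where particle-size effects bite: small particles shift and broaden the onset and reduce the apparent heat flow, which is why the paper recommends sieving before analysis.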
Rongeat, Carine; Llamas-Jansa, Isabel; Doppiu, Stefania; Deledda, Stefano; Borgschulte, Andreas; Schultz, Ludwig; Gutfleisch, Oliver
2007-11-22
Among the thermodynamic properties of novel materials for solid-state hydrogen storage, the heat of formation/decomposition of hydrides is the most important parameter to evaluate the stability of the compound and its temperature and pressure of operation. In this work, the desorption and absorption behaviors of three different classes of hydrides are investigated under different hydrogen pressures using high-pressure differential scanning calorimetry (HP-DSC). The HP-DSC technique is used to estimate the equilibrium pressures as a function of temperature, from which the heat of formation is derived. The relevance of this procedure is demonstrated for (i) magnesium-based compounds (Ni-doped MgH2), (ii) Mg-Co-based ternary hydrides (Mg-CoHx) and (iii) Alanate complex hydrides (Ti-doped NaAlH4). From these results, it can be concluded that HP-DSC is a powerful tool to obtain a good approximation of the thermodynamic properties of hydride compounds by a simple and fast study of desorption and absorption properties under different pressures.
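The HP-DSC evaluation described above reduces to a van 't Hoff fit: each applied hydrogen pressure gives an equilibrium temperature, and the slope of ln(p/p0) versus 1/T yields the heat of formation. The (T, p) pairs below are generated synthetically from assumed MgH2-like values, not measured data:

```python
import math

R, p0 = 8.314, 1.0e5                  # gas constant J/(mol K), reference pressure Pa
dH_true, dS_true = 75e3, 135.0        # assumed MgH2-like enthalpy/entropy values

def p_eq(T):
    """Equilibrium pressure from the van 't Hoff relation."""
    return p0 * math.exp(dS_true / R - dH_true / (R * T))

temps = [560.0, 590.0, 620.0, 650.0]  # equilibrium temperatures (K)
data = [(1.0 / T, math.log(p_eq(T) / p0)) for T in temps]

# Least-squares slope of ln(p/p0) vs 1/T; the slope equals -dH/R.
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
slope = (sum((x - mx) * (y - my) for x, y in data)
         / sum((x - mx) ** 2 for x, _ in data))
dH_fit = -slope * R
```

In the HP-DSC workflow the (T, p) pairs come from the shift of the desorption/absorption peaks with applied pressure, so a handful of scans at different pressures suffices for a first estimate of the heat of formation.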
Global Asymptotic Behavior of Iterative Implicit Schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1994-01-01
The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all of the four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard non-LMM explicit methods. The simple iteration procedure exhibits behavior which is similar to standard non-LMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.
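The distinction between the iterative procedures compared above can be seen on a single backward-Euler step: simple iteration converges only when |h f'| < 1, while Newton iteration does not share that restriction. The scalar test problem below is illustrative.

```python
# Solving one backward-Euler step y1 = y0 + h * f(y1) for y' = f(y)
# by simple (fixed-point) iteration and by Newton iteration.
def f(y):
    return -2.0 * y            # linear test problem y' = -2y

def fprime(y):
    return -2.0

def simple_iteration(y0, h, sweeps=50):
    y = y0
    for _ in range(sweeps):
        y = y0 + h * f(y)      # converges only if |h * f'| < 1
    return y

def newton_iteration(y0, h, sweeps=5):
    y = y0
    for _ in range(sweeps):
        g = y - y0 - h * f(y)  # residual of the implicit equation
        y -= g / (1.0 - h * fprime(y))
    return y
```

For h = 0.1 both procedures reach the implicit solution y0/(1 + 2h); for h = 1.0 simple iteration diverges on this problem while Newton still returns the correct value, illustrating how the choice of inner iteration changes the global dynamics of the same scheme.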
NASA Astrophysics Data System (ADS)
Desai, C. S.; Sane, S. M.; Jenson, J. W.; Contractor, D. N.; Carlson, A. E.; Clark, P. U.
2006-12-01
This presentation, which is complementary to Part I (Jenson et al.), describes the application of the Disturbed State Concept (DSC) constitutive model to define the behavior of the deforming sediment (till) underlying glaciers and ice sheets. The DSC includes elastic, plastic, and creep strains, and microstructural changes leading to degradation, failure, and sometimes strengthening or healing. Here, we describe comprehensive laboratory experiments conducted on samples of two regionally significant tills deposited by the Laurentide Ice Sheet: the Tiskilwa Till and Sky Pilot Till. The tests are used to determine the parameters to calibrate the DSC model, which is validated with respect to the laboratory tests by comparing the predictions with test data used to find the parameters, and also comparing them with independent tests not used to find the parameters. Discussion of the results also includes comparison of the DSC model with the classical Mohr-Coulomb model, which has been commonly used for glacial tills. A numerical procedure based on finite element implementation of the DSC is used to simulate an idealized field problem, and its predictions are discussed. Based on these analyses, the unified DSC model is proposed to provide an improved model for subglacial tills compared to other models used commonly, and thus to provide the potential for improved predictions of ice sheet movements.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
A general iterative procedure is given for determining the consistent maximum likelihood estimates of the parameters of mixtures of normal distributions. In addition, a local maximum of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jamsranjav, Erdenetogtokh, E-mail: ja.erdenetogtokh@gmail.com; Shiina, Tatsuo, E-mail: shiina@faculity.chiba-u.jp; Kuge, Kenichi
2016-01-28
Soft X-ray microscopy is well recognized as a powerful tool for high-resolution imaging of hydrated biological specimens. The projection type offers an easy zooming function, a simple optical layout, and other advantages. However, the image is blurred by the diffraction of X-rays, which degrades the spatial resolution. In this study, the blurred images have been corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. This method was confirmed by earlier studies to be effective. Nevertheless, it was not sufficient for some images showing too low contrast, especially at high magnification. In the present study, we tried a contrast enhancement method to make the diffraction fringes clearer prior to the iteration procedure. The method was effective in improving the images that could not be corrected by the iteration procedure alone.
International Round-Robin Testing of Bulk Thermoelectrics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hsin; Porter, Wallace D; Bottner, Harold
2011-11-01
Two international round-robin studies were conducted on transport-property measurements of bulk thermoelectric materials. The studies revealed current measurement problems. In order to obtain ZT of a material, four separate transport measurements must be taken. The round-robin study showed that, among the four properties, the Seebeck coefficient is the one that can be measured consistently. Electrical resistivity has ±4-9% scatter. Thermal diffusivity has similar ±5-10% scatter. The reliability of the above three properties can be improved by standardizing test procedures and enforcing system calibrations. The worst problem was found in specific heat measurements using DSC. The probability of making a measurement error is great because three separate runs must be taken to determine Cp, and baseline shift is always an issue for commercial DSC. It is suggested that the Dulong-Petit limit always be used as a guideline for Cp. Procedures have been developed to eliminate operator and system errors. The IEA-AMT annex is developing standard procedures for transport-property testing.
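The three-run Cp procedure mentioned above is the standard ratio method (baseline, sapphire reference, sample runs), and the Dulong-Petit guideline is a one-line estimate. The heat-flow numbers and example compound below are illustrative:

```python
def cp_sample(hf_sample, hf_ref, hf_baseline, m_sample, m_ref, cp_ref):
    """Ratio method: Cp = (HF_sample - HF_base) / (HF_ref - HF_base)
    * (m_ref / m_sample) * Cp_ref. A baseline shift corrupts both the
    numerator and denominator, which is the error mode noted above."""
    return ((hf_sample - hf_baseline) / (hf_ref - hf_baseline)
            * (m_ref / m_sample) * cp_ref)

def dulong_petit_cp(molar_mass, atoms_per_formula):
    """High-temperature guideline Cp ≈ 3 * n * R / M, in J/(g K)."""
    return 3.0 * atoms_per_formula * 8.314 / molar_mass
```

For Bi2Te3 (M ≈ 800.8 g/mol, 5 atoms per formula unit) the guideline gives roughly 0.16 J/(g K), a useful sanity bound when a DSC-derived Cp drifts because of baseline shift.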
NASA Technical Reports Server (NTRS)
Gossard, Myron L
1952-01-01
An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
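The building block of Wielandt-type iterative transformation methods is the basic power iteration for characteristic values; a minimal sketch on a 2x2 symmetric matrix (an arbitrary example, not a flutter system):

```python
def power_iteration(a, x, sweeps=100):
    """Return the dominant-eigenvalue estimate (Rayleigh quotient) and
    the normalized iterate for a 2x2 matrix a, starting from vector x."""
    for _ in range(sweeps):
        y = [a[0][0] * x[0] + a[0][1] * x[1],
             a[1][0] * x[0] + a[1][1] * x[1]]
        norm = max(abs(y[0]), abs(y[1]))   # max-norm scaling each sweep
        x = [y[0] / norm, y[1] / norm]
    ax = [a[0][0] * x[0] + a[0][1] * x[1],
          a[1][0] * x[0] + a[1][1] * x[1]]
    lam = (ax[0] * x[0] + ax[1] * x[1]) / (x[0] * x[0] + x[1] * x[1])
    return lam, x
```

Wielandt's transformation extends this idea by deflating or shifting the operator so that successive characteristic values and modes can be extracted, which is what makes it applicable to flutter and natural-vibration problems.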
Application Of Iterative Reconstruction Techniques To Conventional Circular Tomography
NASA Astrophysics Data System (ADS)
Ghosh Roy, D. N.; Kruger, R. A.; Yih, B. C.; Del Rio, S. P.; Power, R. L.
1985-06-01
Two "point-by-point" iteration procedures, namely, Iterative Least Square Technique (ILST) and Simultaneous Iterative Reconstructive Technique (SIRT) were applied to classical circular tomographic reconstruction. The technique of tomosynthetic DSA was used in forming the tomographic images. Reconstructions of a dog's renal and neck anatomy are presented.
Forward marching procedure for separated boundary-layer flows
NASA Technical Reports Server (NTRS)
Carter, J. E.; Wornom, S. F.
1975-01-01
A forward-marching procedure for separated boundary-layer flows which permits the rapid and accurate solution of flows of limited extent is presented. The streamwise convection of vorticity in the reversed flow region is neglected, and this approximation is incorporated into a previously developed (Carter, 1974) inverse boundary-layer procedure. The equations are solved by the Crank-Nicolson finite-difference scheme in which column iteration is carried out at each streamwise station. Instabilities encountered in the column iterations are removed by introducing timelike terms in the finite-difference equations. This provides both unconditional diagonal dominance and a column iterative scheme, found to be stable using the von Neumann stability analysis.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1975-01-01
New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
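The iterative procedure analyzed above is, in modern terms, the EM algorithm for normal mixtures; a compact sketch on synthetic two-component data (the model and data are invented for illustration):

```python
import math, random

def em_step(data, w, mu, sigma):
    """One EM update for a two-component univariate normal mixture."""
    # E-step: posterior responsibility of each component for each point.
    resp = []
    for x in data:
        p = [w[k] / (sigma[k] * math.sqrt(2 * math.pi))
             * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
             for k in range(2)]
        s = p[0] + p[1]
        resp.append([p[0] / s, p[1] / s])
    # M-step: re-estimate weights, means, and std devs from responsibilities.
    n = len(data)
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / n
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(max(var, 1e-12))
    return w, mu, sigma

random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(500)]
        + [random.gauss(6.0, 1.0) for _ in range(500)])
w, mu, sigma = [0.5, 0.5], [-1.0, 7.0], [1.0, 1.0]
for _ in range(100):
    w, mu, sigma = em_step(data, w, mu, sigma)
```

Each step never decreases the log-likelihood, and with the starting values bracketing the true means the iteration converges to the consistent estimate, matching the local-convergence behavior established in the paper.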
A Monte Carlo Study of an Iterative Wald Test Procedure for DIF Analysis
ERIC Educational Resources Information Center
Cao, Mengyang; Tay, Louis; Liu, Yaowu
2017-01-01
This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…
Iterative pass optimization of sequence data
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendant information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
Performance evaluation of an automatic MGRF-based lung segmentation approach
NASA Astrophysics Data System (ADS)
Soliman, Ahmed; Khalifa, Fahmi; Alansary, Amir; Gimel'farb, Georgy; El-Baz, Ayman
2013-10-01
The segmentation of the lung tissues in chest Computed Tomography (CT) images is an important step for developing any Computer-Aided Diagnostic (CAD) system for lung cancer and other pulmonary diseases. In this paper, we introduce a new framework for validating the accuracy of our developed Joint Markov-Gibbs based lung segmentation approach using 3D realistic synthetic phantoms. These phantoms are created using a 3D Generalized Gauss-Markov Random Field (GGMRF) model of voxel intensities with pairwise interaction to model the 3D appearance of the lung tissues. Then, the appearance of the generated 3D phantoms is simulated based on iterative minimization of an energy function that is based on the learned 3D-GGMRF image model. These 3D realistic phantoms can be used to evaluate the performance of any lung segmentation approach. The performance of our segmentation approach is evaluated using three metrics, namely, the Dice Similarity Coefficient (DSC), the modified Hausdorff distance, and the Average Volume Difference (AVD) between our segmentation and the ground truth. Our approach achieves mean values of 0.994±0.003, 8.844±2.495 mm, and 0.784±0.912 mm3, for the DSC, Hausdorff distance, and the AVD, respectively.
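The Dice Similarity Coefficient reported above is straightforward to compute on binary masks; a minimal sketch (the helper name is illustrative):

```python
def dice_coefficient(seg, truth):
    """Dice Similarity Coefficient between two binary masks
    (same-length iterables of 0/1): 2|A n B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(seg, truth))
    total = sum(seg) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty: perfect agreement

# identical masks give 1.0; half-overlapping masks give 0.5
print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5
```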
Improved evaluation of optical depth components from Langley plot data
NASA Technical Reports Server (NTRS)
Biggar, S. F.; Gellman, D. I.; Slater, P. N.
1990-01-01
A simple, iterative procedure to determine the optical depth components of the extinction optical depth measured by a solar radiometer is presented. Simulated data show that the iterative procedure improves the determination of the exponent of a Junge law particle size distribution. The determination of the optical depth due to aerosol scattering is improved as compared to a method which uses only two points from the extinction data. The iterative method was used to determine spectral optical depth components for June 11-13, 1988 during the MAC III experiment.
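One building block of such a procedure is fitting the aerosol (Junge/Ångström) power law to all spectral points at once rather than to just two of them; a sketch (the function and the log-log least-squares formulation are illustrative, not the authors' code):

```python
import math

def fit_angstrom(wavelengths_um, tau_aerosol):
    """Least-squares fit of the Angstrom/Junge power law
    tau_a = beta * lambda**(-alpha), using ALL spectral points
    via a linear regression in log-log space."""
    xs = [math.log(w) for w in wavelengths_um]
    ys = [math.log(t) for t in tau_aerosol]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    alpha = -slope                       # Junge/Angstrom exponent
    beta = math.exp(my + alpha * mx)     # turbidity coefficient
    return alpha, beta
```

In an iterative scheme, such a fit would be alternated with re-estimation of the other optical depth components until the exponent stabilizes.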
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, D. C.; Gu, X.; Haldenman, S.
The curing of cross-linkable encapsulation is a critical consideration for photovoltaic (PV) modules manufactured using a lamination process. Concerns related to ethylene-co-vinyl acetate (EVA) include the quality (e.g., expiration and uniformity) of the films or completion (duration) of the cross-linking of the EVA within a laminator. Because these issues are important to both EVA and module manufacturers, an international standard has recently been proposed by the Encapsulation Task-Group within the Working Group 2 (WG2) of the International Electrotechnical Commission (IEC) Technical Committee 82 (TC82) for the quantification of the degree of cure for EVA encapsulation. The present draft of the standard calls for the use of differential scanning calorimetry (DSC) as the rapid, enabling secondary (test) method. Both the residual enthalpy- and melt/freeze-DSC methods are identified. The DSC methods are calibrated against the gel content test, the primary (reference) method. Aspects of other established methods, including indentation and rotor cure metering, were considered by the group. Key details of the test procedure will be described.
The role of simulation in the design of a neural network chip
NASA Technical Reports Server (NTRS)
Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.
1993-01-01
An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.
In this work the performance of two neutron spectrum unfolding codes, one based on a traditional iterative procedure and the other on artificial neural networks, is evaluated. The first code, Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), was designed using neural-network technology. The artificial-intelligence approach of a neural network does not solve mathematical equations; by using the knowledge stored in the synaptic weights of a properly trained neural network, the code is able to unfold the neutron spectrum and simultaneously calculate 15 dosimetric quantities, needing as input only the count rates measured with a Bonner sphere system. The NSDUAZ and NSDann codes are similar in that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities, and generate a full report in HTML format. They differ in that NSDUAZ was designed using a classical iterative approach and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using fluence-to-dose conversion coefficients. NSDann uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in the neural-network approach it is possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer tool called the Neutron Spectrometry and Dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work the performance of two neutron spectrum unfolding codes, one based on a traditional iterative procedure and the other on artificial neural networks, is evaluated. The first code, Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), was designed using neural-network technology. The artificial-intelligence approach of a neural network does not solve mathematical equations; by using the knowledge stored in the synaptic weights of a properly trained neural network, the code is able to unfold the neutron spectrum and simultaneously calculate 15 dosimetric quantities, needing as input only the count rates measured with a Bonner sphere system. The NSDUAZ and NSDann codes are similar in that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities, and generate a full report in HTML format. They differ in that NSDUAZ was designed using a classical iterative approach and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using fluence-to-dose conversion coefficients. NSDann uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in the neural-network approach it is possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer tool called the Neutron Spectrometry and Dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derksen, A; Koenig, L; Heldmann, S
Purpose: To improve results of deformable image registration (DIR) in adaptive radiotherapy for large bladder deformations in CT/CBCT pelvis imaging. Methods: A variational multi-modal DIR algorithm is incorporated in a joint iterative scheme, alternating between segmentation-based bladder matching and registration. Using an initial DIR to propagate the bladder contour to the CBCT, in a segmentation step the contour is improved by discrete image-gradient sampling along all surface normals and adapting the delineation to match the location of each maximum (with a search range of ±5/2 mm at the superior/inferior bladder side and a step size of 0.5 mm). An additional graph-cut based constraint limits the maximum difference between neighboring points. This improved contour is utilized in a subsequent DIR with a surface-matching constraint. By calculating a Euclidean distance map of the improved contour surface, the new constraint enforces the DIR to map each point of the original contour onto the improved contour. The resulting deformation is then used as a starting guess to compute a deformation update, which can again be used for the next segmentation step. The result is a dense deformation able to capture much larger bladder deformations. The new method is evaluated on ten CT/CBCT male pelvis datasets, calculating Dice similarity coefficients (DSC) between the final propagated bladder contour and a manually delineated gold standard on the CBCT image. Results: Over all ten cases, an average DSC of 0.93±0.03 is achieved on the bladder. Compared with the initial DIR (0.88±0.05), the DSC is equal (2 cases) or improved (8 cases). Additionally, the DSC accuracy of femoral bones (0.94±0.02) was not affected. Conclusion: The new approach shows that using the presented alternating segmentation/registration approach, the results of bladder DIR in the pelvis region can be greatly improved, especially for cases with large variations in bladder volume.
Fraunhofer MEVIS received funding from a research grant by Varian Medical Systems.
77 FR 28383 - Information Collection Being Reviewed by the Federal Communications Commission
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-14
..., Digital Selective Calling (DSC) Operating Procedures--Maritime Mobile Identity (MMSI). Form Number: N/A... Annual Cost: N/A. Privacy Impact Assessment: Yes. The Commission maintains a system of records notice (SORN), FCC/WTB-1, ``Wireless Services Licensing Records,'' that covers this collection, purpose(s...
Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan
2014-08-20
In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive-relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved one order of magnitude improvement in iteration number and two orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the computation time for processing an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which can provide a real-time procedure in practical situations.
2014-10-01
It is well known that nonlinear and non-stationary signal analysis is important and difficult. The method aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, D.; Wohlgemuth, J.; Gu, X.
2013-11-01
The curing of cross-linkable encapsulation is a critical consideration for photovoltaic (PV) modules manufactured using a lamination process. Concerns related to ethylene-co-vinyl acetate (EVA) include the quality (e.g., expiration and uniformity) of the films or completion (duration) of the cross-linking of the EVA within a laminator. Because these issues are important to both EVA and module manufacturers, an international standard has recently been proposed by the Encapsulation Task-Group within the Working Group 2 (WG2) of the International Electrotechnical Commission (IEC) Technical Committee 82 (TC82) for the quantification of the degree of cure for EVA encapsulation. The present draft of the standard calls for the use of differential scanning calorimetry (DSC) as the rapid, enabling secondary (test) method. Both the residual enthalpy- and melt/freeze-DSC methods are identified. The DSC methods are calibrated against the gel content test, the primary (reference) method. Aspects of other established methods, including indentation and rotor cure metering, were considered by the group. Key details of the test procedure will be described.
Transport synthetic acceleration with opposing reflecting boundary conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zika, M.R.; Adams, M.L.
2000-02-01
The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.
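The stationary Richardson iteration referred to above has a compact generic form; a sketch (the test matrix below is an arbitrary example, and preconditioning, as in TSA, would replace the residual with a preconditioned one):

```python
import numpy as np

def richardson(A, b, omega, n_iter=500):
    """Stationary Richardson iteration x_{k+1} = x_k + omega*(b - A @ x_k);
    it converges when the spectral radius of (I - omega*A) is below 1.
    Synthetic acceleration amounts to preconditioning this update."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        x = x + omega * (b - A @ x)
    return x

# small symmetric positive definite example
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = richardson(A, b, omega=0.3)
```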
Differential Scanning Calorimetry Techniques: Applications in Biology and Nanoscience
Gill, Pooria; Moghadam, Tahereh Tohidi; Ranjbar, Bijan
2010-01-01
This paper reviews the best-known differential scanning calorimetries (DSCs), such as conventional DSC, microelectromechanical systems-DSC, infrared-heated DSC, modulated-temperature DSC, gas flow-modulated DSC, parallel-nano DSC, pressure perturbation calorimetry, self-reference DSC, and high-performance DSC. Also, we describe here the most extensive applications of DSC in biology and nanoscience. PMID:21119929
Upwind relaxation methods for the Navier-Stokes equations using inner iterations
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Ng, Wing-Fai; Walters, Robert W.
1992-01-01
A subsonic and a supersonic problem are respectively treated by an upwind line-relaxation algorithm for the Navier-Stokes equations using inner iterations to accelerate steady-state solution convergence and thereby minimize CPU time. While the ability of the inner iterative procedure to mimic the quadratic convergence of the direct solver method is attested to in both test problems, some of the nonquadratic inner iterative results are noted to have been more efficient than the quadratic. In the more successful, supersonic test case, inner iteration required only about 65 percent of the line-relaxation method-entailed CPU time.
Spotting the difference in molecular dynamics simulations of biomolecules
NASA Astrophysics Data System (ADS)
Sakuraba, Shun; Kono, Hidetoshi
2016-08-01
Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
Self-consistent hybrid functionals for solids: a fully-automated implementation
NASA Astrophysics Data System (ADS)
Erba, A.
2017-08-01
A fully-automated algorithm for the determination of the system-specific optimal fraction of exact exchange in self-consistent hybrid functionals of density functional theory is illustrated, as implemented into the public Crystal program. The exchange fraction of this new class of functionals is self-consistently updated proportionally to the inverse of the dielectric response of the system within an iterative procedure (Skone et al 2014 Phys. Rev. B 89 195112). Each iteration of the present scheme, in turn, implies convergence of a self-consistent-field (SCF) and a coupled-perturbed-Hartree-Fock/Kohn-Sham (CPHF/KS) procedure. The present implementation, besides improving the user-friendliness of self-consistent hybrids, exploits the unperturbed and electric-field-perturbed density matrices from previous iterations as guesses for subsequent SCF and CPHF/KS iterations, which is documented to reduce the overall computational cost of the whole process by a factor of 2.
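The self-consistent update of the exchange fraction is a fixed-point iteration of the form alpha ← 1/eps_inf(alpha); a toy sketch (the dielectric model below is an assumption purely for illustration, standing in for the expensive SCF + CPHF/KS evaluation):

```python
def self_consistent_alpha(eps_of_alpha, alpha0=0.25, tol=1e-8, max_iter=100):
    """Fixed-point iteration for the exact-exchange fraction:
    alpha is updated to 1/eps_infinity, where eps_infinity itself
    depends on the functional (and hence on alpha)."""
    alpha = alpha0
    for _ in range(max_iter):
        new = 1.0 / eps_of_alpha(alpha)
        if abs(new - alpha) < tol:
            return new
        alpha = new
    return alpha

# toy dielectric model (an assumption, not a real material):
# eps_infinity grows mildly with alpha
alpha = self_consistent_alpha(lambda a: 4.0 + a)
```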
Further investigation on "A multiplicative regularization for force reconstruction"
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
47 CFR 80.1123 - Watch requirements for ship stations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Procedures for Distress and Safety Communications § 80.1123 Watch requirements for ship stations. (a) While at sea, all ships must maintain a continuous watch: (1) On VHF DSC channel 70, if the ship is fitted...
47 CFR 80.1123 - Watch requirements for ship stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Procedures for Distress and Safety Communications § 80.1123 Watch requirements for ship stations. (a) While at sea, all ships must maintain a continuous watch: (1) On VHF DSC channel 70, if the ship is fitted...
Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.
Xie, Xianming
2016-08-22
A novel phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase-gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust methods within the Bayesian framework for nonlinear signal processing to date, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which can simplify, and even remove, the pre-filtering procedure that ordinarily precedes phase unwrapping. The robust phase-gradient estimator is used to obtain phase-gradient information from interferometric fringes efficiently and accurately, as needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method quickly unwraps wrapped pixels along the path from the high-quality area to the low-quality area of wrapped phase images, which can greatly improve the efficiency of phase unwrapping. Results obtained from synthetic data and real data show that the proposed method can obtain better solutions with an acceptable time consumption, with respect to some of the most used algorithms.
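The heap-sort-based quality-guided ordering can be sketched with Python's heapq (region growing and the unwrapping update itself are omitted; the function name is illustrative):

```python
import heapq

def quality_guided_order(quality):
    """Visit pixels from highest to lowest quality using a heap.
    heapq is a min-heap, so quality values are negated to pop the
    best pixel first; in a real unwrapper, neighbors of each
    unwrapped pixel would be pushed as the region grows."""
    heap = [(-q, i) for i, q in enumerate(quality)]
    heapq.heapify(heap)
    order = []
    while heap:
        _neg_q, i = heapq.heappop(heap)
        order.append(i)
    return order
```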
A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images
NASA Astrophysics Data System (ADS)
Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.
2015-07-01
Radiographic images, as any experimentally acquired ones, are affected by spoiling agents which degrade their final quality. The degradation caused by agents of systematic character can be reduced by some kind of treatment, such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations, in order to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces the cumbersome visual inspection by a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied into a Fortran program capable of plotting the 1st derivative of G as the processing progresses and of stopping it automatically when this derivative, within the data dispersion, reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
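A hedged sketch of the idea, with a 1-D Richardson-Lucy loop and a histogram-difference stopping test (an illustration of the criterion, not the authors' Fortran code; the bin count and the plateau test are assumptions):

```python
import numpy as np

def rl_with_histogram_stop(blurred, psf, max_iter=100, bins=32):
    """1-D Richardson-Lucy deconvolution halted when the global
    histogram difference G between successive restored images
    stops changing (its first derivative reaches ~0)."""
    psf = psf / psf.sum()
    est = blurred.astype(float)
    prev_G = None
    for it in range(1, max_iter + 1):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        new = est * np.convolve(ratio, psf[::-1], mode="same")  # RL update
        # global difference G between histograms of two iterates
        lo = min(est.min(), new.min())
        hi = max(est.max(), new.max())
        h_old, _ = np.histogram(est, bins=bins, range=(lo, hi))
        h_new, _ = np.histogram(new, bins=bins, range=(lo, hi))
        G = int(np.abs(h_old - h_new).sum())
        est = new
        if prev_G is not None and G == prev_G:  # derivative of G ~ 0: stop
            return est, it
        prev_G = G
    return est, max_iter
```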
Convergence of an iterative procedure for large-scale static analysis of structural components
NASA Technical Reports Server (NTRS)
Austin, F.; Ojalvo, I. U.
1976-01-01
The paper proves convergence of an iterative procedure for calculating the deflections of built-up component structures which can be represented as consisting of a dominant, relatively stiff primary structure and a less stiff secondary structure, which may be composed of one or more substructures that are not connected to one another but are all connected to the primary structure. The iteration consists in estimating the deformation of the primary structure in the absence of the secondary structure on the assumption that all mechanical loads are applied directly to the primary structure. The j-th iterate primary structure deflections at the interface are imposed on the secondary structure, and the boundary loads required to produce these deflections are computed. The cycle is completed by applying the interface reaction to the primary structure and computing its updated deflections. It is shown that the mathematical condition for convergence of this procedure is that the maximum eigenvalue of the equation relating primary-structure deflection to imposed secondary-structure deflection be less than unity, which is shown to correspond with the physical requirement that the secondary structure be more flexible at the interface boundary.
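For a single interface degree of freedom the iteration reduces to a scalar fixed point; a toy sketch (the spring constants are arbitrary) that also exhibits the stated convergence condition:

```python
def coupled_deflection(f, k_primary, k_secondary, n_iter=60):
    """Iterative primary/secondary coupling for a 1-DOF model:
    apply the load to the primary structure, impose the resulting
    interface deflection on the secondary structure, feed its
    reaction back to the primary structure, and repeat.
    Converges when k_secondary / k_primary < 1, i.e. when the
    secondary structure is the more flexible one at the interface."""
    u = 0.0
    for _ in range(n_iter):
        reaction = -k_secondary * u        # secondary pushes back
        u = (f + reaction) / k_primary     # primary under load + reaction
    return u

# exact answer for the two stiffnesses acting at the node: f / (k1 + k2)
u = coupled_deflection(10.0, k_primary=100.0, k_secondary=20.0)
```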
Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.
2014-08-21
In ITER, diagnostics will operate in the very hard radiation environment of a fusion reactor. Extensive technology studies are carried out during development of the ITER diagnostics and the procedures for their calibration and remote handling. Results of these studies and practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy including first-mirror protection and cleaning techniques, reflectometry, refractometry, tritium retention measurements, etc., are discussed.
Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER
NASA Astrophysics Data System (ADS)
Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.
2014-08-01
In ITER, diagnostics will operate in the very hard radiation environment of a fusion reactor. Extensive technology studies are carried out during development of the ITER diagnostics and the procedures for their calibration and remote handling. Results of these studies and practical application of the developed diagnostics on ITER will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy including first-mirror protection and cleaning techniques, reflectometry, refractometry, tritium retention measurements, etc., are discussed.
Unsupervised iterative detection of land mines in highly cluttered environments.
Batman, Sinan; Goutsias, John
2003-01-01
An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that with an appropriate combination of two optical simulation techniques-classical ray-tracing and the finite difference time domain method-an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
NASA Technical Reports Server (NTRS)
Sankaran, V.
1974-01-01
An iterative procedure for determining the constant gain matrix that will stabilize a linear constant multivariable system using output feedback is described. The use of this procedure avoids the transformation of variables required in other procedures. For the case in which the product of the output and input vector dimensions is greater than the number of states of the plant, a general solution is given. For the case in which the number of states exceeds the product of the input and output vector dimensions, a least-squares solution, which may not be stable in all cases, is presented. The results are illustrated with examples.
Non-iterative distance constraints enforcement for cloth drapes simulation
NASA Astrophysics Data System (ADS)
Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno
2016-03-01
Cloth simulation, which represents the behavior of cloth objects such as flags, tablecloths, or garments, has applications in clothing animation for games and virtual shops. Elastically deformable models have been widely used to provide realistic and efficient simulation; however, they suffer from the problem of overstretching. We introduce a new cloth simulation algorithm that replaces iterative distance-constraint enforcement steps with non-iterative ones to prevent overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position-correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model that starts in a horizontal position with one fixed point and is allowed to drape under its own weight. Our simulation achieves plausible cloth drapes, as in reality. This paper demonstrates the reliability of our approach in overcoming overstretching while decreasing the computational cost of constraint enforcement, since the iterative procedure is eliminated.
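A sketch of such a position-correction step (a minimal illustration with hypothetical rest-length and stretch-limit values, not the authors' implementation):

```python
import numpy as np

def enforce_distance(p_fixed, p_free, rest_length, max_stretch=1.1):
    """Non-iterative constraint enforcement: if a spring is overstretched,
    move its free endpoint back along the spring axis to the allowed length."""
    d = p_free - p_fixed
    length = np.linalg.norm(d)
    limit = rest_length * max_stretch
    if length > limit:
        p_free = p_fixed + d * (limit / length)
    return p_free

# A mass hanging from a fixed point, overstretched by gravity:
fixed = np.array([0.0, 0.0])
free = np.array([0.0, -1.5])   # rest length 1.0, stretched by 50%
corrected = enforce_distance(fixed, free, rest_length=1.0)
print(corrected)               # clamped back to 10% stretch
```

Because the correction is applied once per spring rather than relaxed to convergence, its cost is linear in the number of springs.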
Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.
Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo
2017-03-03
Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens owing to the availability of the water window region. In particular, projection-type microscopy has the advantages of a wide viewing area, easy zooming, and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of X-rays, which reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, the correction was found not to be effective for all images, especially images with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as a substitute for chromosome images, one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two different parameters to evaluate noise effects according to each situation in which the iteration procedure was not successful, and proposed an upper limit of the noise within which effective iteration for the chromosome images is possible. The study indicated that the new simulation and noise evaluation method is useful for image processing in which background noise cannot be ignored relative to the specimen image.
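In the noise-free case, the Fresnel blur is exactly invertible, which is why background noise becomes the limiting factor for the iteration. A minimal 1-D angular-spectrum sketch (illustrative wavelength and geometry, not the authors' code):

```python
import numpy as np

def fresnel_propagate(u0, dx, wavelength, z):
    """Propagate a 1-D complex field over distance z using the
    Fresnel (paraxial angular-spectrum) transfer function."""
    fx = np.fft.fftfreq(u0.size, d=dx)
    H = np.exp(-1j * np.pi * wavelength * z * fx**2)  # Fresnel kernel
    return np.fft.ifft(np.fft.fft(u0) * H)

def fresnel_correct(u_blurred, dx, wavelength, z):
    """Inverse Fresnel transform: propagate back by -z to undo the blur."""
    return fresnel_propagate(u_blurred, dx, wavelength, -z)

# Noise-free round trip recovers the original field:
x = np.linspace(-50e-6, 50e-6, 512)
u = (np.abs(x) < 10e-6).astype(complex)      # 20 um slit "specimen"
blurred = fresnel_propagate(u, x[1] - x[0], 2.4e-9, 1e-3)
restored = fresnel_correct(blurred, x[1] - x[0], 2.4e-9, 1e-3)
print(np.max(np.abs(restored - u)) < 1e-10)  # -> True
```

Once random noise is added to `blurred`, the back-propagation amplifies it, motivating the paper's noise limits.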
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1991-01-01
Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
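The inspector/executor split described above can be sketched serially as follows (a toy model; the dependency encoding `dep[i]` is an assumption for illustration):

```python
def inspector(dep):
    """Execution-time preprocessing: compute a wavefront number for each
    loop iteration. dep[i] is the earlier iteration this one reads,
    or -1 if it is independent."""
    wf = [0] * len(dep)
    for i, d in enumerate(dep):
        if d >= 0:
            wf[i] = wf[d] + 1
    return wf

def executor(x, dep, wf):
    """Execute iterations wavefront by wavefront; iterations within a
    wavefront are mutually independent and could run in parallel."""
    for level in range(max(wf) + 1):
        for i in [j for j, w in enumerate(wf) if w == level]:
            if dep[i] >= 0:
                x[i] = x[dep[i]] + 1

# Loop body x[i] = x[dep[i]] + 1 with a run-time dependency structure:
dep = [-1, 0, 1, 0, -1, 3]
x = [0, 0, 0, 0, 0, 0]
executor(x, dep, inspector(dep))
print(x)  # -> [0, 1, 2, 1, 0, 2]
```

As the abstract notes, the inspector cost is amortized when the loop re-executes with the same dependency structure, since the wavefront schedule can be reused.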
Solving Differential Equations Using Modified Picard Iteration
ERIC Educational Resources Information Center
Robin, W. A.
2010-01-01
Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
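A numerical sketch of such a Picard scheme for an initial value problem, with the integral approximated by the cumulative trapezoidal rule (an illustration, not the article's derivation):

```python
import numpy as np

def picard(f, y0, x, iterations=20):
    """Picard iteration y_{n+1}(x) = y0 + integral_0^x f(t, y_n(t)) dt,
    with the integral evaluated by the cumulative trapezoidal rule."""
    y = np.full_like(x, y0, dtype=float)
    for _ in range(iterations):
        g = f(x, y)
        integral = np.concatenate(([0.0],
            np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(x))))
        y = y0 + integral
    return y

# y' = y, y(0) = 1  ->  y = exp(x); only quadrature error remains
x = np.linspace(0.0, 1.0, 201)
y = picard(lambda t, y: y, 1.0, x)
print(np.max(np.abs(y - np.exp(x))))  # small (trapezoidal discretization error)
```

Each iteration reproduces one more term of the Taylor series of the solution, so convergence on a bounded interval is rapid.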
NASA Astrophysics Data System (ADS)
Wang, K.; Luo, Y.; Yang, Y.
2016-12-01
We collect two months of ambient noise data recorded by 35 broadband seismic stations in a 9×11 km area near Karamay, China, and compute cross-correlations of the noise data between all station pairs. Array beamforming analysis of the ambient noise data shows that the noise sources are unevenly distributed and that the most energetic ambient noise comes mainly from azimuths of 40°–70°. As a consequence of the strong directional noise sources, surface wave waveforms of the cross-correlations at 1–5 Hz show a clear azimuthal dependence, and direct dispersion measurements from the cross-correlations are strongly biased by the dominant noise energy. As a result of this bias, the dispersion measurements do not accurately reflect the interstation velocities of surface waves propagating directly from one station to the other; that is, the cross-correlation functions do not accurately retrieve the Empirical Green's Functions. To correct the bias caused by unevenly distributed noise sources, we adopt an iterative inversion procedure based on plane-wave modeling, which includes three steps: (1) surface wave tomography, (2) estimation of ambient noise energy and (3) phase velocity correction. First, we use synthesized data to test the efficiency and stability of the iterative procedure for both homogeneous and heterogeneous media. The testing results show that: (1) the phase velocity bias caused by directional noise sources is significant, reaching 2% and 10% for homogeneous and heterogeneous media, respectively; (2) the phase velocity bias can be corrected by the iterative inversion procedure, and the convergence of the inversion depends on the starting phase velocity map and the complexity of the media.
By applying the iterative approach to the real data from Karamay, we further show that the phase velocity maps converge after ten iterations and that the phase velocity map based on corrected interstation dispersion measurements is more consistent with results from geological surveys than that based on uncorrected measurements. As ambient noise in the high-frequency band (>1 Hz) is mostly related to human activities or climate events, both of which have strong directivity, the iterative approach demonstrated here helps improve the accuracy and resolution of ambient noise tomography (ANT) in imaging shallow earth structures.
Elhawary, Haytham; Oguro, Sota; Tuncali, Kemal; Morrison, Paul R.; Tatli, Servet; Shyn, Paul B.; Silverman, Stuart G.; Hata, Nobuhiko
2010-01-01
Rationale and Objectives To develop non-rigid image registration between pre-procedure contrast enhanced MR images and intra-procedure unenhanced CT images, to enhance tumor visualization and localization during CT-guided liver tumor cryoablation procedures. Materials and Methods After IRB approval, a non-rigid registration (NRR) technique was evaluated with different pre-processing steps and algorithm parameters and compared to a standard rigid registration (RR) approach. The Dice Similarity Coefficient (DSC), Target Registration Error (TRE), 95% Hausdorff distance (HD) and total registration time (minutes) were compared using a two-sided Student’s t-test. The entire registration method was then applied during five CT-guided liver cryoablation cases with the intra-procedural CT data transmitted directly from the CT scanner, with both accuracy and registration time evaluated. Results Selected optimal parameters for registration were section thickness of 5mm, cropping the field of view to 66% of its original size, manual segmentation of the liver, B-spline control grid of 5×5×5 and spatial sampling of 50,000 pixels. Mean 95% HD of 3.3mm (2.5x improvement compared to RR, p<0.05); mean DSC metric of 0.97 (13% increase); and mean TRE of 4.1mm (2.7x reduction) were measured. During the cryoablation procedure registration between the pre-procedure MR and the planning intra-procedure CT took a mean time of 10.6 minutes, the MR to targeting CT image took 4 minutes and MR to monitoring CT took 4.3 minutes. Mean registration accuracy was under 3.4mm. Conclusion Non-rigid registration allowed improved visualization of the tumor during interventional planning, targeting and evaluation of tumor coverage by the ice ball. Future work is focused on reducing segmentation time to make the method more clinically acceptable. PMID:20817574
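The Dice Similarity Coefficient used here (and in several of the following studies) measures the overlap of two binary segmentations; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient of two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 voxels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 voxels, 4 shared
print(dice(a, b))  # -> 0.8
```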
Establishing Factor Validity Using Variable Reduction in Confirmatory Factor Analysis.
ERIC Educational Resources Information Center
Hofmann, Rich
1995-01-01
Using a 21-statement attitude-type instrument, an iterative procedure for improving confirmatory model fit is demonstrated within the context of the EQS program of P. M. Bentler and maximum likelihood factor analysis. Each iteration systematically eliminates the poorest fitting statement as identified by a variable fit index. (SLD)
Freire, Paulo G L; Ferrari, Ricardo J
2016-06-01
Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. The segmentation of MS lesions in magnetic resonance imaging (MRI) is a very important task to assess how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions and reduce the amount of time spent on manual delineation and inter- and intra-rater variability and bias. However, fully-automatic segmentation of MS lesions still remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach by iteratively segmenting brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate our technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has a good spatial and volumetric agreement with raters' manual delineations. Also, a comparison between our proposal and the state-of-the-art shows that our technique is comparable and, in some cases, better than some approaches, thus being a viable alternative for automatic MS lesion segmentation in MRI. Copyright © 2016 Elsevier Ltd. All rights reserved.
On the solution of evolution equations based on multigrid and explicit iterative methods
NASA Astrophysics Data System (ADS)
Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.
2015-08-01
Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen as based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
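The explicit iterative scheme can be illustrated by a Chebyshev-accelerated Richardson iteration, with step sizes derived from the roots of the Chebyshev polynomial on the spectral interval (a simplified sketch assuming known spectral bounds, not the authors' parabolic-equation scheme):

```python
import numpy as np

def chebyshev_richardson(A, b, x0, lam_min, lam_max, n_steps):
    """Explicit iteration x_{k+1} = x_k + tau_k (b - A x_k), with the
    tau_k chosen from the Chebyshev polynomial roots on [lam_min, lam_max]
    so that the error is damped optimally after n_steps iterations."""
    c = 0.5 * (lam_max + lam_min)
    d = 0.5 * (lam_max - lam_min)
    x = x0.copy()
    for k in range(n_steps):
        tau = 1.0 / (c + d * np.cos(np.pi * (2 * k + 1) / (2 * n_steps)))
        x = x + tau * (b - A @ x)
    return x

A = np.diag([1.0, 2.0, 4.0])          # SPD with spectrum inside [1, 4]
b = np.array([1.0, 1.0, 1.0])
x = chebyshev_richardson(A, b, np.zeros(3), 1.0, 4.0, 12)
print(np.max(np.abs(x - np.array([1.0, 0.5, 0.25]))))  # small (~4e-6)
```

Such schemes trade implicit-solve cost for a number of cheap explicit sweeps governed by the spectral bounds, which is the trade-off the paper examines.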
NASA Technical Reports Server (NTRS)
Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.
1987-01-01
In this paper, steady, axisymmetric inviscid, and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as a part of the output. The second is based on direct Newton iterations, where the linearized equations, for all the unknowns, are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are demonstrated, followed by a discussion of the results. Some improvements on previous work have been achieved: first order upwind differences are replaced by second order schemes, line relaxation procedure (with linear convergence rate) is replaced by Newton's iterations (which converge quadratically), and Reynolds numbers are extended from 200 up to 1000.
2014-02-01
…idle waiting for the wavefront to reach it. To overcome this, Reeve et al. (2001) developed a scheme in analogy to the red-black Gauss-Seidel iterative … understandable procedure calls. Parallelization of the SIMPLE iterative scheme with SIP used a red-black scheme similar to the red-black Gauss-Seidel … scheme, the SIMPLE method, for pressure-velocity coupling. The result is a slowing convergence of the outer iterations. The red-black scheme excites a 2…
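The red-black Gauss-Seidel idea referred to in the excerpt updates the two interleaved grid colors alternately, so all points of one color can be relaxed in parallel with no wavefront wait. A minimal sketch for the 2-D Laplace equation (illustrative only; not the report's SIMPLE/SIP solver):

```python
import numpy as np

def red_black_gauss_seidel(u, sweeps):
    """Red-black Gauss-Seidel for the 2-D Laplace equation.
    All 'red' interior points (i+j even) are updated first, then all
    'black' points; each color set is independent, so each half-sweep
    parallelizes trivially."""
    i, j = np.meshgrid(np.arange(u.shape[0]), np.arange(u.shape[1]),
                       indexing="ij")
    interior = (i > 0) & (i < u.shape[0] - 1) & (j > 0) & (j < u.shape[1] - 1)
    for _ in range(sweeps):
        for color in (0, 1):
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                          + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            mask = interior & ((i + j) % 2 == color)
            u[mask] = avg[mask]
    return u

# Boundary u=1 on the top edge, 0 elsewhere; interior relaxes smoothly.
u = np.zeros((17, 17)); u[0, :] = 1.0
u = red_black_gauss_seidel(u, 500)
print(abs(u[8, 8] - 0.25) < 1e-3)  # centre value 0.25 by symmetry -> True
```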
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
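A minimal sketch of such a step-size-controlled successive-approximations (EM-type) iteration, for a two-component 1-D normal mixture with known common variance (the synthetic data and parameter choices are assumptions for illustration):

```python
import numpy as np

def em_step(x, w, mu, sigma):
    """One successive-approximations (EM) update for a two-component
    1-D normal mixture; normalization constants cancel in the
    responsibilities since both components share sigma."""
    pdf = lambda m: np.exp(-0.5 * ((x - m) / sigma) ** 2)
    r1 = w * pdf(mu[0])
    r2 = (1 - w) * pdf(mu[1])
    g = r1 / (r1 + r2)                    # responsibilities of component 1
    return g.mean(), np.array([np.sum(g * x) / np.sum(g),
                               np.sum((1 - g) * x) / np.sum(1 - g)])

def iterate(x, w, mu, sigma, step=1.0, n=200):
    """Deflected-gradient iteration: step=1 recovers the plain
    successive-approximations procedure; 0 < step < 2 still converges
    locally, per the paper's result."""
    for _ in range(n):
        w_new, mu_new = em_step(x, w, mu, sigma)
        w = w + step * (w_new - w)
        mu = mu + step * (mu_new - mu)
    return w, mu

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])
w, mu = iterate(x, 0.3, np.array([-1.0, 6.0]), 1.0, step=1.5)
print(round(w, 2), np.round(np.sort(mu), 1))
```

With well-separated components, the over-relaxed step (here 1.5) converges in fewer iterations than step 1, illustrating the paper's point about the optimal step-size depending on component separation.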
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Q; Yan, D
2014-06-01
Purpose: Evaluate the accuracy of atlas-based auto segmentation of organs at risk (OARs) on both helical CT (HCT) and cone beam CT (CBCT) images in head and neck (HN) cancer adaptive radiotherapy (ART). Methods: Six HN patients treated in the ART process were included in this study. For each patient, three images were selected: pretreatment planning CT (PreTx-HCT), in-treatment CT for replanning (InTx-HCT) and a CBCT acquired on the same day as the InTx-HCT. Three clinical procedures of auto segmentation and deformable registration performed in the ART process were evaluated: a) auto segmentation on PreTx-HCT using multi-subject atlases, b) intra-patient propagation of OARs from PreTx-HCT to InTx-HCT using deformable HCT-to-HCT image registration, and c) intra-patient propagation of OARs from PreTx-HCT to CBCT using deformable CBCT-to-HCT image registration. Seven OARs (brainstem, cord, L/R parotid, L/R submandibular gland and mandible) were manually contoured on PreTx-HCT and InTx-HCT for comparison. In addition, manual contours on InTx-CT were copied onto the same-day CBCT, and a local-region rigid body registration was performed accordingly for each individual OAR. For procedures a) and b), auto contours were compared to manual contours, and for c) auto contours were compared to the rigidly transferred contours on CBCT. Dice similarity coefficients (DSC) and mean surface distances of agreement (MSDA) were calculated for evaluation. Results: For procedure a), the mean DSC/MSDA of most OARs were >80%/±2mm. For intra-patient HCT-to-HCT propagation, the results improved to >85%/±1.5mm. Compared to HCT-to-HCT, the mean DSC for HCT-to-CBCT propagation drops ∼2–3% and MSDA increases ∼0.2mm. This result indicates that the inferior imaging quality of CBCT degrades auto propagation performance only slightly. Conclusion: Auto segmentation and deformable propagation can generate OAR structures on HCT and CBCT images with clinically acceptable accuracy.
Therefore, they can be reliably implemented in the clinical HN ART process.
On Nonequivalence of Several Procedures of Structural Equation Modeling
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Chan, Wai
2005-01-01
The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure…
Comparing Instructional Strategies for Integrating Conceptual and Procedural Knowledge.
ERIC Educational Resources Information Center
Rittle-Johnson, Bethany; Koedinger, Kenneth R.
We compared alternative instructional strategies for integrating knowledge of decimal place value and regrouping concepts with procedures for adding and subtracting decimals. The first condition was based on recent research suggesting that conceptual and procedural knowledge develop in an iterative, hand over hand fashion. In this iterative…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paliwal, B; Asprey, W; Yan, Y
Purpose: In order to take advantage of the high-resolution soft tissue imaging available in MR images, we investigated 3D images obtained with the low-field 0.35 T MR system in ViewRay to serve as an alternative to CT scans for radiotherapy treatment planning. In these images, normal and target structure delineation can be visualized. Assessment is based upon comparison with the CT images and the ability to produce comparable contours. Methods: Routine radiation oncology CT scans were acquired on five patients. Contours of brain, brainstem, esophagus, heart, lungs, spinal cord, and the external body were drawn. The same five patients were then scanned using the ViewRay TrueFISP-based imaging pulse sequence. The same organs were selected on the MR images and compared to those from the CT scan. Physical volume and the Dice Similarity Coefficient (DSC) were used to assess the contours from the two systems. Image quality stability was quantitatively ensured throughout the study following the recommendations of the ACR MR accreditation procedure. Results: The highest DSC of 0.985, 0.863, and 0.843 were observed for brain, lungs, and heart respectively. On the other hand, the brainstem, spinal cord, and esophagus had the lowest DSC. Volume agreement was best for the heart (within 5%) and the brain (within 2%). Contour volume for the brainstem and lung (a highly dynamic organ) varied the most (27% and 19%). Conclusion: The DSC and volume measurements suggest that the results obtained from ViewRay images are quantitatively consistent and comparable to those obtained from CT scans for the brain, heart, and lungs. MR images from ViewRay are well-suited for treatment planning and for adaptive MRI-guided radiotherapy. The physical data from 0.35 T MR imaging is consistent with our geometrical understanding of normal structures.
Mosharraf, Mitra
2004-05-01
When determining the degree of disorder of a lyophilized cake of a protein, it is important to use an appropriate analytical technique. Differential scanning calorimetry (DSC) and X-ray powder diffraction (XRPD) are the most commonly used thermoanalytical techniques for characterizing freeze-dried protein formulations. Unfortunately, these methods are unable to detect solid-state disorder at levels < 10%. Also, interpretation of DSC results for freeze-dried protein formulations can be difficult, as a result of the more complex thermal events occurring with this technique. For example, proteins can inhibit the thermally induced recrystallization of the lyophilized cake, resulting in potential misinterpretation of DSC degree-of-disorder results. The aim of this investigation was to study the use of isothermal microcalorimetry (IMC) in the assessment of the degree of solid-state disorder (amorphicity) of lyophilized formulations of proteins. For this purpose, two formulations of growth hormone were prepared by lyophilization. These formulations consisted of the same amounts of protein, mannitol, glycine, and phosphate buffer, but differed in the freeze-drying procedure. After lyophilization, the recrystallization of the samples was studied using IMC at 25 degrees C under different relative humidities (58-75%). The effect of available surface area was studied by determining the heat of recrystallization (Q) of the samples before and after disintegration of the cakes. The results showed that, in contrast to DSC, IMC allowed detection of the recrystallization event in the formulations. Although both formulations were completely disordered and indistinguishable according to the XRPD method, IMC revealed that formulation B had a different solid-state structure than formulation A. This difference was the result of differences in the freeze-drying parameters, demonstrating the importance of choosing appropriate analytical methodology.
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1990-01-01
Run time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases, where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run time, wave fronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
ERIC Educational Resources Information Center
Hilchey, Christian Thomas
2014-01-01
This dissertation examines prefixation of simplex pairs. A simplex pair consists of an iterative imperfective and a semelfactive perfective verb. When prefixed, both of these verbs are perfective. The prefixed forms derived from semelfactives are labeled single act verbs, while the prefixed forms derived from iterative imperfective simplex verbs…
Probing Protein Sequences as Sources for Encrypted Antimicrobial Peptides
Brand, Guilherme D.; Magalhães, Mariana T. Q.; Tinoco, Maria L. P.; Aragão, Francisco J. L.; Nicoli, Jacques; Kelly, Sharon M.; Cooper, Alan; Bloch, Carlos
2012-01-01
Starting from the premise that a wealth of potentially biologically active peptides may lurk within proteins, we describe here a methodology to identify putative antimicrobial peptides encrypted in protein sequences. Candidate peptides were identified using a new screening procedure based on physicochemical criteria to reveal matching peptides within protein databases. Fifteen such peptides, along with a range of natural antimicrobial peptides, were examined using DSC and CD to characterize their interaction with phospholipid membranes. Principal component analysis of DSC data shows that the investigated peptides group according to their effects on the main phase transition of phospholipid vesicles, and that these effects correlate both to antimicrobial activity and to the changes in peptide secondary structure. Consequently, we have been able to identify novel antimicrobial peptides from larger proteins not hitherto associated with such activity, mimicking endogenous and/or exogenous microorganism enzymatic processing of parent proteins to smaller bioactive molecules. A biotechnological application for this methodology is explored. Soybean (Glycine max) plants, transformed to include a putative antimicrobial protein fragment encoded in its own genome were tested for tolerance against Phakopsora pachyrhizi, the causative agent of the Asian soybean rust. This procedure may represent an inventive alternative to the transgenic technology, since the genetic material to be used belongs to the host organism and not to exogenous sources. PMID:23029273
Biodiesel: Characterization by DSC and P-DSC
NASA Astrophysics Data System (ADS)
Chiriac, Rodica; Toche, François; Brylinski, Christian
Thermal analytical methods such as differential scanning calorimetry (DSC) have been successfully applied to neat petrodiesel and engine oils in the last 25 years. This chapter shows how DSC and P-DSC (pressurized DSC) techniques can be used to compare, characterize, and predict some properties of alternative non-petroleum fuels, such as cold flow behavior and oxidative stability. These two properties are extremely important with respect to the operability, transport, and long-term storage of biodiesel fuel. It is shown that the quantity of unsaturated fatty acids in the fuel composition has an important impact on both properties. In addition, it is shown that the impact of fuel additives on the oxidative stability or the cold flow behavior of biodiesel can be studied by means of DSC and P-DSC techniques. Thermomicroscopy can also be used to study the cold flow behavior of biodiesel, giving information on the size and the morphology of crystals formed at low temperature.
Comparison of atlas-based techniques for whole-body bone segmentation.
Arabi, Hossein; Zaidi, Habib
2017-02-01
We evaluate the accuracy of whole-body bone extraction from whole-body MR images using a number of atlas-based segmentation methods. The motivation behind this work is to find the most promising approach for the purpose of MRI-guided derivation of PET attenuation maps in whole-body PET/MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via leave-one-out cross validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC) and mean square distance (MSD) as image similarity measures for calculating the weighting factors, along with other atlas-dependent algorithms, such as (v) shape-based averaging (SBA) and (vi) Hofmann's pseudo-CT generation method. The performance evaluation of the different segmentation techniques was carried out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as Dice similarity (DSC) and relative volume difference (RVD) considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice criterion, global weighting atlas fusion methods provided moderate improvement of whole-body bone segmentation (DSC= 0.65 ± 0.05) compared to non-weighted IA (DSC= 0.60 ± 0.02). The local weighed atlas fusion approach using the MSD similarity measure outperformed the other strategies by achieving a DSC of 0.81 ± 0.03 while using the NCC and NMI measures resulted in a DSC of 0.78 ± 0.05 and 0.75 ± 0.04, respectively. 
Despite very long computation times, the extracted bone obtained from both SBA (DSC = 0.56 ± 0.05) and Hofmann's method (DSC = 0.60 ± 0.02) exhibited no improvement compared to non-weighted IA. Finding the optimum parameters for implementation of the atlas fusion approach, such as weighting factors and image similarity patch size, has a great impact on the performance of atlas-based segmentation approaches. The voxel-wise atlas fusion approach exhibited excellent performance in terms of cancelling out the non-systematic registration errors, leading to accurate and reliable segmentation results. Denoising and normalization of MR images together with optimization of the involved parameters play a key role in improving bone extraction accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
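The local (voxel-wise) weighted fusion strategy can be sketched as follows, with per-voxel weights derived from the squared intensity difference between each registered atlas and the target (a simplified stand-in for the patch-based MSD measure; the `beta` parameter is an assumption):

```python
import numpy as np

def local_weighted_fusion(target, atlas_images, atlas_labels, beta=1.0):
    """Voxel-wise label fusion: each registered atlas votes for its label
    with a weight that decays with the local intensity mismatch (an
    MSD-like similarity) between the atlas image and the target image."""
    weights = [np.exp(-beta * (a - target) ** 2) for a in atlas_images]
    num = sum(w * l for w, l in zip(weights, atlas_labels))
    den = sum(weights)
    return (num / den) > 0.5          # fused binary segmentation

target = np.array([0.0, 0.2, 0.9, 1.0])
atlas_images = [np.array([0.0, 0.1, 1.0, 1.0]),   # closely matched atlas
                np.array([0.9, 0.8, 0.1, 0.0])]   # poorly matched atlas
atlas_labels = [np.array([0, 0, 1, 1]),
                np.array([1, 1, 0, 0])]
fused = local_weighted_fusion(target, atlas_images, atlas_labels, beta=5.0)
print(fused.tolist())  # -> [False, False, True, True]
```

The well-matched atlas dominates at every voxel, which is how voxel-wise weighting suppresses non-systematic registration errors; setting all weights equal instead recovers plain majority voting.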
Numerical solution of quadratic matrix equations for free vibration analysis of structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1975-01-01
This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
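The paper's combined Sturm-sequence/inverse-iteration routine is not reproduced here, but the underlying quadratic eigenvalue problem can be illustrated via the standard companion linearization and a dense eigensolver (a sketch, not the authors' method):

```python
import numpy as np

def quadratic_eigenvalues(M, C, K):
    """Eigenvalues of the quadratic problem (lam^2 M + lam C + K) x = 0,
    via the first-order companion linearization z = [x, lam x], A z = lam z."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ C]])
    return np.linalg.eigvals(A)

# Damped 1-DOF oscillator: m lam^2 + c lam + k = 0
lams = quadratic_eigenvalues(np.array([[1.0]]),
                             np.array([[0.2]]),
                             np.array([[4.0]]))
exact = np.roots([1.0, 0.2, 4.0])
print(np.allclose(np.sort_complex(lams), np.sort_complex(exact)))  # -> True
```

The linearization doubles the matrix dimension, which is exactly the cost the Sturm-sequence/inverse-iteration approach avoids when only a few modes are required.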
Loss of Desmocollin 3 in Skin Tumor Development and Progression
Chen, Jiangli; O’Shea, Charlene; Fitzpatrick, James E.; Koster, Maranke I.; Koch, Peter J.
2011-01-01
Desmocollin 3 (DSC3) is a desmosomal cadherin that is required for maintaining cell adhesion in the epidermis as demonstrated by the intra-epidermal blistering observed in Dsc3 null skin. Recently, it has been suggested that deregulated expression of DSC3 occurs in certain human tumor types. It is not clear whether DSC3 plays a role in the development or progression of cancers arising in stratified epithelia such as the epidermis. To address this issue, we generated a mouse model in which Dsc3 expression is ablated in K-Ras oncogene-induced skin tumors. Our results demonstrate that loss of Dsc3 leads to an increase in K-Ras induced skin tumors. We hypothesize that acantholysis-induced epidermal hyperplasia in the Dsc3 null epidermis facilitates Ras-induced tumor development. Further, we demonstrate that spontaneous loss of DSC3 expression is a common occurrence during human and mouse skin tumor progression. This loss occurs in tumor cells invading the dermis. Interestingly, other desmosomal proteins are still expressed in tumor cells that lack DSC3, suggesting a specific function of DSC3 loss in tumor progression. While loss of DSC3 on the skin surface leads to epidermal blistering, it does not appear to induce loss of cell-cell adhesion in tumor cells invading the dermis, most likely due to a protection of these cells within the dermis from mechanical stress. We thus hypothesize that DSC3 can contribute to the progression of tumors both by cell adhesion-dependent (skin surface) and likely by cell adhesion-independent (invading tumor cells) mechanisms. PMID:21681825
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoot, A. J. A. J. van de, E-mail: a.j.schootvande@amc.uva.nl; Schooneveldt, G.; Wognum, S.
Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for following segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively.
Manual local adaptations improved the segmentation results significantly (p < 0.01) based on DSC (6.72%) and SD of contour-to-contour distances (0.08 cm) and decreased the 95% confidence intervals of the bladder volume differences. Moreover, expanding the shape model improved the segmentation results significantly (p < 0.01) based on DSC and SD of contour-to-contour distances. Conclusions: This patient-specific shape model based automatic bladder segmentation method on CBCT is accurate and generic. Our segmentation method only needs two pretreatment imaging data sets as prior knowledge, is independent of patient gender and patient treatment position, and allows the segmentation to be adapted manually at a local level.
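The overlap and volume measures used for validation in these segmentation studies can be written in a few lines; this is a generic sketch, not code from the study:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def relative_volume_difference(seg, ref):
    """Signed relative volume (voxel-count) difference vs. the reference."""
    return (seg.sum() - ref.sum()) / ref.sum()
```

For two masks covering 8 and 4 voxels with 4 voxels in common, the DSC is 2·4/(8+4) ≈ 0.67 and the relative volume difference is -0.5.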
Hayashi, Tetsutaro; Sentani, Kazuhiro; Oue, Naohide; Anami, Katsuhiro; Sakamoto, Naoya; Ohara, Shinya; Teishima, Jun; Noguchi, Tsuyoshi; Nakayama, Hirofumi; Taniyama, Kiyomi; Matsubara, Akio; Yasui, Wataru
2011-10-01
Urothelial carcinoma (UC) with squamous differentiation tends to present at higher stages than pure UC. To distinguish UC with squamous differentiation from pure UC, a sensitive and specific marker is needed. Desmocollin 2 (DSC2) is a protein localized in desmosomal junctions of stratified epithelium, but little is known about its biological significance in bladder cancer. We examined the utility of DSC2 as a diagnostic marker. We analysed the immunohistochemical characteristics of DSC2, and studied the relationship of DSC2 expression with the expression of the known markers uroplakin III (UPIII), cytokeratin (CK)7, CK20, epidermal growth factor receptor (EGFR), and p53. DSC2 staining was detected in 24 of 25 (96%) cases of UC with squamous differentiation, but in none of 85 (0%) cases of pure UC. DSC2 staining was detected only in areas of squamous differentiation. DSC2 expression was mutually exclusive of UPIII expression, and was correlated with EGFR expression. Furthermore, DSC2 expression was correlated with higher stage (P = 0.0314) and poor prognosis (P = 0.0477). DSC2 staining offers high sensitivity (96%) and high specificity (100%) for the detection of squamous differentiation in UC. DSC2 is a useful immunohistochemical marker for separation of UC with squamous differentiation from pure UC. 2011 Blackwell Publishing Limited.
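The reported 96% sensitivity and 100% specificity follow directly from the case counts in the abstract (24 of 25 squamous-differentiation cases positive; 0 of 85 pure UC cases positive); a trivial sketch:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 24 of 25 squamous-differentiation cases stained (TP=24, FN=1);
# 0 of 85 pure UC cases stained (TN=85, FP=0).
sens, spec = sensitivity_specificity(24, 1, 85, 0)
```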
A new approach for solving the three-dimensional steady Euler equations. I - General theory
NASA Technical Reports Server (NTRS)
Chang, S.-C.; Adamczyk, J. J.
1986-01-01
The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.
Iterative procedures for space shuttle main engine performance models
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1989-01-01
Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). The computational efficiency and reliability of these procedures are examined. A modified trust region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off-nominal engine performance predictions. A heuristic form of Broyden's Rank One method is also tested and favorable results based on this algorithm are presented.
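A minimal sketch of a trust-region-damped Newton-Raphson iteration of the kind described, where the Newton step is scaled back whenever it exceeds a trust radius. This is a heuristic stand-in, not the TIP implementation:

```python
import numpy as np

def newton_trust(f, jac, x0, radius=1.0, tol=1e-10, max_iter=50):
    """Multivariate Newton-Raphson with a simple trust-region step limit."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        step = np.linalg.solve(jac(x), -fx)
        n = np.linalg.norm(step)
        if n > radius:          # damp the step in strongly nonlinear regions
            step *= radius / n
        x = x + step
    return x
```

Solving x² - 2 = 0 from x₀ = 1.5 converges to √2 in a handful of iterations.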
Non-iterative volumetric particle reconstruction near moving bodies
NASA Astrophysics Data System (ADS)
Mendelson, Leah; Techet, Alexandra
2017-11-01
When multi-camera 3D PIV experiments are performed around a moving body, the body often obscures visibility of regions of interest in the flow field in a subset of cameras. We evaluate the performance of non-iterative particle reconstruction algorithms used for synthetic aperture PIV (SAPIV) in these partially-occluded regions. We show that when partial occlusions are present, the quality and availability of 3D tracer particle information depends on the number of cameras and reconstruction procedure used. Based on these findings, we introduce an improved non-iterative reconstruction routine for SAPIV around bodies. The reconstruction procedure combines binary masks, already required for reconstruction of the body's 3D visual hull, and a minimum line-of-sight algorithm. This approach accounts for partial occlusions without performing separate processing for each possible subset of cameras. We combine this reconstruction procedure with three-dimensional imaging on both sides of the free surface to reveal multi-fin wake interactions generated by a jumping archer fish. Sufficient particle reconstruction in near-body regions is crucial to resolving the wake structures of upstream fins (i.e., dorsal and anal fins) before and during interactions with the caudal tail.
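A simplified sketch of a minimum line-of-sight reconstruction with occlusion masks: each voxel takes the minimum back-projected intensity over the cameras in which it is not occluded by the body's visual hull, and voxels seen by too few cameras are discarded. The array layout and the minimum-camera threshold are assumptions for illustration, not the paper's exact routine:

```python
import numpy as np

def min_los(reprojections, visible, min_cameras=2):
    """Minimum line-of-sight intensity with per-camera occlusion masks.

    reprojections : (n_cam, ...) array of back-projected image intensities
    visible       : (n_cam, ...) boolean array, False where the body's
                    visual hull occludes the voxel in that camera
    """
    rep = np.where(visible, reprojections, np.inf)
    out = rep.min(axis=0)
    n_vis = visible.sum(axis=0)
    out[n_vis < min_cameras] = 0.0  # too few unoccluded views to reconstruct
    return out
```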
Clinical application of plasma thermograms. Utility, practical approaches and considerations.
Garbett, Nichola C; Mekmaysy, Chongkham S; DeLeeuw, Lynn; Chaires, Jonathan B
2015-04-01
Differential scanning calorimetry (DSC) studies of blood plasma are part of an emerging area of the clinical application of DSC to biofluid analysis. DSC analysis of plasma from healthy individuals and patients with various diseases has revealed changes in the thermal profiles of the major plasma proteins associated with the clinical status of the patient. The sensitivity of DSC to the concentration of proteins, their interactions with other proteins or ligands, or their covalent modification underlies the potential utility of DSC analysis. A growing body of literature has demonstrated the versatility and performance of clinical DSC analysis across a range of biofluids and in a number of disease settings. The principles, practice and challenges of DSC analysis of plasma are described in this article. Copyright © 2014 Elsevier Inc. All rights reserved.
Pyrolysis of reinforced polymer composites: Parameterizing a model for multiple compositions
NASA Astrophysics Data System (ADS)
Martin, Geraldine E.
A single set of material properties was developed to describe the pyrolysis of fiberglass reinforced polyester composites at multiple composition ratios. Milligram-scale testing was performed on the unsaturated polyester (UP) resin using thermogravimetric analysis (TGA) coupled with differential scanning calorimetry (DSC) to establish and characterize an effective semi-global reaction mechanism of three consecutive first-order reactions. Radiation-driven gasification experiments were conducted on UP resin and the fiberglass composites at compositions ranging from 41 to 54 wt% resin at external heat fluxes from 30 to 70 kW m⁻². The back surface temperature was recorded with an infrared camera and used as the target for inverse analysis to determine the thermal conductivity of the systematically isolated constituent species. Manual iterations were performed using a comprehensive pyrolysis model, ThermaKin. The complete set of properties was validated by its ability to reproduce the mass loss rate during gasification testing.
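A reaction scheme of consecutive first-order steps, as described above, can be sketched with a simple explicit-Euler integration. The rate constant here is an arbitrary placeholder, not the fitted UP-resin kinetics; in practice each rate constant follows an Arrhenius law k(T) = A·exp(-E/RT):

```python
import numpy as np

def simulate_reactions(k, y0, t_end, dt=1e-3):
    """Explicit-Euler integration of consecutive first-order reactions
    y[0] -> y[1] -> ... -> y[-1], where y[-1] collects the final residue."""
    y = np.array(y0, dtype=float)
    for _ in range(int(t_end / dt)):
        rates = k * y[:-1]      # first-order conversion rate of each species
        y[:-1] -= rates * dt    # each species loses mass to the next...
        y[1:] += rates * dt     # ...which gains exactly that mass
    return y
```

With a single reaction of unit rate, the remaining reactant after one time unit approaches exp(-1), and total mass is conserved.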
Chitaev, Nikolai A.; Troyanovsky, Sergey M.
1997-01-01
Human fibrosarcoma cells, HT-1080, feature extensive adherens junctions, lack mature desmosomes, and express a single known desmosomal protein, Desmoglein 2 (Dsg2). Transfection of these cells with bovine Desmocollin 1a (Dsc1a) caused dramatic changes in the subcellular distribution of endogenous Dsg2. Both cadherins clustered in the areas of the adherens junctions, whereas only a minor portion of Dsg2 was seen in these areas in the parental cells. Deletion mapping showed that intact extracellular cadherin-like repeats of Dsc1a (Arg1-Thr170) are required for the translocation of Dsg2. Deletion of the intracellular C-domain that mediates the interaction of Dsc1a with plakoglobin, or the CSI region that is involved in the binding to desmoplakin, had no effect. Coimmunoprecipitation experiments on lysates of cells stably expressing Dsc1a with anti-Dsc or -Dsg antibodies demonstrate that the desmosomal cadherins, Dsg2 and Dsc1a, are involved in a direct Ca2+-dependent interaction. This conclusion was further supported by the results of solid phase binding experiments. These showed that the Dsc1a fragment containing cadherin-like repeats 1 and 2 binds directly to the extracellular portion of Dsg in a Ca2+-dependent manner. The contribution of the Dsg/Dsc interaction to cell–cell adhesion was tested by coculturing HT-1080 cells expressing Dsc1a with HT-1080 cells lacking Dsc but expressing myc-tagged plakoglobin (MPg). In the latter cells, MPg and the endogenous Dsg form stable complexes. The observed specific coimmunoprecipitation of MPg by anti-Dsc antibodies in coculture indicates that an intercellular interaction between Dsc1 and Dsg is involved in cell–cell adhesion. PMID:9214392
Distinct Roles of the DmNav and DSC1 Channels in the Action of DDT and Pyrethroids
Rinkevich, Frank D.; Du, Yuzhe; Tolinski, Josh; Ueda, Atsushi; Wu, Chun-Fang; Zhorov, Boris S.; Dong, Ke
2015-01-01
Voltage-gated sodium channels (Nav channels) are critical for electrical signaling in the nervous system and are the primary targets of the insecticides DDT and pyrethroids. In Drosophila melanogaster, besides the canonical Nav channel, Para (also called DmNav), there is a sodium channel-like cation channel called DSC1 (Drosophila sodium channel 1). Temperature-sensitive paralytic mutations in DmNav (parats) confer resistance to DDT and pyrethroids, whereas DSC1 knockout flies exhibit enhanced sensitivity to pyrethroids. To further define the roles and interaction of DmNav and DSC1 channels in DDT and pyrethroid neurotoxicology, we generated a DmNav/DSC1 double mutant line by introducing a parats1 allele (carrying the I265N mutation) into a DSC1 knockout line. We confirmed that the I265N mutation reduced the sensitivity to two pyrethroids, permethrin and deltamethrin of a DmNav variant expressed in Xenopus oocytes. Computer modeling predicts that the I265N mutation confers pyrethroid resistance by allosterically altering the second pyrethroid receptor site on the DmNav channel. Furthermore, we found that I265N-mediated pyrethroid resistance in parats1 mutant flies was almost completely abolished in parats1;DSC1−/− double mutant flies. Unexpectedly, however, the DSC1 knockout flies were less sensitive to DDT, compared to the control flies (w1118A), and the parats1;DSC1−/− double mutant flies were even more resistant to DDT compared to the DSC1 knockout or parats1 mutant. Our findings revealed distinct roles of the DmNav and DSC1 channels in the neurotoxicology of DDT vs. pyrethroids and implicate the exciting possibility of using DSC1 channel blockers or modifiers in the management of pyrethroid resistance. PMID:25687544
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
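A plain maximum-likelihood cumulative-Gaussian fit of the kind discussed is sketched below for context; the paper's bias-reduction term itself is not reproduced here, and the function and parameter names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(stimuli, responses, mu0=0.0, sigma0=1.0):
    """Maximum-likelihood fit of a cumulative-Gaussian psychometric function
    p(positive response | s) = Phi((s - mu) / sigma) to binary data."""
    def nll(params):
        mu, log_sigma = params       # fit log-sigma to keep spread positive
        p = norm.cdf((stimuli - mu) / np.exp(log_sigma))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    res = minimize(nll, [mu0, np.log(sigma0)], method="Nelder-Mead")
    mu, log_sigma = res.x
    return mu, np.exp(log_sigma)
```

Fitting simulated responses generated from a unit-spread cumulative Gaussian recovers the generating parameters to within sampling error.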
Patterson, Dianne K.; Van Petten, Cyma; Beeson, Pélagie M.; Rapcsak, Steven Z.; Plante, Elena
2014-01-01
This paper introduces a Bidirectional Iterative Parcellation (BIP) procedure designed to identify the location and size of connected cortical regions (parcellations) at both ends of a white matter tract in diffusion weighted images. The procedure applies the FSL option “probabilistic tracking with classification targets” in a bidirectional and iterative manner. To assess the utility of BIP, we applied the procedure to the problem of parcellating a limited set of well-established gray matter seed regions associated with the dorsal (arcuate fasciculus/superior longitudinal fasciculus) and ventral (extreme capsule fiber system) white matter tracts in the language networks of 97 participants. These left hemisphere seed regions and the two white matter tracts, along with their right hemisphere homologues, provided an excellent test case for BIP because the resulting parcellations overlap and their connectivity via the arcuate fasciculi and extreme capsule fiber systems are well studied. The procedure yielded both confirmatory and novel findings. Specifically, BIP confirmed that each tract connects within the seed regions in unique, but expected ways. Novel findings included increasingly left-lateralized parcellations associated with the arcuate fasciculus/superior longitudinal fasciculus as a function of age and education. These results demonstrate that BIP is an easily implemented technique that successfully confirmed cortical connectivity patterns predicted in the literature, and has the potential to provide new insights regarding the architecture of the brain. PMID:25173414
NASA Astrophysics Data System (ADS)
Nielsen, Jens C. O.; Li, Xin
2018-01-01
An iterative procedure for numerical prediction of long-term degradation of railway track geometry (longitudinal level) due to accumulated differential settlement of ballast/subgrade is presented. The procedure is based on a time-domain model of dynamic vehicle-track interaction to calculate the contact loads between sleepers and ballast in the short-term, which are then used in an empirical model to determine the settlement of ballast/subgrade below each sleeper in the long-term. The number of load cycles (wheel passages) accounted for in each iteration step is determined by an adaptive step length given by a maximum settlement increment. To reduce the computational effort for the simulations of dynamic vehicle-track interaction, complex-valued modal synthesis with a truncated modal set is applied for the linear subset of the discretely supported track model with non-proportional spatial distribution of viscous damping. Gravity loads and state-dependent vehicle, track and wheel-rail contact conditions are accounted for as external loads on the modal model, including situations involving loss of (and recovered) wheel-rail contact, impact between hanging sleeper and ballast, and/or a prescribed variation of non-linear track support stiffness properties along the track model. The procedure is demonstrated by calculating the degradation of longitudinal level over time as initiated by a prescribed initial local rail irregularity (dipped welded rail joint).
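The adaptive step-length idea above can be sketched as an outer loop that advances by as many load cycles as keeps the largest settlement increment below a cap. The load and settlement-rate functions below are toy stand-ins, not the paper's dynamic vehicle-track or empirical settlement models:

```python
import numpy as np

def simulate_settlement(initial_profile, loads_fn, rate_fn, n_total,
                        max_increment=0.5):
    """Iterate short-term load calculation and long-term settlement with an
    adaptive step length (cycles per iteration) capped by `max_increment`."""
    profile = np.asarray(initial_profile, dtype=float)
    n_done = 0
    while n_done < n_total:
        loads = loads_fn(profile)        # short-term dynamic interaction
        rates = rate_fn(loads)           # empirical settlement per cycle
        r = rates.max()
        dn = (n_total - n_done if r <= 0
              else min(n_total - n_done, max(1, int(max_increment / r))))
        profile = profile + rates * dn   # accumulate differential settlement
        n_done += dn
    return profile

# Toy stand-ins (assumed, not the paper's models): loads redistribute with
# the profile shape; settlement rate is proportional to load.
loads_fn = lambda p: 100.0 + 50.0 * (p - p.mean())
rate_fn = lambda L: 1e-4 * L / 100.0
```

With a uniform initial profile the toy model settles uniformly by rate × cycles.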
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-02-01
The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used. Copyright © 2016 Elsevier B.V. All rights reserved.
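The SIRT update itself is compact; a minimal dense-matrix sketch is shown below (real implementations use sparse projection operators, and the paper's contribution is choosing the iteration count, which this sketch leaves to the caller):

```python
import numpy as np

def sirt(A, b, n_iter):
    """SIRT iterations for Ax ~ b, with A a nonnegative projection matrix
    (rays x voxels) and b the measured projections."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))    # simultaneous update
    return x
```

For an identity projection matrix the iteration reproduces the data exactly.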
NASA Astrophysics Data System (ADS)
Schnepp, Elisabeth; Leonhardt, Roman
2014-05-01
The domain-state corrected multiple-specimen paleointensity determination technique (MSP-DSC; Fabian & Leonhardt, EPSL 297, 84, 2010) has been tested for archaeological baked clays and bricks. The following procedure was applied: (1) exclusion of secondary overprints using alternating field (AF) or thermal demagnetization and assignment of the characteristic remanent magnetization (ChRM) direction; (2) determination of magneto-mineralogical alteration using anhysteretic remanent magnetization (ARM) or the temperature dependence of susceptibility; (3) measurement of the ARM anisotropy tensor and calculation of the ancient magnetic field direction; (4) sister specimens were subjected to the MSP-DSC technique aligned (anti-)parallel to the ancient magnetic field direction; (5) several checks were applied in order to exclude data points from further evaluation: (a) the accuracy of orientation (< 10°), (b) absence of secondary components (< 10°), (c) use of a considerable NRM fraction (20 to 80%), (d) weak alteration (smaller than the domain-state change), and finally (e) the domain-state correction was applied. Bricks and baked clays from archaeological sites with ages between 645 BC and 2003 AD have been subjected to MSP-DSC absolute paleointensity (PI) determination. The aims of the study are to check the precision and reliability of the method. The obtained PI values are compared with direct field observations, the IGRF, the GUFM1 model or Thellier results. The Thellier experiments often show curved lines, and pTRM checks fail at higher temperatures. Nevertheless, in the low-temperature range straight lines have been obtained, but they provide scattered paleointensity values. Mean paleointensities have relative errors often exceeding 10% and are therefore not considered high-quality PI estimates. MSP-DSC experiments for the structures older than 300 years are still in progress.
The paleointensities obtained from the MSP-DSC experiments for the young materials (after 1700 AD) have small relative errors of a few per cent or even less than one per cent, although the data points are scattered in some cases. For these sites, comparison with the historical field values shows very good agreement. Small deviations could be explained by the higher cooling rates used in the laboratory. These young structures were made of bricks, and the unweathered baked clay of the 2003 experimental kiln was brick-like as well. The sites provided ample material, so tests were done to investigate the MSP-DSC methodology further. For example, it was tested whether different NRM deblocking fractions influence the paleointensity estimate. It seems that using fractions lower than 20% of the NRM can lead to an underestimation of the PI. Although MSP-DSC experiments carried out on different blocks of the same structure can provide very similar results, the use of several fragments from at least five different units (potsherds, bricks, in situ burnt blocks or rocks) of the same structure is recommended in order to obtain a reliable estimate of the experimental errors. Five data points may already define a well-constrained straight line, but for better precision (< 2%) 15 data points may be required. For the young structures, the MSP-DSC method provided reliable PI estimates, which have been included in the archaeointensity database.
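In multiple-specimen protocols of this kind, the paleointensity is read off where a straight line fitted through the per-specimen data points crosses zero, i.e. where the laboratory field reproduces the natural remanence. A minimal sketch of that final line fit (the measure Q and the field values below are illustrative, not data from the study):

```python
import numpy as np

def msp_paleointensity(h_lab, q):
    """Fit a straight line Q(H_lab) through multiple-specimen data points
    and return its zero crossing as the paleointensity estimate."""
    slope, intercept = np.polyfit(h_lab, q, 1)
    return -intercept / slope
```

For synthetic points lying on a line that crosses zero at 50 µT, the estimate is 50.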
Code of Federal Regulations, 2010 CFR
2010-10-01
... calling (DSC) equipment has been verified by actual communications or a test call; (ii) The portable... devices which do not have integral navigation receivers, including: VHF DSC, MF DSC, satellite EPIRB and HF DSC or INMARSAT SES. On a ship without integral or directly connected navigation receiver input to...
Code of Federal Regulations, 2011 CFR
2011-10-01
... calling (DSC) equipment has been verified by actual communications or a test call; (ii) The portable... devices which do not have integral navigation receivers, including: VHF DSC, MF DSC, satellite EPIRB and HF DSC or INMARSAT SES. On a ship without integral or directly connected navigation receiver input to...
Process Simulation and Modeling for Advanced Intermetallic Alloys.
1994-06-01
Differential scanning calorimetry, using a Stanton Redcroft/Omnitherm DSC 1500 thermal analysis system, was the primary experimental tool for this investigation... samples during both heating and cooling in a high-purity argon atmosphere at a rate of 20 K/min. The DSC instrumental baseline was obtained using both empty... that is capable of fitting the observed data to given cell structures using a least-squares procedure. RESULTS The results of the DSC observations are
NASA Astrophysics Data System (ADS)
Klimina, L. A.
2018-05-01
A modification of the Picard approach is suggested, targeted at the construction of a bifurcation diagram of 2π-periodic motions of a mechanical system with a cylindrical phase space. Each iterative step is based on principles of averaging and energy balance similar to the Poincare-Pontryagin approach. If the iterative procedure converges, it provides the periodic trajectory of the system as a function of the bifurcation parameter of the model. The method is applied to describe self-sustained rotations in a model of an aerodynamic pendulum.
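For context, the classical Picard scheme that is being modified here iterates the integral form of the equation of motion; a plain sketch (without the averaging/energy-balance modification) on a time grid with trapezoidal quadrature:

```python
import numpy as np

def picard(f, x0, t, n_iter=20):
    """Classical Picard iteration x_{k+1}(t) = x0 + int_0^t f(s, x_k(s)) ds,
    evaluated on a grid with cumulative trapezoidal quadrature."""
    x = np.full_like(t, x0, dtype=float)
    for _ in range(n_iter):
        fx = f(t, x)
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (fx[1:] + fx[:-1]) * np.diff(t))))
        x = x0 + integral
    return x
```

For the test problem x' = x, x(0) = 1, the iterates converge to the exponential, so x(1) ≈ e.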
47 CFR 80.359 - Frequencies for digital selective calling (DSC).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 5 2011-10-01 2011-10-01 false Frequencies for digital selective calling (DSC... for digital selective calling (DSC). (a) General purpose calling. The following table describes the calling frequencies for use by authorized ship and coast stations for general purpose DSC. There are three...
Gas Flows in Rocket Motors. Volume 2. Appendix C. Time Iterative Solution of Viscous Supersonic Flow
1989-08-01
Keywords: nozzle analysis, Navier-Stokes, turbulent flow, equilibrium chemistry. Abstract (fragment): ... quasi-conservative formulations lead to unacceptably large mass conservation errors. Along with the investigations of Navier-Stokes algorithms... Contents (fragment): ... Characteristics Splitting; 4.2.3 Non-Iterative PNS Procedure; 4.2.4 Comparisons of
Acoustic scattering by arbitrary distributions of disjoint, homogeneous cylinders or spheres.
Hesford, Andrew J; Astheimer, Jeffrey P; Waag, Robert C
2010-05-01
A T-matrix formulation is presented to compute acoustic scattering from arbitrary, disjoint distributions of cylinders or spheres, each with arbitrary, uniform acoustic properties. The generalized approach exploits the similarities in these scattering problems to present a single system of equations that is easily specialized to cylindrical or spherical scatterers. By employing field expansions based on orthogonal harmonic functions, continuity of pressure and normal particle velocity are directly enforced at each scatterer using diagonal, analytic expressions to eliminate the need for integral equations. The effect of a cylinder or sphere that encloses all other scatterers is simulated with an outer iterative procedure that decouples the inner-object solution from the effect of the enclosing object to improve computational efficiency when interactions among the interior objects are significant. Numerical results establish the validity and efficiency of the outer iteration procedure for nested objects. Two- and three-dimensional methods that employ this outer iteration are used to measure and characterize the accuracy of two-dimensional approximations to three-dimensional scattering of elevation-focused beams.
Design and long-term monitoring of DSC/CIGS tandem solar module
NASA Astrophysics Data System (ADS)
Vildanova, M. F.; Nikolskaia, A. B.; Kozlov, S. S.; Shevaleevskiy, O. I.
2015-11-01
This paper describes the design and development of tandem dye-sensitized/Cu(In,Ga)Se (DSC/CIGS) PV modules. The tandem PV module comprised a top DSC module and a bottom commercial 0.8 m² CIGS module. The top DSC module was made of 10 DSC mini-modules with a field size of 20 × 20 cm² each. Tandem DSC/CIGS PV modules were used for long-term monitoring of the energy yield and electrical parameters in comparison with standalone CIGS modules under outdoor conditions. The outdoor test facility, containing solar modules of both types and a measurement unit, was located on the roof of the Institute of Biochemical Physics in Moscow. The data obtained during monitoring over 2014 have shown the advantages of the designed tandem DSC/CIGS PV modules over the conventional CIGS modules, especially under cloudy weather and low-intensity irradiation conditions.
Mani, Narasimhan; Park, M O; Jun, H W
2005-01-01
Sustained-release wax microspheres of guaifenesin, a highly water-soluble drug, were prepared by the hydrophobic congealable disperse method using a salting-out procedure. The effects of formulation variables on the loading efficiency, particle properties, and in-vitro drug release from the microspheres were determined. The type of dispersant, the amount of wetting agent, and initial stirring time used affected the loading efficiency, while the volume of external phase and emulsification speed affected the particle size of the microspheres to a greater extent. The crystal properties of the drug in the wax matrix and the morphology of the microspheres were studied by differential scanning calorimetry (DSC), powder x-ray diffraction (XRD), and scanning electron microscopy (SEM). The DSC thermograms of the microspheres showed that the drug lost its crystallinity during the microencapsulation process, which was further confirmed by the XRD data. The electron micrographs of the drug-loaded microspheres showed well-formed spherical particles with a rough exterior.
Effect of radiation induced crosslinking and degradation of ETFE films
NASA Astrophysics Data System (ADS)
Zen, H. A.; Ribeiro, G.; Geraldes, A. N.; Souza, C. P.; Parra, D. F.; Lugão, A. B.
2013-03-01
In this study, ETFE films 125 μm thick were placed inside nylon bags filled with either acetylene, nitrogen or oxygen. The samples were then irradiated at 5, 10 and 20 kGy. The physical and chemical properties of the modified and pristine films were evaluated by rheological and thermal analyses (TG and DSC), X-ray diffraction (XRD) and infrared spectroscopy (IR-ATR). In the rheological analysis, the storage modulus (G') showed opposite profiles for the acetylene and oxygen atmospheres as a function of absorbed dose. For the samples irradiated under an oxygen atmosphere, the degradation process is evident from the low values of the storage modulus. Changes in the degree of crystallinity were observed in all modified samples compared with the pristine polymer, and this behavior was confirmed by DSC analysis. A decrease in the intensity of the crystalline peak was also observed by X-ray diffraction.
Czochralski growth of LaPd2Al2 single crystals
NASA Astrophysics Data System (ADS)
Doležal, P.; Rudajevová, A.; Vlášková, K.; Kriegner, D.; Václavová, K.; Prchal, J.; Javorský, P.
2017-10-01
The present study is focused on the preparation of single crystalline LaPd2Al2 by the Czochralski method. Differential scanning calorimetry (DSC) and energy dispersive X-ray spectroscopy (EDX) analyses reveal that LaPd2Al2 is an incongruently melting phase, which complicates the preparation of single crystalline LaPd2Al2 by the Czochralski method. Several non-stoichiometric polycrystalline samples were therefore studied as starting materials. Finally, the successful growth of LaPd2Al2 without foreign phases was achieved by using a non-stoichiometric precursor with atomic composition 22:39:39 (La:Pd:Al). X-ray powder diffraction, EDX analysis and DSC were used for the characterisation. A single crystalline sample was separated from the ingot prepared by the Czochralski method using the non-stoichiometric precursor. The presented procedure for the preparation of pure single-phase LaPd2Al2 could be modified for other incongruently melting phases.
NASA Astrophysics Data System (ADS)
Chew, J. V. L.; Sulaiman, J.
2017-09-01
Partial differential equations that describe nonlinear heat and mass transfer phenomena are difficult to solve. Where the exact solution is difficult to obtain, it is necessary to use a numerical procedure such as the finite difference method to solve the partial differential equation. In terms of numerical procedures, a method can be considered efficient if it gives an approximate solution within the specified error with the least computational complexity. Throughout this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized by using the implicit finite difference scheme to construct the corresponding approximation equation. This approximation equation yields a large and sparse nonlinear system. Using the Newton method to linearize the nonlinear system, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving 2D PMEs. The efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For the comparative analysis, the Newton-Gauss-Seidel (NGS) and the Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations to reach converged solutions, the computation time and the maximum absolute errors produced.
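The outer-Newton/inner-relaxation structure described above can be sketched as follows. This is a minimal illustration with a toy 2x2 nonlinear system standing in for the discretised PME equations, and a plain Gauss-Seidel inner solver standing in for the 4NEGSOR scheme; all names and values are illustrative, not the authors' code.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-12, max_sweeps=500):
    """Inner linear solver: classical Gauss-Seidel sweeps."""
    x = x0.copy()
    for _ in range(max_sweeps):
        x_old = x.copy()
        for i in range(len(b)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

def newton_gs(F, J, u0, tol=1e-10, max_outer=50):
    """Outer Newton iteration: each linearised system J(u) du = -F(u)
    is solved by the inner Gauss-Seidel routine (an inexact Newton scheme)."""
    u = u0.copy()
    iters = 0
    for _ in range(max_outer):
        r = F(u)
        if np.linalg.norm(r, np.inf) < tol:
            break
        u = u + gauss_seidel(J(u), -r, np.zeros_like(u))
        iters += 1
    return u, iters

# Toy nonlinear system standing in for the discretised PME:
#   u0**2 + u1 = 3,   u0 + u1**2 = 5   (one solution is u = (1, 2))
F = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
J = lambda u: np.array([[2 * u[0], 1.0], [1.0, 2 * u[1]]])
u, iters = newton_gs(F, J, np.array([2.0, 1.0]))
```

The outer loop supplies quadratic convergence once near the root, while the inner relaxation avoids forming and inverting the Jacobian directly, which is the motivation for Newton-SOR-type schemes on large sparse systems.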
Distinct roles of the DmNav and DSC1 channels in the action of DDT and pyrethroids.
Rinkevich, Frank D; Du, Yuzhe; Tolinski, Josh; Ueda, Atsushi; Wu, Chun-Fang; Zhorov, Boris S; Dong, Ke
2015-03-01
Voltage-gated sodium channels (Nav channels) are critical for electrical signaling in the nervous system and are the primary targets of the insecticides DDT and pyrethroids. In Drosophila melanogaster, besides the canonical Nav channel, Para (also called DmNav), there is a sodium channel-like cation channel called DSC1 (Drosophila sodium channel 1). Temperature-sensitive paralytic mutations in DmNav (para(ts)) confer resistance to DDT and pyrethroids, whereas DSC1 knockout flies exhibit enhanced sensitivity to pyrethroids. To further define the roles and interaction of DmNav and DSC1 channels in DDT and pyrethroid neurotoxicology, we generated a DmNav/DSC1 double mutant line by introducing a para(ts1) allele (carrying the I265N mutation) into a DSC1 knockout line. We confirmed that the I265N mutation reduced the sensitivity to two pyrethroids, permethrin and deltamethrin of a DmNav variant expressed in Xenopus oocytes. Computer modeling predicts that the I265N mutation confers pyrethroid resistance by allosterically altering the second pyrethroid receptor site on the DmNav channel. Furthermore, we found that I265N-mediated pyrethroid resistance in para(ts1) mutant flies was almost completely abolished in para(ts1);DSC1(-/-) double mutant flies. Unexpectedly, however, the DSC1 knockout flies were less sensitive to DDT, compared to the control flies (w(1118A)), and the para(ts1);DSC1(-/-) double mutant flies were even more resistant to DDT compared to the DSC1 knockout or para(ts1) mutant. Our findings revealed distinct roles of the DmNav and DSC1 channels in the neurotoxicology of DDT vs. pyrethroids and implicate the exciting possibility of using DSC1 channel blockers or modifiers in the management of pyrethroid resistance. Copyright © 2015 Elsevier Inc. All rights reserved.
Leno-Durán, E; Ruiz-Magaña, M J; Muñoz-Fernández, R; Requena, F; Olivares, E G; Ruiz-Ruiz, C
2014-10-10
Is there a relationship between decidualization and apoptosis of decidual stromal cells (DSC)? Decidualization triggers the secretion of soluble factors that induce apoptosis in DSC. The differentiation and apoptosis of DSC during decidualization of the receptive decidua are crucial processes for the controlled invasion of trophoblasts in normal pregnancy. Most DSC regress in a time-dependent manner, and their removal is important to provide space for the embryo to grow. However, the mechanism that controls DSC death is poorly understood. The apoptotic response of DSC was analyzed after exposure to different exogenous agents and during decidualization. The apoptotic potential of decidualized DSC supernatants and prolactin (PRL) was also evaluated. DSC lines were established from samples of decidua from first trimester pregnancies. Apoptosis was assayed by flow cytometry. PRL production, as a marker of decidualization, was determined by enzyme-linked immunosorbent assay. DSCs were resistant to a variety of apoptosis-inducing substances. Nevertheless, DSC underwent apoptosis during decidualization in culture, with cAMP being essential for both apoptosis and differentiation. In addition, culture supernatants from decidualized DSC induced apoptosis in undifferentiated DSC, although paradoxically these supernatants decreased the spontaneous apoptosis of decidual lymphocytes. Exogenously added PRL did not induce apoptosis in DSC and an antibody that neutralized the PRL receptor did not decrease the apoptosis induced by supernatants. Further studies are needed to examine the involvement of other soluble factors secreted by decidualized DSC in the induction of apoptosis. The present results indicate that apoptosis of DSC occurs in parallel to differentiation, in response to decidualization signals, with soluble factors secreted by decidualized DSC being responsible for triggering cell death. 
These studies are relevant in the understanding of how the regression of decidua, a crucial process for successful pregnancy, takes place. This work was supported by the Consejería de Economía, Innovación y Ciencia, Junta de Andalucía (Grant CTS-6183, Proyectos de Investigación de Excelencia 2010 to C.R.-R.) and the Instituto de Salud Carlos III, Ministerio de Economía y Competitividad, Spain (Grants PS09/00339 and PI12/01085 to E.G.O.). E.L.-D. was supported by fellowships from the Ministerio de Educación y Ciencia, Spain and the University of Granada. The authors have no conflict of interest.
Enhancement of event related potentials by iterative restoration algorithms
NASA Astrophysics Data System (ADS)
Pomalaza-Raez, Carlos A.; McGillem, Clare D.
1986-12-01
An iterative procedure for the restoration of event related potentials (ERP) is proposed and implemented. The method makes use of assumed or measured statistical information about latency variations in the individual ERP components. The signal model used for the restoration algorithm consists of a time-varying linear distortion and a positivity/negativity constraint. Additional preprocessing in the form of low-pass filtering is needed in order to mitigate the effects of additive noise. Numerical results obtained with real data clearly show the presence of enhanced and regenerated components in the restored ERPs. The procedure is easy to implement, which makes it convenient compared to other proposed techniques for the restoration of ERP signals.
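The kind of constrained iterative restoration described above can be sketched, under simplifying assumptions, as a projected-Landweber loop: a gradient step on the data-fit term followed by projection onto the positivity constraint. The time-invariant blur and all names below are illustrative stand-ins, not the authors' time-varying ERP model.

```python
import numpy as np

def projected_landweber(H, y, n_iter=2000, step=None):
    """Iterative restoration of y = H x under a positivity constraint:
    a gradient step on ||y - H x||^2 followed by projection onto x >= 0."""
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2) ** 2  # guarantees convergence
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = x + step * (H.T @ (y - H @ x))  # gradient (restoration) step
        x = np.maximum(x, 0.0)              # positivity constraint
    return x

# Toy example: blur a nonnegative spike train with a known circulant
# kernel, then restore it.
n = 32
H = np.zeros((n, n))
for i in range(n):
    for k, c in zip((-1, 0, 1), (0.2, 0.6, 0.2)):
        H[i, (i + k) % n] = c
x_true = np.zeros(n)
x_true[8], x_true[20] = 1.0, 2.0
y = H @ x_true
x_hat = projected_landweber(H, y)
```

The positivity projection plays the same regularising role as the sign constraint in the abstract: it restricts the iterates to a convex set containing the true signal.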
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
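The steepest-ascent solution of the likelihood equations can be sketched as follows; the toy normal-mean example (where the MLE is the sample mean, so the result can be checked in closed form) and all names are illustrative, not Walker's mixture setting.

```python
import numpy as np

def mle_steepest_ascent(grad_loglik, theta0, step, tol=1e-10, max_iter=10000):
    """Successive-approximation (steepest ascent) solution of the
    likelihood equation: follow the gradient of the log-likelihood
    until the update is negligible."""
    theta = theta0
    for _ in range(max_iter):
        theta_new = theta + step * grad_loglik(theta)
        if abs(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta

# Toy case: mean of a normal sample with known unit variance; here the
# MLE is the sample mean, so the iteration can be checked in closed form.
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=500)
grad = lambda mu: np.sum(x - mu)      # d/dmu of the log-likelihood
mu_hat = mle_steepest_ascent(grad, theta0=0.0, step=1e-3)
```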
[Urologic surgical procedures in patients with uterus neoplasm and colon-rectal cancer].
Marino, G; Laudi, M; Capussotti, L; Zola, P
2008-01-01
INTRODUCTION. Over the last 30 years, multidisciplinary treatment of colon and uterus neoplasms has increased total survival rates, and therefore the number of cases with regional relapse involving the urinary tract. In these cases iterative surgery can be performed, either with curative intent when no secondary disease is present or as a palliative procedure (pelvic pain relief, haemostasis or debulking), and must be considered and discussed with the patient according to his/her general status. MATERIALS AND METHODS. From 1997 to August 2007 we performed 43 pelvic iterative surgeries with simultaneous urologic procedures for pelvic tumor relapse in patients with uterus neoplasm and colorectal cancer. In 4 cases of anal cancer, the urological procedures were: one radical prostatectomy with continent vesicostomy, and in the other 3 cases radical pelvectomy with double-barrelled uretero-cutaneostomy. In 23 cases of colon cancer, the urologic procedures were: 9 radical cystectomies with double-barrelled uretero-cutaneostomy, 4 radical cystectomies with uretero-ileo-cutaneostomy according to the Bricker-Wallace II procedure, and 9 partial cystectomies with pelvic ureterectomy and ureterocystoneostomy according to the Lich-Gregoire technique (7 cases) or the Lembo-Boari procedure (2 cases). In 16 cases of uterus cancer, the urological procedures were: 7 partial cystectomies with pelvic ureterectomy and uretero-cystoneostomy according to the Lich-Gregoire procedure; 3 radical cystectomies with continent cutaneous urinary diversion according to the ileal T-pouch procedure; 2 total pelvectomies with double uretero-cutaneostomy; and 4 bilateral uretero-cutaneostomies. RESULTS. No patients died perioperatively; early systemic complications were 2 cases of esophageal candidiasis and 1 case of venous thrombosis. CONCLUSIONS.
Iterative pelvic surgery for oncological relapse involving the urinary tract aims to achieve the best quality of life with the utmost oncological radicality. Combining eradication of the pelvic neoplasm with urinary tract reconstruction and an acceptable quality of life will be the future target; nevertheless, it is not possible to establish guidelines beforehand, and the therapy must be adapted to each single case.
Flexible Method for Developing Tactics, Techniques, and Procedures for Future Capabilities
2009-02-01
levels of ability, military experience, and motivation, (b) number and type of significant events, and (c) other sources of natural variability...research has developed a number of specific instruments designed to aid in this process. Second, the iterative, feed-forward nature of the method allows...FLEX method), but still lack the structured KE approach and iterative, feed-forward nature of the FLEX method. To facilitate decision making
Improving cluster-based missing value estimation of DNA microarray data.
Brás, Lígia P; Menezes, José C
2007-06-01
We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows the MV estimates to be refined. More importantly, IKNNimpute has a smaller detrimental effect on the detection of differentially expressed genes.
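The iterative reuse of freshly imputed values can be sketched as follows. This is a simplified illustration of the IKNN idea, not the authors' IKNNimpute code; the choice of k, the number of rounds, the weighting scheme and the toy matrix are all illustrative assumptions.

```python
import numpy as np

def iknn_impute(X, k=2, n_rounds=5):
    """Iterative KNN imputation sketch: start from column means, then
    repeatedly re-estimate each missing entry from the k nearest rows,
    reusing the freshly imputed values in the next round's distances."""
    X = X.astype(float).copy()
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = col_means[np.where(missing)[1]]  # initial fill
    for _ in range(n_rounds):
        filled = X.copy()
        for i in np.unique(np.where(missing)[0]):
            d = np.linalg.norm(X - X[i], axis=1)
            d[i] = np.inf                   # exclude the row itself
            nbrs = np.argsort(d)[:k]
            w = 1.0 / (d[nbrs] + 1e-12)     # inverse-distance weights
            for j in np.where(missing[i])[0]:
                filled[i, j] = np.average(X[nbrs, j], weights=w)
        X = filled
    return X

# Toy data: rows follow a linear pattern, one entry removed.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [2.1, 4.2, np.nan],
              [3.0, 6.0, 9.0]])
X_hat = iknn_impute(X)
```

Because the distances in later rounds use the already-refined estimates, the imputed entry drifts from the crude column-mean fill towards a value consistent with its nearest rows, which is the effect the abstract attributes to the iterative procedure.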
Investigation of a Parabolic Iterative Solver for Three-dimensional Configurations
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Watson, Willie R.; Mani, Ramani
2007-01-01
A parabolic iterative solution procedure is investigated that seeks to extend the parabolic approximation used within the internal propagation module of the duct noise propagation and radiation code CDUCT-LaRC. The governing convected Helmholtz equation is split into a set of coupled equations governing propagation in the positive and negative directions. The proposed method utilizes an iterative procedure to solve the coupled equations in an attempt to account for possible reflections from internal bifurcations, impedance discontinuities, and duct terminations. A geometry consistent with the NASA Langley Curved Duct Test Rig is considered and the effects of acoustic treatment and non-anechoic termination are included. Two numerical implementations are studied and preliminary results indicate that improved accuracy in predicted amplitude and phase can be obtained for modes at a cut-off ratio of 1.7. Further predictions for modes at a cut-off ratio of 1.1 show improvement in predicted phase at the expense of increased amplitude error. Possible methods of improvement are suggested based on analytic and numerical analysis. It is hoped that coupling the parabolic iterative approach with less efficient, high fidelity finite element approaches will ultimately provide the capability to perform efficient, higher fidelity acoustic calculations within complex 3-D geometries for impedance eduction and noise propagation and radiation predictions.
Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model
NASA Astrophysics Data System (ADS)
Williamson, Daniel B.; Blaker, Adam T.; Sinha, Bablu
2017-04-01
In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though it is also known as history matching), avoids many of the common pitfalls of automatic tuning procedures that are based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. This avoidance comes from seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO (Nucleus for European Modelling of the Ocean) ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceed 10 standard deviations away from observations, and the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low-resolution ensembles to tune NEMO ORCA configurations at higher resolutions.
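The ruling-out step at the heart of iterative refocussing can be sketched with a standard implausibility measure: parameter settings whose standardised distance from the observation exceeds a cutoff (conventionally about 3) are discarded rather than fitted. The one-parameter toy "model", the emulator stand-ins and the threshold below are illustrative assumptions, not the NEMO setup.

```python
import numpy as np

def implausibility(theta, emu_mean, emu_var, obs, obs_var, disc_var):
    """History-matching implausibility: standardised distance between the
    observation and the emulator prediction at parameter setting theta.
    Settings with I(theta) above ~3 are ruled out rather than fitted."""
    total_var = emu_var(theta) + obs_var + disc_var
    return np.abs(obs - emu_mean(theta)) / np.sqrt(total_var)

# One wave of refocussing on a toy one-parameter 'model' f(theta) = theta**2
emu_mean = lambda t: t**2        # stand-in for a fitted emulator mean
emu_var = lambda t: 0.01         # stand-in for emulator uncertainty
obs, obs_var, disc_var = 4.0, 0.04, 0.01
candidates = np.linspace(0.0, 4.0, 401)
I = implausibility(candidates, emu_mean, emu_var, obs, obs_var, disc_var)
not_ruled_out = candidates[I < 3.0]   # the space kept for the next wave
```

Each wave re-runs the model only in the surviving region and refits the emulator there, which is why the procedure avoids over-tuning: it never claims a single "best" parameter setting, only a not-yet-ruled-out set.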
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes, as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLR) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.
A fast method to emulate an iterative POCS image reconstruction algorithm.
Zeng, Gengsheng L
2017-10-01
Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, the iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
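The alternation between a data-fidelity step and a nonlinear filtering step can be sketched as a generic POCS-style loop. Here a simple 1D inpainting problem stands in for CT reconstruction, resetting the known samples stands in for the windowed-FBP data-fidelity step, and a 3-point median filter stands in for the edge-preserving filter; all of this is an illustrative sketch, not the paper's algorithm.

```python
import numpy as np

def pocs_restore(y, mask, n_iter=100):
    """Generic POCS-style loop: alternate a nonlinear filtering step
    (3-point median, standing in for the edge-preserving filter) with a
    data-fidelity step (known samples reset, standing in for the
    windowed-FBP step)."""
    x = np.where(mask, y, 0.0)
    for _ in range(n_iter):
        xp = np.pad(x, 1, mode='edge')
        x = np.median(np.stack([xp[:-2], xp[1:-1], xp[2:]]), axis=0)
        x[mask] = y[mask]   # data fidelity: known samples kept exact
    return x

# Toy example: a piecewise-constant signal observed at ~60% of samples.
rng = np.random.default_rng(1)
truth = np.repeat([0.0, 1.0, 0.0], 10)
mask = rng.random(truth.size) < 0.6
y = np.where(mask, truth, 0.0)
x_hat = pocs_restore(y, mask)
```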
NASA Astrophysics Data System (ADS)
Wang, Shujun; Zhang, Xiu; Wang, Shuo; Copeland, Les
2016-06-01
A thorough understanding of starch gelatinization is extremely important for precise control of starch functional properties for food processing and human nutrition. Here we reveal the molecular mechanism of starch gelatinization by differential scanning calorimetry (DSC) in conjunction with a protocol using the rapid viscosity analyzer (RVA) to generate material for analysis under conditions that simulated the DSC heating profiles. The results from DSC, FTIR, Raman, X-ray diffraction and small angle X-ray scattering (SAXS) analyses all showed that residual structural order remained in starch that was heated to the DSC endotherm end temperature in starch:water mixtures of 0.5 to 4:1 (v/w). We conclude from this study that the DSC endotherm of starch at a water:starch ratio of 2 to 4 (v/w) does not represent complete starch gelatinization. The DSC endotherm of starch involves not only the water uptake and swelling of amorphous regions, but also the melting of starch crystallites.
NASA Astrophysics Data System (ADS)
Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.
2017-09-01
The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix-based methods, such as the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, incur computational costs that are too large in terms of both memory and execution time. We present a novel iterative algorithm, Boostconv, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step, with a projection basis updated at each iteration rather than at periodic restarts as in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number.
We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.
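The residual-minimisation idea can be sketched as a small Anderson-type acceleration of a fixed-point iteration: store the last few iterates and residuals, then take the affine combination of stored iterates that minimises the residual norm. This is a minimal sketch in the same spirit, not the authors' Boostconv implementation; the toy Richardson relaxation and all names are illustrative.

```python
import numpy as np

def accelerated_fixed_point(g, x0, m=5, n_iter=50):
    """Residual-minimisation acceleration of a fixed-point iteration
    x <- g(x): the combination weights minimise the norm of the
    combined residual subject to summing to one."""
    X, R = [], []
    x = x0
    for _ in range(n_iter):
        gx = g(x)
        r = gx - x
        if np.linalg.norm(r) < 1e-12:
            return gx
        X.append(gx); R.append(r)
        X, R = X[-m:], R[-m:]           # keep only the last m vectors
        if len(R) > 1:
            Rm = np.column_stack(R)
            # weights w minimising ||Rm w|| subject to sum(w) = 1
            G = Rm.T @ Rm + 1e-14 * np.eye(Rm.shape[1])
            w = np.linalg.solve(G, np.ones(Rm.shape[1]))
            w /= w.sum()
            x = np.column_stack(X) @ w  # boosted iterate
        else:
            x = gx
    return x

# Toy contraction: a slow Richardson relaxation for A x = b, accelerated.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([5.0, 5.0])
g = lambda x: x + 0.2 * (b - A @ x)
x_hat = accelerated_fixed_point(g, np.zeros(2))
```

As in the abstract, the acceleration wraps the existing relaxation step `g` as a black box: only the input and output of each relaxation call are needed.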
Coarse mesh and one-cell block inversion based diffusion synthetic acceleration
NASA Astrophysics Data System (ADS)
Kim, Kang-Seog
DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (one-Cell block Inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation.
Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent as fine mesh DSA in slab geometry. For x-y geometry our coarse mesh DSA is very effective for thin and intermediate mesh spacings independent of the scattering ratio, but is not effective for purely scattering problems and high-aspect-ratio zoning. However, if the scattering ratio is less than about 0.95, this procedure is very effective for all mesh spacings.
Iterative combining rules for the van der Waals potentials of mixed rare gas systems
NASA Astrophysics Data System (ADS)
Wei, L. M.; Li, P.; Tang, K. T.
2017-05-01
An iterative procedure is introduced to make the results of some simple combining rules compatible with the Tang-Toennies potential model. The method is used to calculate the well locations Re and the well depths De of the van der Waals potentials of the mixed rare gas systems from the corresponding values of the homo-nuclear dimers. When the "sizes" of the two interacting atoms are very different, several rounds of iteration are required for the results to converge. The converged results can be substantially different from the starting values obtained from the combining rules. However, if the sizes of the interacting atoms are close, only one or even no iteration is necessary for the results to converge. In either case, the converged results are accurate descriptions of the interaction potentials of the hetero-nuclear dimers.
Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.
Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery
The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal errors. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution.
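The DIIS acceleration named above can be sketched as follows: the classic Pulay extrapolation solves a small bordered linear system for mixing coefficients that minimise the combined residual, subject to the coefficients summing to one. The scalar fixed-point problem x = cos(x) is an illustrative stand-in for the WHAM self-consistency loop, not the authors' implementation.

```python
import numpy as np

def diis_step(trials, errors):
    """One DIIS (Pulay) extrapolation: solve the bordered system for
    mixing coefficients c (sum(c) = 1) minimising the combined error."""
    m = len(errors)
    B = np.zeros((m + 1, m + 1))
    for i, ei in enumerate(errors):
        for j, ej in enumerate(errors):
            B[i, j] = ei @ ej           # overlap of error vectors
    B[m, :m] = B[:m, m] = -1.0          # Lagrange-multiplier border
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    c = np.linalg.solve(B, rhs)[:m]
    return sum(ci * ti for ci, ti in zip(c, trials))

def solve_self_consistent(g, x0, m=2, n_iter=40, tol=1e-10):
    """Accelerate the self-consistency loop x <- g(x) by DIIS over the
    last m trial vectors."""
    trials, errors = [], []
    x = x0
    for _ in range(n_iter):
        gx = g(x)
        err = gx - x
        if np.linalg.norm(err) < tol:
            return gx
        trials.append(gx); errors.append(err)
        trials, errors = trials[-m:], errors[-m:]
        x = diis_step(trials, errors) if len(errors) > 1 else gx
    return x

# Toy self-consistent problem with a known answer: x = cos(x)
x_hat = solve_self_consistent(np.cos, np.array([0.0]))
```

With a history of two scalar trials, this extrapolation reduces to the secant method, which converges much faster than the plain fixed-point iteration, the same qualitative gain the abstract reports for WHAM and MBAR.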
A Centered Projective Algorithm for Linear Programming
1988-02-01
Karmarkar's algorithm iterates this procedure. An alternative method, the so-called affine variant, was first proposed by Dikin in 1967 (I.I. Dikin, "Iterative solution of problems of linear and quadratic programming," Soviet Mathematics Doklady 8 (1967), 674-675).
MPL-A program for computations with iterated integrals on moduli spaces of curves of genus zero
NASA Astrophysics Data System (ADS)
Bogner, Christian
2016-06-01
We introduce the Maple program MPL for computations with multiple polylogarithms. The program is based on homotopy invariant iterated integrals on moduli spaces M0,n of curves of genus 0 with n ordered marked points. It includes the symbol map and procedures for the analytic computation of period integrals on M0,n. It supports the automated computation of a certain class of Feynman integrals.
Arisawa, Atsuko; Watanabe, Yoshiyuki; Tanaka, Hisashi; Takahashi, Hiroto; Matsuo, Chisato; Fujiwara, Takuya; Fujiwara, Masahiro; Fujimoto, Yasunori; Tomiyama, Noriyuki
2018-06-01
Arterial spin labeling (ASL) is a non-invasive perfusion technique that may be an alternative to dynamic susceptibility contrast magnetic resonance imaging (DSC-MRI) for assessment of brain tumors. To our knowledge, there have been no reports on histogram analysis of ASL. The purpose of this study was to determine whether ASL is comparable with DSC-MRI in terms of differentiating high-grade and low-grade gliomas by evaluating the histogram analysis of cerebral blood flow (CBF) in the entire tumor. Thirty-four patients with pathologically proven glioma underwent ASL and DSC-MRI. High-signal areas on contrast-enhanced T1-weighted images or high-intensity areas on fluid-attenuated inversion recovery images were designated as the volumes of interest (VOIs). ASL-CBF, DSC-CBF, and DSC-cerebral blood volume maps were constructed and co-registered to the VOI. Perfusion histogram analyses of the whole VOI and statistical analyses were performed to compare the ASL and DSC images. There was no significant difference in the mean values for any of the histogram metrics in both the low-grade gliomas (n = 15) and the high-grade gliomas (n = 19). Strong correlations were seen in the 75th percentile, mean, median, and standard deviation values between the ASL and DSC images. The area under the curve values tended to be greater for the DSC images than for the ASL images. DSC-MRI is superior to ASL for distinguishing high-grade from low-grade glioma. ASL could be an alternative evaluation method when DSC-MRI cannot be used, e.g., in patients with renal failure, those in whom repeated examination is required, and in children.
Noniterative accurate algorithm for the exact exchange potential of density-functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cinal, M.; Holas, A.
2007-10-15
An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented already as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to suitable transformation of the KLI equations, we can solve them avoiding iterations. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques for representing orbitals, we obtain extremely accurate values of total and orbital energies with errors at least four orders of magnitude smaller than known in the literature.
A Block Iterative Finite Element Model for Nonlinear Leaky Aquifer Systems
NASA Astrophysics Data System (ADS)
Gambolati, Giuseppe; Teatini, Pietro
1996-01-01
A new quasi three-dimensional finite element model of groundwater flow is developed for highly compressible multiaquifer systems where aquitard permeability and elastic storage are dependent on hydraulic drawdown. The model is solved by a block iterative strategy, which is naturally suggested by the geological structure of the porous medium and can be shown to be mathematically equivalent to a block Gauss-Seidel procedure. As such it can be generalized into a block overrelaxation procedure and greatly accelerated by the use of the optimum overrelaxation factor. Results for both linear and nonlinear multiaquifer systems emphasize the excellent computational performance of the model and indicate that convergence in leaky systems can be improved up to as much as one order of magnitude.
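The block Gauss-Seidel strategy described above, and its acceleration by an over-relaxation factor, can be sketched on a made-up two-block system. The matrices, right-hand sides, and the factor omega below are illustrative test data, not values from the paper:

```python
import numpy as np

def block_sor(A_blocks, b_blocks, omega=1.0, iters=200):
    """Block Gauss-Seidel / SOR for a block-partitioned linear system.

    A_blocks: nested list of blocks [[A11, A12], [A21, A22]]; b_blocks: [b1, b2].
    omega = 1 gives plain block Gauss-Seidel; omega > 1 over-relaxes.
    """
    n = len(b_blocks)
    x = [np.zeros_like(b) for b in b_blocks]
    for _ in range(iters):
        for i in range(n):
            # residual of block row i using the current iterates of the other blocks
            r = b_blocks[i] - sum(A_blocks[i][j] @ x[j] for j in range(n) if j != i)
            x_new = np.linalg.solve(A_blocks[i][i], r)   # exact solve of the diagonal block
            x[i] = (1 - omega) * x[i] + omega * x_new    # over-relaxed update
    return x

# Toy "two-aquifer" system: diagonally dominant, with weak coupling between blocks
A11 = np.array([[4.0, -1.0], [-1.0, 4.0]])
A22 = np.array([[5.0, -1.0], [-1.0, 5.0]])
A12 = -0.5 * np.eye(2)
A21 = -0.5 * np.eye(2)
b1, b2 = np.array([1.0, 2.0]), np.array([3.0, 4.0])

x1, x2 = block_sor([[A11, A12], [A21, A22]], [b1, b2], omega=1.1)
A = np.block([[A11, A12], [A21, A22]])
print(np.allclose(A @ np.concatenate([x1, x2]), np.concatenate([b1, b2])))  # True
```

In the paper's setting the blocks correspond to individual aquifers, so each diagonal solve is a single-aquifer problem and the off-diagonal terms carry the leakage coupling.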
Preconditioned conjugate residual methods for the solution of spectral equations
NASA Technical Reports Server (NTRS)
Wong, Y. S.; Zang, T. A.; Hussaini, M. Y.
1986-01-01
Conjugate residual methods for the solution of spectral equations are described. An inexact finite-difference operator is introduced as a preconditioner in the iterative procedures. Application of these techniques is limited to problems for which the symmetric part of the coefficient matrix is positive definite. Although the spectral equation is a very ill-conditioned and full matrix problem, the computational effort of the present iterative methods for solving such a system is comparable to that for the sparse matrix equations obtained from the application of either finite-difference or finite-element methods to the same problems. Numerical experiments are shown for a self-adjoint elliptic partial differential equation with Dirichlet boundary conditions, and comparison with other solution procedures for spectral equations is presented.
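For reference, a bare-bones unpreconditioned conjugate residual iteration for a symmetric system might look like the following sketch; the test matrix is a standard 1D Laplacian rather than one of the spectral operators from the paper, and a preconditioner would be applied on top of this basic loop:

```python
import numpy as np

def conjugate_residual(A, b, tol=1e-10, maxiter=500):
    """Minimal (unpreconditioned) conjugate residual iteration for symmetric A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    for _ in range(maxiter):
        rAr = r @ Ar
        alpha = rAr / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        beta = (r @ Ar) / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap    # recurrence avoids an extra matrix-vector product
    return x

# Symmetric positive definite test matrix (1D Laplacian stencil)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_residual(A, b)
print(np.linalg.norm(A @ x - b) < 1e-8)  # True
```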
A new solution procedure for a nonlinear infinite beam equation of motion
NASA Astrophysics Data System (ADS)
Jang, T. S.
2016-10-01
The goal of this paper is a purely theoretical question, yet one that is fundamental in computational partial differential equations: can a linear solution structure for the equation of motion of an infinite nonlinear beam be directly manipulated to construct its nonlinear solution? Here, the equation of motion is modeled mathematically as a fourth-order nonlinear partial differential equation. To answer the question, a pseudo-parameter is first introduced to modify the equation of motion. An integral formalism for the modified equation is then found and taken as a linear solution structure. It enables us to formulate a nonlinear integral equation of the second kind that is equivalent to the original equation of motion. The fixed-point approach, applied to the integral equation, yields a new iterative solution procedure for constructing the nonlinear solution of the original beam equation of motion; the iterative process consists only of simple, regular numerical integration, so the procedure is fairly simple as well as straightforward to apply. A mathematical analysis of both the convergence and the uniqueness of the iterative procedure is carried out by proving the contractive character of a nonlinear operator. It follows, therefore, that the method is a useful nonlinear strategy for integrating the equation of motion of a nonlinear infinite beam, whereby the preceding question may be answered. In addition, it is worth noticing that the pseudo-parameter introduced here plays a double role: first, it connects the original beam equation of motion with the integral equation; second, it is related to the convergence of the proposed iterative method.
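The fixed-point strategy the abstract describes can be illustrated on a toy Fredholm integral equation of the second kind, solved by Picard iteration with plain quadrature. The kernel and forcing below are invented for illustration (the paper's actual integral equation comes from the beam problem); this particular toy equation has the exact solution u(x) = 1.2 x:

```python
import numpy as np

# Solve the Fredholm equation of the second kind
#   u(x) = x + 0.5 * integral_0^1 (x*s) u(s) ds
# by Picard (fixed-point) iteration with trapezoidal quadrature.
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] = w[-1] = 0.5 / (n - 1)  # trapezoid weights
f = x
u = f.copy()
for _ in range(100):
    integral = np.sum(w * x * u)          # integral of s*u(s); kernel K(x,s)=x*s factors
    u_next = f + 0.5 * x * integral
    if np.max(np.abs(u_next - u)) < 1e-12:
        u = u_next
        break
    u = u_next
print(np.max(np.abs(u - 1.2 * x)) < 1e-3)  # True (accuracy limited by quadrature)
```

The iteration converges because the integral operator, scaled by 0.5, is a contraction; the paper's pseudo-parameter plays the analogous contraction-controlling role.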
NASA Astrophysics Data System (ADS)
Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.
2003-06-01
For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands are guaranteed to exist. Magnetic islands break the smooth topology of nested flux surfaces and chaotic field lines result when magnetic islands overlap. An analogous case occurs with 1½-dimensional Hamiltonian systems, where resonant perturbations cause singularities in the transformation to action-angle coordinates and destroy integrability. The suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Techniques for `healing' vacuum fields and fixed-boundary plasma equilibria have been developed, but what is ultimately required is a procedure for designing stellarators such that the self-consistent plasma equilibrium currents and the coil currents combine to produce an integrable magnetic field, and such a procedure is presented here for the first time. Magnetic islands in free-boundary full-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver [A. H. Reiman & H. S. Greenside, Comp. Phys. Comm., 43:157, 1986], which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment [G. H. Neilson et al., Phys. Plas., 7:1911, 2000].
Efficient and robust relaxation procedures for multi-component mixtures including phase transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de
We consider a thermodynamically consistent multi-component model in multi-dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from poor efficiency and robustness, resulting in very costly computations that in general only allow for one-dimensional computations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further we introduce a novel iterative method to treat the mass transfer for a three-component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three-component mixture, providing a unique and admissible equilibrium state.
Development of a pressure based multigrid solution method for complex fluid flows
NASA Technical Reports Server (NTRS)
Shyy, Wei
1991-01-01
In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.
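The core of any multigrid scheme is the smooth-restrict-correct-interpolate cycle. A minimal two-grid sketch for the 1D Poisson problem -u'' = f (a stand-in model problem, not the recirculating-flow equations of the paper) is:

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f on a uniform 1D grid (Dirichlet)."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, sweeps=3)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)   # fine-grid residual
    nc = (u.size + 1) // 2
    rc = np.zeros(nc)                                   # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    # direct solve of the coarse error equation -e'' = r (small tridiagonal system)
    Ac = (2 * np.eye(nc - 2) - np.eye(nc - 2, k=1) - np.eye(nc - 2, k=-1)) / (2 * h) ** 2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    e = np.zeros_like(u)                                # linear interpolation to fine grid
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return jacobi(u, f, h, sweeps=3)

n = 65                                                  # power of two plus one
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)                      # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))) < 1e-3)     # True
```

A full FMG/FAS algorithm recurses this correction over many levels and handles nonlinearity, but the work distribution per cycle is the same.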
Fast solution of elliptic partial differential equations using linear combinations of plane waves.
Pérez-Jordá, José M
2016-02-01
Given an arbitrary elliptic partial differential equation (PDE), a procedure for obtaining its solution is proposed based on the method of Ritz: the solution is written as a linear combination of plane waves and the coefficients are obtained by variational minimization. The PDE to be solved is cast as a system of linear equations Ax = b, where the matrix A is not sparse, which prevents the straightforward application of standard iterative methods. This lack of sparsity can be circumvented by means of a recursive bisection approach based on the fast Fourier transform, which makes it possible to implement fast versions of some stationary iterative methods (such as Gauss-Seidel) consuming O(N log N) memory and executing an iteration in O(N log² N) time, N being the number of plane waves used. In a similar way, fast versions of Krylov subspace methods and multigrid methods can also be implemented. These procedures are tested on Poisson's equation expressed in adaptive coordinates. It is found that the best results are obtained with the GMRES method using a multigrid preconditioner with Gauss-Seidel relaxation steps.
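The plane-wave idea is easiest to see in the periodic case, where the Laplacian is exactly diagonal in the Fourier basis, so one FFT pair solves the whole problem. A toy periodic Poisson solve (not the adaptive-coordinate setup of the paper, where A is dense and iteration is needed) is:

```python
import numpy as np

# -u'' = f with periodic boundary conditions on [0, 2*pi) is diagonal in a
# plane-wave basis: u_hat[k] = f_hat[k] / k^2 for k != 0.
n = 128
x = 2 * np.pi * np.arange(n) / n
f = np.sin(3 * x)                       # right-hand side; exact solution sin(3x)/9
f_hat = np.fft.fft(f)
k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers 0, 1, ..., -1
u_hat = np.zeros_like(f_hat)
nonzero = k != 0
u_hat[nonzero] = f_hat[nonzero] / k[nonzero] ** 2   # divide by the symbol of -d^2/dx^2
u = np.fft.ifft(u_hat).real
print(np.max(np.abs(u - np.sin(3 * x) / 9)) < 1e-12)  # True
```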
Experiments on Learning by Back Propagation.
ERIC Educational Resources Information Center
Plaut, David C.; And Others
This paper describes further research on a learning procedure for layered networks of deterministic, neuron-like units, described by Rumelhart et al. The units, the way they are connected, the learning procedure, and the extension to iterative networks are presented. In one experiment, a network learns a set of filters, enabling it to discriminate…
How good are the Garvey-Kelson predictions of nuclear masses?
NASA Astrophysics Data System (ADS)
Morales, Irving O.; López Vieyra, J. C.; Hirsch, J. G.; Frank, A.
2009-09-01
The Garvey-Kelson relations are used in an iterative process to predict nuclear masses in the neighborhood of nuclei with measured masses. Average errors in the predicted masses for the first three iteration shells are smaller than those obtained with the best nuclear mass models. Their quality is comparable with the Audi-Wapstra extrapolations, offering a simple and reproducible procedure for short-range mass predictions. A systematic study of the way the error grows as a function of the iteration and the distance to the region of known masses shows that a correlation exists between the error and the residual neutron-proton interaction, produced mainly by the implicit assumption that V varies smoothly along the nuclear landscape.
1990-11-01
(Q + aa')^-1 = Q^-1 - Q^-1 a a' Q^-1 / (1 + a' Q^-1 a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... [Contents fragments: 2. The First-Order Moving Average Model; 3. Some Approaches to the Iterative...] ...computation of the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and...
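The rank-one (Sherman-Morrison) special case of Woodbury's formula can be verified numerically in a few lines; the matrix and vector below are random test data, not anything from the report:

```python
import numpy as np

# Check: (Q + a a')^{-1} = Q^{-1} - Q^{-1} a a' Q^{-1} / (1 + a' Q^{-1} a)
rng = np.random.default_rng(0)
n = 5
Q = np.diag(rng.uniform(1.0, 2.0, n))      # any invertible Q works; positive diagonal here
a = rng.standard_normal(n)

Qinv = np.linalg.inv(Q)
lhs = np.linalg.inv(Q + np.outer(a, a))
rhs = Qinv - (Qinv @ np.outer(a, a) @ Qinv) / (1.0 + a @ Qinv @ a)
print(np.allclose(lhs, rhs))  # True
```

The practical appeal in time-series likelihood work is exactly this structure: when Q^-1 is cheap (e.g., diagonal or already factored), the rank-one update costs O(n^2) instead of a fresh O(n^3) inversion.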
Ruan, Ruoxin; Chung, Kuang-Ren; Li, Hongye
2017-12-01
Sterol regulatory element binding proteins (SREBPs) are required for sterol homeostasis in eukaryotes. Activation of SREBPs is regulated by the Dsc E3 ligase complex in Schizosaccharomyces pombe and Aspergillus spp. Previous studies indicated that an SREBP-coding gene PdsreA is required for fungicide resistance and ergosterol biosynthesis in the citrus postharvest pathogen Penicillium digitatum. In this study, five genes, designated PddscA, PddscB, PddscC, PddscD, and PddscE, encoding the Dsc E3 ligase complex were characterized as being required for fungicide resistance, ergosterol biosynthesis and CoCl2 tolerance in P. digitatum. Each of the dsc genes was inactivated by targeted gene disruption and the resulting phenotypes were analyzed and compared. Genetic analysis reveals that, of the five Dsc complex components, PddscB is the core subunit gene in P. digitatum. Although the resultant dsc mutants were able to infect citrus fruit and induce maceration lesions as the wild-type does, the mutants rarely produced aerial mycelia on affected citrus fruit peels. P. digitatum Dsc proteins regulated not only the expression of genes involved in ergosterol biosynthesis but also that of PdsreA. Yeast two-hybrid assays revealed a direct interaction between the PdSreA protein and the Dsc proteins. Ectopic expression of the PdSreA N-terminus restored fungicide resistance in the dsc mutants. Our results provide important evidence for understanding the mechanisms underlying SREBP activation and regulation of ergosterol biosynthesis in plant pathogenic fungi.
Iterative methods for plasma sheath calculations: Application to spherical probe
NASA Technical Reports Server (NTRS)
Parker, L. W.; Sullivan, E. C.
1973-01-01
The computer cost of a Poisson-Vlasov iteration procedure for the numerical solution of a steady-state collisionless plasma-sheath problem depends on: (1) the nature of the chosen iterative algorithm, (2) the position of the outer boundary of the grid, and (3) the nature of the boundary condition applied to simulate a condition at infinity (as in three-dimensional probe or satellite-wake problems). Two iterative algorithms, in conjunction with three types of boundary conditions, are analyzed theoretically and applied to the computation of current-voltage characteristics of a spherical electrostatic probe. The first algorithm was commonly used by physicists, and its computer costs depend primarily on the boundary conditions and are only slightly affected by the mesh interval. The second algorithm is not commonly used, and its costs depend primarily on the mesh interval and slightly on the boundary conditions.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
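The simultaneous-update idea, with the design step interleaved into the iterative flow solve instead of wrapped around it, can be caricatured on a scalar toy problem. The "solver", objective, and sensitivity below are all invented for illustration:

```python
# The "flow" variable u is driven by a fixed-point iteration u <- 0.8*u + 0.2*p
# (whose converged state is u = p), while the design parameter p simultaneously
# descends the objective J(u) = (u - 2)^2, using the converged sensitivity du/dp = 1.
u, p = 0.0, 0.0
eta = 0.05                       # design-update step size
for _ in range(500):
    u = 0.8 * u + 0.2 * p        # one sweep of the iterative "flow" solver
    p -= eta * 2.0 * (u - 2.0)   # simultaneous design step: dJ/dp ~ dJ/du * du/dp
print(abs(u - 2.0) < 1e-6 and abs(p - 2.0) < 1e-6)  # True: both converge together
```

The point, as in the abstract, is that neither loop waits for the other: the design parameter is updated as the iterative flow solution evolves, so the cost is comparable to one (or a few parallel) flow solves rather than one per outer optimization step.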
Iterative refinement of structure-based sequence alignments by Seed Extension
Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook
2009-01-01
Background Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. 
It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithm in many structure alignment programs. PMID:19589133
Transformation of two and three-dimensional regions by elliptic systems
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne
1991-01-01
A reliable linear system is presented for grid generation in 2-D and 3-D. The method is robust in the sense that convergence is guaranteed but is not as reliable as other nonlinear elliptic methods in generating nonfolding grids. The construction of nonfolding grids depends on having reasonable approximations of cell aspect ratios and an appropriate distribution of grid points on the boundary of the region. Some guidelines are included on approximating the aspect ratios, but little help is offered on setting up the boundary grid other than to say that in 2-D the boundary correspondence should be close to that generated by a conformal mapping. It is assumed that the functions which control the grid distribution depend only on the computational variables and not on the physical variables. Whether this is actually the case depends on how the grid is constructed. In a dynamic adaptive procedure where the grid is constructed in the process of solving a fluid flow problem, the grid is usually updated at fixed iteration counts using the current value of the control function. Since the control function is not being updated during the iteration of the grid equations, the grid construction is a linear procedure. However, in the case of a static adaptive procedure where a trial solution is computed and used to construct an adaptive grid, the control functions may be recomputed at every step of the grid iteration.
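The simplest elliptic grid generator solves Laplace equations for the physical coordinates with the boundary point distribution held fixed. A sketch on a quarter annulus (a made-up region, and plain Laplace relaxation rather than the controlled linear system of the paper) is:

```python
import numpy as np

# Interior grid-point coordinates are relaxed toward the average of their four
# neighbours (discrete Laplace equations), with the boundary distribution fixed.
m = 17
s = np.linspace(0.0, 1.0, m)
X = np.zeros((m, m)); Y = np.zeros((m, m))
theta = 0.5 * np.pi * s
X[0, :], Y[0, :] = np.cos(theta), np.sin(theta)            # inner arc, radius 1
X[-1, :], Y[-1, :] = 2 * np.cos(theta), 2 * np.sin(theta)  # outer arc, radius 2
r = 1.0 + s
X[:, 0], Y[:, 0] = r, np.zeros(m)                          # boundary ray at theta = 0
X[:, -1], Y[:, -1] = np.zeros(m), r                        # boundary ray at theta = pi/2

for _ in range(2000):                                      # point-Jacobi relaxation
    X[1:-1, 1:-1] = 0.25 * (X[:-2, 1:-1] + X[2:, 1:-1] + X[1:-1, :-2] + X[1:-1, 2:])
    Y[1:-1, 1:-1] = 0.25 * (Y[:-2, 1:-1] + Y[2:, 1:-1] + Y[1:-1, :-2] + Y[1:-1, 2:])

res = np.max(np.abs(X[1:-1, 1:-1] - 0.25 * (X[:-2, 1:-1] + X[2:, 1:-1]
                                            + X[1:-1, :-2] + X[1:-1, 2:])))
print(res < 1e-8)                                # the discrete Laplace system is satisfied
print(bool(X.min() >= 0.0 and X.max() <= 2.0))   # maximum principle bounds the interior
```

Control functions, as discussed in the abstract, enter as source terms on the right-hand side of these Laplace equations to cluster grid points where the solution needs them.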
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schröder, Florian A. Y. N.; Cole, Jacqueline M.; Waddell, Paul G.
2015-02-03
The re-functionalization of a series of four well-known industrial laser dyes, based on benzophenoxazine, is explored with the prospect of molecularly engineering new chromophores for dye-sensitized solar cell (DSC) applications. Such engineering is important since a lack of suitable dyes is stifling the progress of DSC technology. The conceptual idea involves making laser dyes DSC-active by chemical modification, while maintaining their key property attributes that are attractive to DSC applications. This molecular engineering follows a step-wise approach. Firstly, molecular structures and optical absorption properties are determined for the parent laser dyes: Cresyl Violet (1); Oxazine 170 (2); Nile Blue A (3); Oxazine 750 (4). These reveal structure-property relationships which define the prerequisites for computational molecular design of DSC dyes; the nature of their molecular architecture (D-π-A) and intramolecular charge transfer. Secondly, new DSC dyes are computationally designed by the in silico addition of a carboxylic acid anchor at various chemical substitution points in the parent laser dyes. A comparison of the resulting frontier molecular orbital energy levels with the conduction band edge of a TiO2 DSC photoanode and the redox potential of two electrolyte options, I-/I3- and Co(II/III)tris(bipyridyl), suggests promise for these computationally designed dyes as co-sensitizers for DSC applications.
Woloshin, Steve; Schwartz, Lisa M; Dejene, Sara; Rausch, Paula; Dal Pan, Gerald J; Zhou, Esther H; Kesselheim, Aaron S
2017-05-01
FDA issues Drug Safety Communications (DSCs) to alert health care professionals and the public about emerging safety information affecting prescription and over-the-counter drugs. News media may amplify DSCs, but it is unclear how DSC messaging is transmitted through the media. We conducted a content analysis of the lay media coverage reaching the broadest audience to characterize the amount and content of media coverage of two zolpidem DSCs from 2013. After the first DSC, zolpidem news stories increased from 19 stories/week in the preceding 3 months to 153 following its release. Most (81%) appeared in the lay media, and 64% focused on the DSC content. After the second DSC, news stories increased from 24 stories/week in the preceding 3 months to 39 following. Among the 100 unique lay media news stories, at least half correctly reported three key DSC messages: next-day impairment and drowsiness as common safety hazards, lower doses for some but not all zolpidem products, and women's higher risk for impairment. Other DSC messages were reported in fewer than one-third of stories, such as the warning that impairment can happen even when people feel fully awake. The first, but not the second, zolpidem DSC generated high-profile news coverage. The finding that some messages were widely reported but others were not emphasizes the importance of ensuring translation of key DSC content.
Johnson, Jason N.; Jaggers, James; Li, Shuang; O’Brien, Sean M.; Li, Jennifer S.; Jacobs, Jeffrey P.; Jacobs, Marshall L.; Welke, Karl F.; Peterson, Eric D.; Pasquali, Sara K.
2009-01-01
Objectives There is debate whether primary or delayed sternal closure (DSC) is the best strategy following Stage 1 palliation (S1P) for hypoplastic left heart syndrome (HLHS). We describe center variation in DSC following S1P and associated outcomes. Methods Society of Thoracic Surgeons Congenital Database participants performing S1P for HLHS from 2000–2007 were included. We examined center variation in DSC, and compared in-hospital mortality, prolonged length of stay (LOS > 6 wks), and postoperative infection in centers with low (≤25% of cases), middle (26%–74% of cases), and high (≥75% of cases) DSC utilization, adjusting for patient and center factors. Results There were 1283 patients (45 centers) included. Median age and weight at surgery were 6 d (IQR 4–9 d) and 3.2 kg (IQR 2.8–3.5 kg); 59% were male. DSC was used in 74% (range 3–100% of cases/center). In centers with high (n=23) and middle (n=17) vs. low (n=5) DSC utilization, there was a greater proportion of patients with prolonged LOS and infection, and a trend toward increased in-hospital mortality in unadjusted analysis. In multivariable analysis, there was no difference in mortality. Centers with high and middle DSC utilization had prolonged LOS [OR (95% CI): 2.83 (1.46–5.47), p=0.002 and 2.23 (1.17–4.26), p=0.02] and more infection [2.34 (1.20–4.57), p=0.01 and 2.37 (1.36–4.16), p=0.003]. Conclusions Utilization of DSC following S1P varies widely. These observational data suggest more frequent use of DSC is associated with longer LOS and higher postoperative infection rates. Further evaluation of the risks and benefits of DSC in the management of these complex infants is necessary. PMID:20167337
NASA Astrophysics Data System (ADS)
Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.
2017-12-01
This study develops an innovative calibration method for regional groundwater modeling by using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial pattern of multiple recharges. The initial guess of the hydrogeological parameters is also assigned according to an in-situ pumping experiment. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e. MODFLOW) with the initial-guess or adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of recharges and parameters are adjusted and the iterative procedures repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan. The study period is from January 1st to December 2nd, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations.
This demonstrates that the iterative EOF-based approach can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
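The EOF machinery underlying such a method is, at bottom, a singular value decomposition of the (time x space) anomaly data matrix: the singular vectors are the spatial EOF patterns and their time amplitudes are the expansion coefficients. A sketch on synthetic "hydrograph" data (all numbers invented) is:

```python
import numpy as np

# Synthetic "storage hydrographs": two coherent spatial patterns plus weak noise.
rng = np.random.default_rng(1)
nt, nx = 120, 30
t = np.arange(nt)
pattern1 = np.sin(np.linspace(0, np.pi, nx))
pattern2 = np.cos(np.linspace(0, 2 * np.pi, nx))
data = (np.outer(np.sin(2 * np.pi * t / 12.0), pattern1)
        + 0.3 * np.outer(np.cos(2 * np.pi * t / 6.0), pattern2)
        + 0.01 * rng.standard_normal((nt, nx)))

anom = data - data.mean(axis=0)            # remove the temporal mean at each site
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt                                   # rows: spatial EOF patterns
pcs = U * S                                 # columns: expansion coefficients in time
var_frac = S ** 2 / np.sum(S ** 2)

recon2 = pcs[:, :2] @ eofs[:2, :] + data.mean(axis=0)   # two-mode reconstruction
print(var_frac[0] + var_frac[1] > 0.99)     # two modes capture almost all variance
print(np.max(np.abs(recon2 - data)) < 0.1)  # reconstruction error ~ noise level
```

Truncating to the leading modes, as here, is what lets the calibration express its correction vectors in a handful of EOF amplitudes instead of one unknown per well and time step.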
Tziavos, Ilias N; Alexandridis, Thomas K; Aleksandrov, Borys; Andrianopoulos, Agamemnon; Doukas, Ioannis D; Grigoras, Ion; Grigoriadis, Vassilios N; Papadopoulou, Ioanna D; Savvaidis, Paraskevas; Stergioudis, Argyrios; Teodorof, Liliana; Vergos, Georgios S; Vorobyova, Lyudmila; Zalidis, Georgios C
2016-08-01
In this paper, the development of a Web-based GIS system for the monitoring and assessment of the Black Sea is presented. The integrated multilevel system is based on the combination of terrestrial and satellite Earth observation data through the technological assets provided by innovative information tools and facilities. The key component of the system is a unified, easy-to-update geodatabase including a wide range of appropriately selected environmental parameters. The collection procedure of current and historical data along with the methods employed for their processing in three test areas of the current study are extensively discussed, and special attention is given to the overall design and structure of the developed geodatabase. Furthermore, the information system includes a decision support component (DSC) which allows assessment and effective management of a wide range of heterogeneous data and environmental parameters within an appropriately designed and well-tested methodology. The DSC provides simplified and straightforward results based on a classification procedure, thus contributing to a monitoring system not only for experts but for auxiliary staff as well. The examples of the system's functionality that are presented highlight its usability as well as the assistance that is provided to the decision maker. The given examples focus on the Danube Delta area; however, the information layers of the integrated system can be expanded in the future to cover other regions, thus contributing to the development of an environmental monitoring system for the entire Black Sea.
Parabolized Navier-Stokes solutions of separation and trailing-edge flows
NASA Technical Reports Server (NTRS)
Brown, J. L.
1983-01-01
A robust, iterative solution procedure is presented for the parabolized Navier-Stokes or higher order boundary layer equations as applied to subsonic viscous-inviscid interaction flows. The robustness of the present procedure is due, in part, to an improved algorithmic formulation. The present formulation is based on a reinterpretation of stability requirements for this class of algorithms and requires only second order accurate backward or central differences for all streamwise derivatives. Upstream influence is provided for through the algorithmic formulation and iterative sweeps in x. The primary contribution to robustness, however, is the boundary condition treatment, which imposes global constraints to control the convergence path. Discussed are successful calculations of subsonic, strong viscous-inviscid interactions, including separation. These results are consistent with Navier-Stokes solutions and triple deck theory.
Stepwise Iterative Fourier Transform: The SIFT
NASA Technical Reports Server (NTRS)
Benignus, V. A.; Benignus, G.
1975-01-01
A program, designed specifically to study the respective effects of some common data problems on results obtained through stepwise iterative Fourier transformation of synthetic data with known waveform composition, was outlined. Included in this group were the problems of gaps in the data, different time-series lengths, periodic but nonsinusoidal waveforms, and noisy (low signal-to-noise) data. Results on sinusoidal data were also compared with results obtained on narrow band noise with similar characteristics. The findings showed that the analytic procedure under study can reliably reduce data in the nature of (1) sinusoids in noise, (2) asymmetric but periodic waves in noise, and (3) sinusoids in noise with substantial gaps in the data. The program was also able to analyze narrow-band noise well, but with increased interpretational problems. The procedure was shown to be a powerful technique for analysis of periodicities, in comparison with classical spectrum analysis techniques. However, informed use of the stepwise procedure nevertheless requires some background of knowledge concerning characteristics of the biological processes under study.
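A toy version of the stepwise idea (locate the strongest spectral peak, fit and subtract that sinusoid, then repeat on the residual) can be written as follows. The frequencies and amplitudes are made up, and no gaps or noise are included, so this illustrates only the clean-signal case from the study:

```python
import numpy as np

def extract_sinusoids(y, t, n_components):
    """Stepwise extraction: peak-pick in the spectrum, least-squares fit, subtract."""
    found = []
    resid = y.astype(float)
    for _ in range(n_components):
        spec = np.abs(np.fft.rfft(resid))
        freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
        fpeak = freqs[np.argmax(spec[1:]) + 1]           # skip the DC bin
        # least-squares fit of a*cos + b*sin at the peak frequency
        X = np.column_stack([np.cos(2 * np.pi * fpeak * t),
                             np.sin(2 * np.pi * fpeak * t)])
        coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
        resid = resid - X @ coef
        found.append((fpeak, float(np.hypot(*coef))))
    return found, resid

t = np.arange(0, 64.0, 0.25)                 # 256 samples, 0.25 s spacing
y = 3.0 * np.sin(2 * np.pi * 0.5 * t) + 1.0 * np.cos(2 * np.pi * 1.25 * t)
found, resid = extract_sinusoids(y, t, 2)
freqs = sorted(f for f, a in found)
print(freqs)                                  # [0.5, 1.25]
print(np.max(np.abs(resid)) < 1e-9)           # both components removed
```

Handling the harder cases the study examines (gaps, nonsinusoidal waveforms, narrow-band noise) would require refining the frequency estimate and the fitting step, but the iterate-and-subtract skeleton is the same.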
NASA Astrophysics Data System (ADS)
An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu
2012-11-01
Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient on an individual-iteration basis; however, it converges more slowly than the Newton method. The Newton method, although it converges faster, is more complex to implement and consumes more CPU resources per iteration. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies, but the two methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method there requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. The Newton-Krylov method instead calculates the matrix-vector product, which can be easily approximated by taking differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for a coordinate-transformed FDM, although it involves the additional cost of taking an approximation at each Krylov iteration. In this paper, we evaluated the efficiency and robustness of three iteration methods (the Picard, Newton, and Newton-Krylov methods) for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
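The Picard/Newton trade-off the abstract describes can be illustrated on a scalar nonlinear equation. This is a minimal sketch, not the authors' flow model; the test problem x = cos(x) and the tolerances are illustrative assumptions:

```python
import math

def picard(g, x0, tol=1e-10, max_iter=200):
    """Fixed-point (Picard) iteration: x_{k+1} = g(x_k).
    Cheap per iteration, but converges only linearly."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton iteration: x_{k+1} = x_k - f(x_k)/f'(x_k).
    Costs a derivative evaluation per step, but converges quadratically."""
    x = x0
    for k in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, k
    raise RuntimeError("Newton iteration did not converge")
```

For x = cos(x), Picard needs dozens of iterations to reach 1e-10 while Newton needs only a handful, mirroring the cost-per-iteration versus iteration-count trade-off discussed in the abstract. The Newton-Krylov idea replaces the explicit Jacobian (here `df`) with finite-difference approximations of Jacobian-vector products.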
Pan, Xiaohong; Julian, Thomas; Augsburger, Larry
2006-02-10
Differential scanning calorimetry (DSC) and X-ray powder diffractometry (XRPD) methods were developed for the quantitative analysis of the crystallinity of indomethacin (IMC) in IMC and silica gel (SG) binary system. The DSC calibration curve exhibited better linearity than that of XRPD. No phase transformation occurred in the IMC-SG mixtures during DSC measurement. The major sources of error in DSC measurements were inhomogeneous mixing and sampling. Analyzing the amount of IMC in the mixtures using high-performance liquid chromatography (HPLC) could reduce the sampling error. DSC demonstrated greater sensitivity and had less variation in measurement than XRPD in quantifying crystalline IMC in the IMC-SG binary system.
Improved pressure-velocity coupling algorithm based on minimization of global residual norm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatwani, A.U.; Turan, A.
1991-01-01
In this paper an improved pressure-velocity coupling algorithm is proposed based on the minimization of the global residual norm. The procedure is applied to the SIMPLE and SIMPLEC algorithms to automatically select the pressure underrelaxation factor that minimizes the global residual norm at each iteration level. Test computations of three-dimensional turbulent, isothermal flow in a toroidal vortex combustor indicate that velocity underrelaxation factors as high as 0.7 can be used to obtain a converged solution in 300 iterations.
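The core idea, choosing the relaxation factor at each iteration to minimize the global residual norm, can be sketched on a plain linear system. This is a toy linear-algebra analogue, not the SIMPLE/SIMPLEC pressure-correction procedure itself; the Jacobi-type update direction is an illustrative choice:

```python
import numpy as np

def minres_relaxed_jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi-type iteration whose relaxation factor alpha is chosen
    each sweep to minimize the global residual norm ||b - A x||_2,
    rather than being fixed a priori by the user."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D_inv = 1.0 / np.diag(A)
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        d = D_inv * r                  # preconditioned update direction
        Ad = A @ d
        alpha = (r @ Ad) / (Ad @ Ad)   # exact 1-D minimizer of ||b - A(x + alpha d)||_2
        x += alpha * d
    return x
```

The closed-form `alpha` comes from setting the derivative of the squared residual norm to zero along the update direction; in the paper the same principle selects the pressure underrelaxation factor inside the nonlinear outer iteration.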
Convergence characteristics of nonlinear vortex-lattice methods for configuration aerodynamics
NASA Technical Reports Server (NTRS)
Seginer, A.; Rusak, Z.; Wasserstrom, E.
1983-01-01
There is no proof of the existence and uniqueness of solutions for nonlinear panel methods. The convergence characteristics of an iterative, nonlinear vortex-lattice method are, therefore, carefully investigated. The effects of several parameters, including (1) the surface-paneling method, (2) the integration method for the trajectories of the wake vortices, (3) vortex-grid refinement, and (4) the initial conditions for the first iteration, on the computed aerodynamic coefficients and on the flow-field details are presented. The convergence of the iterative-solution procedure is usually rapid. The solution converges with grid refinement to a constant value, but the final value is not unique and varies with the wing surface-paneling and wake-discretization methods within some range in the vicinity of the experimental result.
Fast iterative censoring CFAR algorithm for ship detection from SAR images
NASA Astrophysics Data System (ADS)
Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng
2017-11-01
Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm detects ship candidates from each target block adaptively and efficiently, where parallel detection is available and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated with an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
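The integral image operator that makes the local clutter statistics cheap to estimate works as follows: one cumulative-sum pass over the image lets any rectangular window sum be evaluated in O(1) from four table lookups. A generic sketch (not the authors' code; function names are illustrative):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero first row/column so window
    corners can be indexed without special cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def window_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Local means (and, with a second table of squared pixels, local variances) over every sliding CFAR window then cost constant time per window, which is what makes the iterative censoring loop fast.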
Gelatinisation kinetics of corn and chickpea starches using DSC, RVA, and dynamic rheometry
USDA-ARS?s Scientific Manuscript database
The gelatinisation kinetics (non-isothermal) of corn and chickpea starches at different heating rates were calculated using differential scanning calorimetry (DSC), rapid visco analyser (RVA), and oscillatory dynamic rheometry. The data obtained from the DSC thermogram and the RVA profiles were fitt...
47 CFR 80.225 - Requirements for selective calling equipment.
Code of Federal Regulations, 2011 CFR
2011-10-01
... selective calling (DSC) equipment and selective calling equipment installed in ship and coast stations, and...-STD, “RTCM Recommended Minimum Standards for Digital Selective Calling (DSC) Equipment Providing... Class ‘D’ Digital Selective Calling (DSC)—Methods of testing and required test results,” March 2003. ITU...
47 CFR 80.359 - Frequencies for digital selective calling (DSC).
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Frequencies for digital selective calling (DSC... for digital selective calling (DSC). (a) General purpose calling. The following table describes the... Digital Selective-Calling Equipment in the Maritime Mobile Service,” with Annexes 1 through 5, 2004, and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, J.S.; Gordon, R.L.; Lessor, D.L.
1981-08-01
Alternate measurement and data analysis procedures are discussed and compared for the application of reflective Nomarski differential interference contrast microscopy for the determination of surface slopes. The discussion includes the interpretation of a previously reported iterative procedure using the results of a detailed optical model and the presentation of a new procedure based on measured image intensity extrema. Surface slope determinations from these procedures are presented and compared with results from a previously reported curve fit analysis of image intensity data. The accuracy and advantages of the different procedures are discussed.
NASA Astrophysics Data System (ADS)
Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em
2016-07-01
We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
NASA Astrophysics Data System (ADS)
Sham, Atiyah W. M.; Monsi, Mansor; Hassan, Nasruddin; Suleiman, Mohamed
2013-04-01
The aim of this paper is to present a new modified interval symmetric single-step procedure, ISS-5D, which extends the previous procedure ISS1. The ISS-5D method produces successively smaller intervals that are guaranteed to still contain the zeros. The efficiency of the method is measured by CPU time and the number of iterations. The procedure is run on five test polynomials, and the results obtained are shown in this paper.
van Houte, Bart PP; Binsl, Thomas W; Hettling, Hannes; Pirovano, Walter; Heringa, Jaap
2009-01-01
Background Array comparative genomic hybridization (aCGH) is a popular technique for detection of genomic copy number imbalances. These play a critical role in the onset of various types of cancer. In the analysis of aCGH data, normalization is deemed a critical pre-processing step. In general, aCGH normalization approaches are similar to those used for gene expression data, although the two data types differ inherently. A particular problem with aCGH data is that imbalanced copy numbers lead to improper normalization using conventional methods. Results In this study we present a novel method, called CGHnormaliter, which addresses this issue by means of an iterative normalization procedure. First, provisory balanced copy numbers are identified and subsequently used for normalization. These two steps are then iterated to refine the normalization. We tested our method on three well-studied tumor-related aCGH datasets with experimentally confirmed copy numbers. Results were compared to a conventional normalization approach and two more recent state-of-the-art aCGH normalization strategies. Our findings show that, compared to these three methods, CGHnormaliter yields a higher specificity and precision in terms of identifying the 'true' copy numbers. Conclusion We demonstrate that the normalization of aCGH data can be significantly enhanced using an iterative procedure that effectively eliminates the effect of imbalanced copy numbers. This also leads to a more reliable assessment of aberrations. An R-package containing the implementation of CGHnormaliter is available at . PMID:19709427
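The iterate-between-calling-and-normalizing idea can be reduced to a toy: let only probes provisionally judged copy-number-balanced drive the normalization constant, so gained or lost regions stop biasing the center. The real CGHnormaliter identifies balanced clones by segmentation and calling; the simple distance band used here is a simplifying assumption for illustration:

```python
import numpy as np

def iterative_normalize(log_ratios, band=0.3, n_iter=5):
    """Toy iterative normalization in the spirit of CGHnormaliter:
    (1) take probes within `band` of the current center as
    provisionally balanced, (2) recenter on their median, repeat.
    Returns log-ratios shifted so balanced probes center at zero."""
    values = np.asarray(log_ratios, float)
    center = np.median(values)
    for _ in range(n_iter):
        balanced = values[np.abs(values - center) < band]
        center = np.median(balanced)
    return values - center
```

On data with a large gained region, a one-shot global median or mean is pulled toward the gain, while the iterated estimate settles on the balanced population, which is the effect the abstract reports.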
Dobrikova, Anelia G; Várkonyi, Zsuzsanna; Krumova, Sashka B; Kovács, László; Kostov, Georgi K; Todinova, Svetla J; Busheva, Mira C; Taneva, Stefka G; Garab, Gyozo
2003-09-30
The thermo-optic mechanism in thylakoid membranes was earlier identified by measuring the thermal and light stabilities of pigment arrays with different levels of structural complexity [Cseh, Z., et al. (2000) Biochemistry 39, 15250-15257]. (According to the thermo-optic mechanism, fast local thermal transients, arising from the dissipation of excess excitation energy not used photosynthetically, induce elementary structural changes due to the "built-in" thermal instabilities of the given structural units.) The same mechanism was found to be responsible for the light-induced trimer-to-monomer transition in LHCII, the main chlorophyll a/b light-harvesting antenna of photosystem II (PSII) [Garab, G., et al. (2002) Biochemistry 41, 15121-15129]. In this paper, differential scanning calorimetry (DSC) and circular dichroism (CD) spectroscopy on thylakoid membranes of barley and pea are used to correlate the thermo-optically inducible structural changes with well-discernible calorimetric transitions. The thylakoid membranes exhibited six major DSC bands, with maxima between about 43 and 87 degrees C. The heat sorption curves were analyzed both by mathematical deconvolution of the overall endotherm and by a successive annealing procedure; these yielded similar thermodynamic parameters, transition temperature and calorimetric enthalpy. A systematic comparison of the DSC and CD data on samples with different levels of complexity revealed that the heat-induced disassembly of chirally organized macrodomains contributes profoundly to the first endothermic event, a weak and broad DSC band between 43 and 48 degrees C. Similarly to the main macrodomain-associated CD signals, this low-enthalpy band could be diminished by prolonged photoinhibitory preillumination, the extent of which depended on the temperature of preillumination.
By means of nondenaturing, "green" gel electrophoresis and CD fingerprinting, it is shown that the second main endotherm, around 60 degrees C, originates to a large extent from the monomerization of LHCII trimers. The main DSC band, around 70 degrees C, which exhibits the highest enthalpy change, and another band around 75-77 degrees C relate to the dismantling of LHCII and other pigment-protein complexes, which under physiologically relevant conditions cannot be induced by light. The currently available data suggest the following sequence of thermo-optically inducible changes: (i) unstacking of membranes, followed by (ii) lateral disassembly of the chiral macrodomains and (iii) monomerization of LHCII trimers. We propose that thermo-optical structural reorganizations provide a structural flexibility that is proportional to the intensity of the excess excitation, while, owing to their localized nature, the structural stability of the system can be retained.
Obaisi, Noor Aminah; Galang-Boquiren, Maria Therese S; Evans, Carla A; Tsay, Tzong Guang Peter; Viana, Grace; Berzins, David; Megremis, Spiro
2016-07-01
The purpose of this study was to investigate the suitability of the Bend and Free Recovery (BFR) method as a standard test method to determine the transformation temperatures of heat-activated Ni-Ti orthodontic archwires. This was done by determining the transformation temperatures of two brands of heat-activated Ni-Ti orthodontic archwires using both the BFR method and the standard method of differential scanning calorimetry (DSC). The values obtained from the two methods were compared with each other and to the manufacturer-listed values. Forty heat-activated Ni-Ti archwires from both Rocky Mountain Orthodontics (RMO) and Opal Orthodontics (Opal) were tested using BFR and DSC. Round (0.016 inches) and rectangular (0.019×0.025 inches) archwires from each manufacturer were tested. The austenite start temperatures (As) and austenite finish temperatures (Af) were recorded. For four of the eight test groups, the BFR method resulted in lower standard deviations than the DSC method, and, overall, the average standard deviation for BFR testing was slightly lower than for DSC testing. Statistically significant differences were seen between the transformation temperatures obtained from the BFR and DSC test methods. However, the Af temperatures obtained from the two methods were remarkably similar, with mean differences ranging from 0.0 to 2.1°C: Af Opal round (BFR 26.7°C, DSC 27.6°C) and rectangular (BFR 27.6°C, DSC 28.6°C); Af RMO round (BFR 25.5°C, DSC 25.5°C) and rectangular (BFR 28.0°C, DSC 25.9°C). Significant differences were observed between the manufacturer-listed transformation temperatures and those obtained with BFR and DSC testing for both manufacturers. The results of this study suggest that the Bend and Free Recovery method is suitable as a standard method to evaluate the transformation temperatures of heat-activated Ni-Ti orthodontic archwires. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
A Simple Classroom Simulation of Heat Energy Diffusing through a Metal Bar
ERIC Educational Resources Information Center
Kinsler, Mark; Kinzel, Evelyn
2007-01-01
We present an iterative procedure that does not rely on calculus to model heat flow through a uniform bar of metal and thus avoids the use of the partial differential equation typically needed to describe heat diffusion. The procedure is based on first principles and can be done with students at the blackboard. It results in a plot that…
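The calculus-free update described above can be written as a short loop: each interior cell moves a fixed fraction of the temperature difference toward each neighbor, step after step, until the profile settles. A minimal sketch of that blackboard procedure; the cell count, step fraction, and fixed-end boundary values are illustrative assumptions, not taken from the article:

```python
def simulate_bar(n_cells=10, n_steps=2000, k=0.2, hot_end=100.0, cold_end=0.0):
    """Heat flow through a uniform bar without calculus: every step,
    each interior cell gains a fraction k of the temperature
    difference from each of its two neighbors. The ends are held at
    fixed temperatures. (k must be <= 0.5 for the update to be stable.)"""
    T = [cold_end] * n_cells
    T[0] = hot_end
    for _ in range(n_steps):
        new_T = T[:]
        for i in range(1, n_cells - 1):
            new_T[i] = T[i] + k * (T[i - 1] - T[i]) + k * (T[i + 1] - T[i])
        T = new_T
        T[0], T[-1] = hot_end, cold_end   # ends held at fixed temperatures
    return T
```

After enough steps the temperatures settle onto the straight-line profile between the hot and cold ends, which is the steady state the partial differential equation would predict; the loop reaches it from first principles.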
Simmat, I; Georg, P; Georg, D; Birkfellner, W; Goldner, G; Stock, M
2012-09-01
The goal of the current study was to evaluate the commercially available atlas-based autosegmentation software for clinical use in prostate radiotherapy. The accuracy was benchmarked against interobserver variability. A total of 20 planning computed tomographs (CTs) and 10 cone-beam CTs (CBCTs) were selected for prostate, rectum, and bladder delineation. The images varied with regard to individual (age, body mass index) and setup parameters (contrast agent, rectal balloon, implanted markers). Automatically created contours with ABAS® and iPlan® were compared to an expert's delineation by calculating the Dice similarity coefficient (DSC) and conformity index. Demo-atlases of both systems showed different results for bladder (DSC(ABAS) 0.86 ± 0.17, DSC(iPlan) 0.51 ± 0.30) and prostate (DSC(ABAS) 0.71 ± 0.14, DSC(iPlan) 0.57 ± 0.19). Rectum delineation (DSC(ABAS) 0.78 ± 0.11, DSC(iPlan) 0.84 ± 0.08) demonstrated differences between the systems but better correlation of the automatically drawn volumes. ABAS® was closest to the interobserver benchmark. Autosegmentation with iPlan®, ABAS® and manual segmentation took 0.5, 4 and 15-20 min, respectively. Automatic contouring on CBCT showed high dependence on image quality (DSC bladder 0.54, rectum 0.42, prostate 0.34). For clinical routine, efforts are still necessary either to redesign the algorithms implemented in autosegmentation or to optimize image quality for CBCT to guarantee the required accuracy and time savings for adaptive radiotherapy.
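The Dice similarity coefficient used as the accuracy metric above is a simple overlap ratio, DSC = 2|A ∩ B| / (|A| + |B|), equal to 1.0 for identical contours and 0.0 for disjoint ones. A minimal sketch over voxel index sets (an illustration of the metric, not the evaluation software):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    collections of voxel indices: 2|A intersect B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0   # two empty contours are trivially identical
    return 2 * len(a & b) / (len(a) + len(b))
```

The conformity index reported alongside it is a related overlap measure; both penalize under- and over-segmentation symmetrically.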
7 CFR 1710.114 - TIER, DSC, OTIER and ODSC requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 11 2011-01-01 2011-01-01 false TIER, DSC, OTIER and ODSC requirements. 1710.114... AND GUARANTEES Loan Purposes and Basic Policies § 1710.114 TIER, DSC, OTIER and ODSC requirements. (a) General. Requirements for coverage ratios are set forth in the borrower's mortgage, loan contract, or...
NASA Technical Reports Server (NTRS)
Neveu, M. C.; Stocker, D. P.
1985-01-01
High-pressure differential scanning calorimetry (DSC) was studied as an alternate method for performing high-temperature fuel thermal stability research. The DSC was used to measure the heat of reaction versus temperature of a fuel sample heated at a programmed rate in an oxygen-pressurized cell. Pure hydrocarbons and model fuels were studied using typical DSC operating conditions of 600 psig of oxygen and a temperature range from ambient to 500 C. The DSC oxidation onset temperature was determined and was used to rate the fuels on thermal stability. Kinetic rate constants were determined for the global initial oxidation reaction. Fuel deposit formation was measured, and the high-temperature volatility of some tetralin deposits was studied by thermogravimetric analysis. Gas chromatography and mass spectrometry were used to study the chemical composition of some DSC-stressed fuels.
Hiremath, S B; Muraleedharan, A; Kumar, S; Nagesh, C; Kesavadas, C; Abraham, M; Kapilamoorthy, T R; Thomas, B
2017-04-01
Tumefactive demyelinating lesions with atypical features can mimic high-grade gliomas on conventional imaging sequences. The aim of this study was to assess the role of conventional imaging, DTI metrics (p:q tensor decomposition), and DSC perfusion in differentiating tumefactive demyelinating lesions and high-grade gliomas. Fourteen patients with tumefactive demyelinating lesions and 21 patients with high-grade gliomas underwent brain MR imaging with conventional, DTI, and DSC perfusion imaging. Imaging sequences were assessed for differentiation of the lesions. DTI metrics in the enhancing areas and perilesional hyperintensity were obtained by ROI analysis, and the relative CBV values in enhancing areas were calculated on DSC perfusion imaging. Conventional imaging sequences had a sensitivity of 80.9% and specificity of 57.1% in differentiating high-grade gliomas (P = .049) from tumefactive demyelinating lesions. DTI metrics (p:q tensor decomposition) and DSC perfusion demonstrated a statistically significant difference in the mean values of ADC, the isotropic component of the diffusion tensor, the anisotropic component of the diffusion tensor, the total magnitude of the diffusion tensor, and rCBV among enhancing portions in tumefactive demyelinating lesions and high-grade gliomas (P ≤ .02), with the highest specificity for ADC, the anisotropic component of the diffusion tensor, and relative CBV (92.9%). Mean fractional anisotropy values showed no statistically significant difference between tumefactive demyelinating lesions and high-grade gliomas. The combination of DTI and DSC parameters improved the diagnostic accuracy (area under the curve = 0.901). Addition of a heterogeneous enhancement pattern to DTI and DSC parameters improved it further (area under the curve = 0.966), and the sensitivity increased from 71.4% to 85.7% after the addition of the enhancement pattern.
DTI and DSC perfusion add profoundly to conventional imaging in differentiating tumefactive demyelinating lesions and high-grade gliomas. The combination of DTI metrics and DSC perfusion markedly improved diagnostic accuracy. © 2017 by American Journal of Neuroradiology.
Faroongsarng, Damrongsak
2016-06-01
Although differential scanning calorimetry (DSC) is a non-equilibrium technique, it has been used to gain energetic information that involves phase equilibria. DSC has been widely used to characterize the equilibrium melting parameters of small organic pharmaceutical compounds. An understanding of how DSC measures an equilibrium event could make for a better interpretation of the results. The aim of this mini-review was to provide a theoretical insight into the DSC measurement to obtain the equilibrium thermodynamics of a phase transition, especially the melting process. It was demonstrated that the heat quantity obtained from the DSC thermogram (ΔH) is related to the thermodynamic enthalpy of the phase transition (ΔH_P) via ΔH = ΔH_P/(1 + K⁻¹), where K is the equilibrium constant. In melting, the solid and liquefied phases presumably coexist, resulting in a null Gibbs free energy and hence an infinitely large K; thus, ΔH can be interpreted as ΔH_P. Issues of DSC investigations on the melting behavior of crystalline solids, including polymorphism, degradation impurity due to heating in situ, and eutectic melting, were discussed. In addition, DSC has been a tool for determination of impurity based on an ideal solution of the melt, which is one of the official methods used to establish the reference standard.
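The relation ΔH = ΔH_P/(1 + K⁻¹) quoted in the review makes the limiting behavior easy to check numerically: at coexistence (ΔG = 0, so K is effectively infinite) the measured heat equals the transition enthalpy, while a finite K depresses it. A one-line sketch of the formula (the function name is illustrative):

```python
def observed_enthalpy(dH_p, K):
    """Apparent DSC heat for a transition with thermodynamic enthalpy
    dH_p and equilibrium constant K, per dH = dH_p / (1 + 1/K).
    As K -> infinity (coexisting phases, dG = 0), dH -> dH_p."""
    return dH_p / (1.0 + 1.0 / K)
```

For example, K = 1 yields only half the transition enthalpy, whereas any very large K recovers ΔH_P to within measurement precision, which is why melting endotherms can be read as equilibrium enthalpies.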
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection.
Gürsoy, Doğa; Hong, Young P; He, Kuan; Hujsak, Karl; Yoo, Seunghwan; Chen, Si; Li, Yue; Ge, Mingyuan; Miller, Lisa M; Chu, Yong S; De Andrade, Vincent; He, Kai; Cossairt, Oliver; Katsaggelos, Aggelos K; Jacobsen, Chris
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high-quality three-dimensional images. Our approach is based on a joint estimation of the alignment errors and the object, using an iterative refinement procedure. With simulated data, where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
Exploiting parallel computing with limited program changes using a network of microcomputers
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.
1985-01-01
Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.
An iterative requirements specification procedure for decision support systems.
Brookes, C H
1987-08-01
Requirements specification is a key element in a DSS development project because it not only determines what is to be done, it also drives the evolution process. A procedure for requirements elicitation is described that is based on the decomposition of the DSS design task into a number of functions, subfunctions, and operators. It is postulated that the procedure facilitates the building of a DSS that is complete and integrates MIS, modelling and expert system components. Some examples given are drawn from the health administration field.
47 CFR 80.1087 - Ship radio equipment-Sea area A1.
Code of Federal Regulations, 2013 CFR
2013-10-01
... an INMARSAT ship earth station capable of two way communication. (b) The VHF radio installation... which the ship is normally navigated, operating either: (1) On VHF using DSC; or (2) Through the polar... voyages within coverage of MF coast stations equipped with DSC; or (4) On HF using DSC; or (5) Through the...
Restoration of multichannel microwave radiometric images
NASA Technical Reports Server (NTRS)
Chin, R. T.; Yeh, C. L.; Olson, W. S.
1983-01-01
A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Some of its properties and limitations are also presented. The selection of appropriate constraints was emphasized in a practical application. Multichannel microwave images, each having different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.
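The Gerchberg-Papoulis scheme alternates between two projections: enforce the band limit in the frequency domain, then reimpose the measured samples in the signal domain. A 1-D sketch of that alternating-projection loop (illustrative only; the paper applies the idea to 2-D multichannel microwave imagery):

```python
import numpy as np

def gerchberg_papoulis(known, known_mask, band_mask, n_iter=200):
    """Restore a band-limited signal from incomplete samples by
    alternating projections: (1) project onto the band-limited set by
    zeroing frequencies outside band_mask, (2) project onto the data
    set by reimposing the measured samples where known_mask is True."""
    x = np.where(known_mask, known, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        x = np.fft.ifft(X * band_mask).real   # band-limit projection
        x[known_mask] = known[known_mask]     # data-consistency projection
    return x
```

Because both constraint sets are (affine) subspaces, the iterates converge to their intersection when it is nonempty; noise and inconsistent constraints are what limit the method in practice, as the abstract notes.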
Implementation of a nonlinear concrete cracking algorithm in NASTRAN
NASA Technical Reports Server (NTRS)
Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.
1976-01-01
A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct access file system was used to save results at each load step, allowing a restart within the solution module for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.
40 CFR 230.5 - General procedures to be followed.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Evaluation and Testing (§ 230.61). (j) Identify appropriate and practicable changes to the project plan to... of illustration. The actual process followed may be iterative, with the results of one step leading...
Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve systems of equations and is, in general, more efficient than other iterative methods such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing because it requires updated neighboring values (i.e., from the current iteration) as soon as they are available. Here we report an efficient and exact (assumption-free) method to parallelize the iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to solving linear and nonlinear equations. This approach is implemented in the DelPhi program, a finite-difference Poisson-Boltzmann equation solver used to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
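The serial data dependence the abstract refers to is visible in the plain method itself: each unknown is updated in place, immediately using the neighbors already updated in the current sweep. A minimal serial sketch (not the DelPhi parallelization, just the baseline it starts from):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    """Plain (serial) Gauss-Seidel for A x = b. The inner loop reads
    x[:i] values already updated this sweep, which is exactly the
    in-sweep dependence that makes naive parallelization hard."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```

Jacobi avoids the dependence by reading only the previous sweep's values, which parallelizes trivially but typically converges more slowly; the paper's contribution is keeping Gauss-Seidel's convergence while distributing the updates exactly across CPUs.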
Gahramanov, Seymur; Raslan, Ahmed; Muldoon, Leslie L.; Hamilton, Bronwyn E.; Rooney, William D.; Varallyay, Csanad G.; Njus, Jeffrey M.; Haluska, Marianne; Neuwelt, Edward A.
2010-01-01
Purpose We evaluated dynamic susceptibility-weighted contrast-enhanced magnetic resonance imaging (DSC-MRI) using gadoteridol in comparison to the iron oxide nanoparticle blood pool agent ferumoxytol in patients with glioblastoma multiforme (GBM) who received standard radiochemotherapy (RCT). Methods and Materials Fourteen patients with GBM received standard RCT and underwent 19 MRI sessions that included DSC-MRI acquisitions with gadoteridol on day 1 and ferumoxytol on day 2. Relative cerebral blood volume (rCBV) values were calculated from DSC data obtained with each contrast agent. T1-weighted acquisition after gadoteridol administration was used to identify enhancing regions. Results In 7 MRI sessions of clinically presumptive active tumor, gadoteridol-DSC showed low rCBV in 3 and high rCBV in 4, while ferumoxytol-DSC showed high rCBV in all 7 sessions (p=0.002). After RCT, 7 MRI sessions showed increased gadoteridol contrast enhancement on T1-weighted scans coupled with low rCBV, without significant differences between contrast agents (p=0.9). Based on post-gadoteridol T1-weighted scans, DSC-MRI, and clinical presentation, four patterns of response to RCT were observed: 1) regression, 2) pseudoprogression, 3) true progression, and 4) mixed response. Conclusion We conclude that DSC-MRI with a blood-pool agent such as ferumoxytol may provide a better monitor of tumor rCBV than DSC-MRI with gadoteridol. Lesions demonstrating increased enhancement on T1-weighted MRI coupled with low ferumoxytol rCBV are likely exhibiting pseudoprogression, while high rCBV with ferumoxytol is a better marker than gadoteridol for determining active tumor. These interesting pilot observations suggest that ferumoxytol may differentiate tumor progression from pseudoprogression, and warrant further investigation. PMID:20395065
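The rCBV values discussed above are conventionally obtained by converting the DSC signal drop to a relaxation-rate change ΔR2*(t) and integrating over the bolus passage. A minimal sketch of that standard conversion (not the authors' exact pipeline; the echo time and signal curve below are illustrative):

```python
import numpy as np

def relative_cbv(signal, s0, te, dt):
    """Convert a DSC-MRI signal-time curve into dR2*(t) and integrate.

    signal : measured signal during the bolus passage (1-D array)
    s0     : pre-bolus baseline signal
    te     : echo time in seconds
    dt     : sampling interval in seconds

    rCBV is proportional to the area under the dR2* curve.
    """
    delta_r2s = -np.log(signal / s0) / te
    return float(np.sum(delta_r2s) * dt)  # simple rectangle-rule area

# Synthetic bolus: the signal dips below baseline and recovers.
t = np.arange(0.0, 60.0, 1.0)
s0 = 100.0
signal = s0 * np.exp(-0.02 * np.exp(-((t - 25.0) / 6.0) ** 2))
rcbv = relative_cbv(signal, s0, te=0.030, dt=1.0)
```

In practice the value is reported relative to a reference region (hence "relative" CBV) and leakage correction may be applied, which this sketch omits.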
Multi-atlas segmentation enables robust multi-contrast MRI spleen segmentation for splenomegaly
NASA Astrophysics Data System (ADS)
Huo, Yuankai; Liu, Jiaqi; Xu, Zhoubing; Harrigan, Robert L.; Assad, Albert; Abramson, Richard G.; Landman, Bennett A.
2017-02-01
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and the wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that integrates multi-atlas segmentation for splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE) approach. To further control the outliers, semi-automated craniocaudal length based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate the different methods. Both automated and semi-automated methods achieved median DSC > 0.9. The outliers were alleviated by the L-SIMPLE (≈1 min of manual effort per scan), which achieved a 0.9713 Pearson correlation with the manual segmentation. The results demonstrated that multi-atlas segmentation is able to achieve accurate spleen segmentation from multi-contrast splenomegaly MRI scans.
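The DSC figures quoted here are Dice similarity coefficients between automatic and manual label volumes, 2|A∩B| / (|A| + |B|). A minimal computation on binary masks (the arrays below are illustrative, not study data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|); 1.0 for two empty masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((10, 10), dtype=bool)
manual = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True    # 36 voxels
manual[3:9, 3:9] = True  # 36 voxels; overlap is 5x5 = 25 voxels
score = dice(auto, manual)
```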
NASA Astrophysics Data System (ADS)
Inakazu, Fumi; Noma, Yusuke; Ogomi, Yuhei; Hayase, Shuzi
2008-09-01
Dye-sensitized solar cells (DSCs) containing a dye-bilayer structure of black dye and NK3705 (3-carboxymethyl-5-[3-(4-sulfobutyl)-2(3H)-benzothiazolylidene]-2-thioxo-4-thiazolidinone, sodium salt) in one TiO2 layer (2-TiO-BD-NK) are reported. The 2-TiO-BD-NK structure was fabricated by staining one TiO2 layer with these two dyes, step by step, under a pressurized CO2 condition. The dye-bilayer structure was observed by using a confocal laser scanning microscope. The short-circuit current (Jsc) and the incident photon-to-current efficiency of the cell (DSC-2-TiO-BD-NK) were almost the sums of those of the DSC stained with black dye only (DSC-1-TiO-BD) and the DSC stained with NK3705 only (DSC-1-TiO-NK).
Thermal decomposition of ammonium perchlorate in the presence of Al(OH)(3)·Cr(OH)(3) nanoparticles.
Zhang, WenJing; Li, Ping; Xu, HongBin; Sun, Randi; Qing, Penghui; Zhang, Yi
2014-03-15
An Al(OH)(3)·Cr(OH)(3) nanoparticle preparation procedure and its catalytic effect and mechanism on the thermal decomposition of ammonium perchlorate (AP) were investigated using transmission electron microscopy (TEM), X-ray diffraction (XRD), thermogravimetric analysis and differential scanning calorimetry (TG-DSC), X-ray photoelectron spectroscopy (XPS), and thermogravimetric analysis coupled with mass spectrometry (TG-MS). In the preparation procedure, TEM, SAED, and FT-IR showed that the Al(OH)(3)·Cr(OH)(3) particles were amorphous, with dimensions in the nanometer size regime, and contained a large amount of surface hydroxyl groups under the controllable preparation conditions. When the Al(OH)(3)·Cr(OH)(3) nanoparticles were used as additives for the thermal decomposition of AP, the TG-DSC results showed that their addition remarkably decreased the onset temperature of AP decomposition from approximately 450°C to 245°C. The FT-IR, RS, and XPS results confirmed that the surface hydroxyl content of the Al(OH)(3)·Cr(OH)(3) nanoparticles decreased from 67.94% to 63.65%, and that the nanoparticles were partially transformed from amorphous to crystalline after being used as additives for the thermal decomposition of AP. This behavior promoted the oxidation of the NH3 from AP to N2O first, as indicated by the TG-MS results, accelerating the AP thermal decomposition. Copyright © 2014 Elsevier B.V. All rights reserved.
Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallawi, Abrar; Farrell, Tom; Diamond, Kevin-Ro
2016-08-15
Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters were measured to indicate the overall prostate and body shape, all obtained from CT images. A brute-force procedure was first performed for a training dataset of 20 patients, using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas subjects' anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established using brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.
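The selection score described above, a weighted sum of differences between target and atlas anatomical measurements, can be sketched as follows. The feature names, weights, and measurements are hypothetical, not the study's actual parameters:

```python
def similarity_score(target, atlas, weights):
    """Weighted sum of absolute differences between anatomical
    measurements; a smaller score means a more similar atlas."""
    return sum(w * abs(target[k] - atlas[k]) for k, w in weights.items())

def select_atlas(target, atlases, weights):
    """Pick the atlas whose measurements best match the target."""
    return min(atlases, key=lambda a: similarity_score(target, a, weights))

# Hypothetical measurements in mm and hand-picked weights.
weights = {"prostate_width": 2.0, "body_width": 1.0}
target = {"prostate_width": 45.0, "body_width": 320.0}
atlases = [
    {"id": "A", "prostate_width": 50.0, "body_width": 330.0},  # score 20
    {"id": "B", "prostate_width": 46.0, "body_width": 360.0},  # score 42
]
best = select_atlas(target, atlases, weights)
```

In the study the weights would be tuned so that the score ranking reproduces, as closely as possible, the DSC ranking found by the brute-force registration experiment.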
ERIC Educational Resources Information Center
Fowell, S. L.; Fewtrell, R.; McLaughlin, P. J.
2008-01-01
Absolute standard setting procedures are recommended for assessment in medical education. Absolute, test-centred standard setting procedures were introduced for written assessments in the Liverpool MBChB in 2001. The modified Angoff and Ebel methods have been used for short answer question-based and extended matching question-based papers,…
Low-authority control synthesis for large space structures
NASA Technical Reports Server (NTRS)
Aubrun, J. N.; Margulies, G.
1982-01-01
The control of vibrations of large space structures by distributed sensors and actuators is studied. A procedure is developed for calculating the feedback loop gains required to achieve specified amounts of damping. For moderate damping (Low Authority Control) the procedure is purely algebraic, but it can be applied iteratively when larger amounts of damping are required and is generalized for arbitrary time invariant systems.
Zhang, Xiao C; Bermudez, Ana M; Reddy, Pranav M; Sarpatwari, Ravi R; Chheng, Darin B; Mezoian, Taylor J; Schwartz, Victoria R; Simmons, Quinneil J; Jay, Gregory D; Kobayashi, Leo
2017-03-01
A stable and readily accessible work surface for bedside medical procedures represents a valuable tool for acute care providers. In emergency department (ED) settings, the design and implementation of traditional Mayo stands and related surface devices often limit their availability, portability, and usability, which can lead to suboptimal clinical practice conditions that may affect the safe and effective performance of medical procedures and delivery of patient care. We designed and built a novel, open-source, portable, bedside procedural surface through an iterative development process with use testing in simulated and live clinical environments. The procedural surface development project was conducted between October 2014 and June 2016 at an academic referral hospital and its affiliated simulation facility. An interdisciplinary team of emergency physicians, mechanical engineers, medical students, and design students sought to construct a prototype bedside procedural surface out of off-the-shelf hardware during a collaborative university course on health care design. After determination of end-user needs and core design requirements, multiple prototypes were fabricated and iteratively modified, with early variants featuring undermattress stabilizing supports or ratcheting clamp mechanisms. Versions 1 through 4 underwent 2 hands-on usability-testing simulation sessions; version 5 was presented at a design critique held jointly by a panel of clinical and industrial design faculty for expert feedback. Responding to select feedback elements over several surface versions, investigators arrived at a near-final prototype design for fabrication and use testing in a live clinical setting. This experimental procedural surface (version 8) was constructed and then deployed for controlled usability testing against the standard Mayo stands in use at the study site ED. 
Clinical providers working in the ED who opted to participate in the study were provided with the prototype surface and just-in-time training on its use when performing bedside procedures. Subjects completed the validated 10-point System Usability Scale postshift for the surface that they had used. The study protocol was approved by the institutional review board. Multiple prototypes and recursive design revisions resulted in a fully functional, portable, and durable bedside procedural surface that featured a stainless steel tray and intuitive hook-and-lock mechanisms for attachment to ED stretcher bed rails. Forty-two control and 40 experimental group subjects participated and completed questionnaires. The median System Usability Scale score (out of 100; higher scores associated with better usability) was 72.5 (interquartile range [IQR] 51.3 to 86.3) for the Mayo stand; the experimental surface was scored at 93.8 (IQR 84.4 to 97.5) for a difference in medians of 17.5 (95% confidence interval 10 to 27.5). Subjects reported several usability challenges with the Mayo stand; the experimental surface was reviewed as easy to use, simple, and functional. In accordance with experimental live environment deployment, questionnaire responses, and end-user suggestions, the project team finalized the design specification for the experimental procedural surface for open dissemination. An iterative, interdisciplinary approach was used to generate, evaluate, revise, and finalize the design specification for a new procedural surface that met all core end-user requirements. The final surface design was evaluated favorably on a validated usability tool against Mayo stands when use tested in simulated and live clinical settings. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Noordin, Mohamed I; Chung, L Y
2004-01-01
This study adopts Differential Scanning Calorimetry (DSC) to analyze the thermal properties of samples (2.5-4.0 mg) from the tip, middle, and base sections of individual paracetamol suppositories, which were sampled carefully using a stainless steel scalpel. The contents of paracetamol present in the samples obtained from these sections were determined from the enthalpies of fusion of paracetamol and expressed as % w/w paracetamol to allow comparison of the amount of paracetamol found in each section. The tip, middle, and base sections contained 10.1+/-0.2%, 10.1+/-0.2%, and 10.3+/-0.2% w/w paracetamol, and are statistically similar (one-way ANOVA; p>0.05). This indicates that the preparation technique adopted produces high quality suppositories in terms of content uniformity. The contents of paracetamol in the 120-mg paracetamol suppositories determined by DSC and UV spectrophotometry were statistically equivalent (Student's t-test; p>0.05), 120.8+/-2.6 mg and 120.8+/-1.5 mg, respectively, making DSC a clear alternative method for the measurement of the content of drug in suppositories. The main advantages of the method are that samples of only 2.5-4.0 mg are required and the procedure does not require an extraction process, which allows the analysis to be completed rapidly. In addition, it is highly sensitive and reproducible, with the lower detection limit at 4.0% w/w paracetamol, which is about 2.5 times lower than the content of paracetamol (10% w/w) present in our 120-mg paracetamol suppositories and commercial paracetamol suppositories, which contained about 125 mg paracetamol. Therefore, this method is particularly suited for determination of content uniformity in individual suppositories in quality control (QC) and in process quality control (PQC).
A new design approach to innovative spectrometers. Case study: TROPOLITE
NASA Astrophysics Data System (ADS)
Volatier, Jean-Baptiste; Baümer, Stefan; Kruizinga, Bob; Vink, Rob
2014-05-01
Designing a novel optical system is a nested iterative process. The optimization loop, from a starting point to a final system, is already mostly automated. However, this loop is part of a wider loop which is not. This wider loop starts with an optical specification and ends with a manufacturability assessment. When designing a new spectrometer with emphasis on weight and cost, numerous iterations between the optical and mechanical designers are inevitable. The optical designer must then be able to reliably produce optical designs based on new input gained from multidisciplinary studies. This paper presents a procedure that can automatically generate new starting points based on any kind of input or new constraint that might arise. These starting points can then be handed over to a generic optimization routine, making the design tasks extremely efficient. The optical designer's job is then not to design optical systems, but to meta-design a procedure that produces optical systems, paving the way for system-level optimization. We present here this procedure and its application to the design of TROPOLITE, a lightweight push-broom imaging spectrometer.
Optimization applications in aircraft engine design and test
NASA Technical Reports Server (NTRS)
Pratt, T. K.
1984-01-01
Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable, since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.
Two-dimensional imaging of two types of radicals by the CW-EPR method
NASA Astrophysics Data System (ADS)
Czechowski, Tomasz; Krzyminiewski, Ryszard; Jurga, Jan; Chlewicki, Wojciech
2008-01-01
The CW-EPR method of image reconstruction is based on sample rotation in a magnetic field with a constant gradient (50 G/cm). In order to obtain a projection (radical density distribution) along a given direction, the EPR spectra are recorded with and without the gradient. Deconvolution then gives the distribution of the spin density. Projections at 36 different angles give the information necessary for reconstruction of the radical distribution. The problem becomes more complex when there are at least two types of radicals in the sample, because the deconvolution procedure alone does not give satisfactory results. We propose a method to calculate the projections for each radical, based on iterative procedures. The images of the density distribution for each radical obtained by our procedure have proved that the method of deconvolution, in combination with iterative fitting, provides correct results. The test was performed on a sample of the polymer PPS Br 111 (p-phenylene sulphide) with glass fibres and minerals. The results indicated a heterogeneous distribution of radicals in the sample volume. The images obtained were in agreement with the known shape of the sample.
NASA Technical Reports Server (NTRS)
Chang, S. C.
1984-01-01
Generally, fast direct solvers are not directly applicable to a nonseparable elliptic partial differential equation. This limitation, however, is circumvented by a semi-direct procedure, i.e., an iterative procedure using fast direct solvers. An efficient semi-direct procedure which is easy to implement and applicable to a variety of boundary conditions is presented. The current procedure also possesses other highly desirable properties, i.e.: (1) the convergence rate does not decrease with an increase of grid cell aspect ratio, and (2) the convergence rate is estimated using the coefficients of the partial differential equation being solved.
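A schematic of such a semi-direct iteration: split the operator L = S + N into a separable part S (which a fast direct solver can invert) and a nonseparable remainder N, then iterate S u_{k+1} = f − N u_k until convergence. The 1-D sketch below substitutes a dense factorization for the fast solver, and its test operators are illustrative, not from the paper:

```python
import numpy as np

def semi_direct_solve(S, N, f, tol=1e-12, max_iter=500):
    """Solve (S + N) u = f by the semi-direct iteration
    S u_{k+1} = f - N u_k.  In a real PDE code, S would be a
    separable operator inverted by a fast direct solver; a dense
    solve stands in here purely for illustration."""
    u = np.zeros_like(f)
    for _ in range(max_iter):
        u_new = np.linalg.solve(S, f - N @ u)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# "Separable" part: shifted 1-D Laplacian; "nonseparable" remainder:
# a small variable-coefficient diagonal perturbation.
n = 50
S = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
N = np.diag(0.1 * np.sin(np.linspace(0.0, np.pi, n)))
f = np.ones(n)
u = semi_direct_solve(S, N, f)
```

The iteration converges when the spectral radius of S⁻¹N is below one, i.e., when the remainder is small relative to the separable part, as it is in this example.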
D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.
Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B
2018-01-01
Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose the Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of the classical DSC by employing the decoding delay concept, which enables the use of the maximally correlated portion of sensor samples during the event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).
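The core DSC idea the scheme builds on, sending one uncompressed reference and coding correlated neighbors against it, can be illustrated with simple differential quantization. The readings, step size, and cluster layout below are hypothetical, and this sketch omits D-DSC's decoding-delay and rate-adaptation machinery:

```python
def encode_member(sample, side_info, step=0.5):
    """A cluster member encodes only its quantized offset from the
    side information (the clusterhead's uncompressed sample), which
    takes far fewer bits than the raw reading."""
    return int(round((sample - side_info) / step))

def decode_member(code, side_info, step=0.5):
    """The sink reconstructs the member's reading from the small
    integer code plus the clusterhead's side information."""
    return side_info + code * step

side_info = 20.3               # clusterhead's raw (uncompressed) reading
members = [20.1, 20.6, 19.9]   # spatially correlated member readings
codes = [encode_member(s, side_info) for s in members]
recon = [decode_member(c, side_info) for c in codes]
```

Because the members are spatially correlated with the clusterhead, their codes stay near zero, and reconstruction error is bounded by half the quantization step.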
Azharshekoufeh, Leila; Shokri, Javad; Barzegar-Jalali, Mohammad; Javadzadeh, Yousef
2017-01-01
Introduction: The potential of combining liquisolid and co-grinding technologies (the liquiground technique) was investigated to improve the dissolution rate of a water-insoluble agent (glibenclamide) with formulation-dependent bioavailability. Methods: To this end, different formulations of liquisolid tablets with a wide variety of non-volatile solvents, containing varied drug:solvent ratios and different carriers, were prepared, and their release profiles were evaluated. Furthermore, the effect of size reduction by ball milling on the dissolution behavior of glibenclamide from liquisolid tablets was investigated. Any interaction between the drug and the excipients, or crystallinity changes during the formulation procedure, was also examined using X-ray diffraction (XRD) and differential scanning calorimetry (DSC). Results: The present study revealed that the classic liquisolid technique did not significantly affect the drug dissolution profile compared to conventional tablets. Size reduction obtained by co-grinding of the liquid medication was more effective than the liquisolid technique alone in enhancing the dissolution rate of glibenclamide. The XRD and DSC data displayed no complex formation or crystallinity changes in either formulation. Conclusion: An enhanced dissolution rate of glibenclamide is achievable through the combination of liquisolid and co-grinding technologies.
Computer method for identification of boiler transfer functions
NASA Technical Reports Server (NTRS)
Miles, J. H.
1972-01-01
An iterative computer-aided procedure was developed that provides for the identification of boiler transfer functions using frequency response data. The method uses the frequency response data to obtain satisfactory transfer functions for both high and low vapor exit quality data.
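The underlying task, fitting a rational transfer function to measured frequency-response data, can be illustrated with a linearized least-squares fit of a first-order model G(jω) = K/(1 + jωτ). This is a generic textbook technique (Levy-style linearization) on synthetic data, not the boiler-specific procedure of the report:

```python
import numpy as np

def fit_first_order(w, g):
    """Fit G(jw) = K / (1 + j*w*tau) to complex response data g.
    Multiplying through by the denominator gives the linear relation
    K - tau*(j*w*g) = g, solved in least squares with real and
    imaginary parts stacked so the unknowns K, tau stay real."""
    A = np.column_stack([np.ones_like(w, dtype=complex), -1j * w * g])
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([g.real, g.imag])
    (K, tau), *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    return K, tau

# Synthetic frequency-response data from a known system (K=2, tau=0.5).
w = np.logspace(-1, 2, 40)
g = 2.0 / (1 + 1j * w * 0.5)
K, tau = fit_first_order(w, g)
```

With noiseless data the fit recovers the true parameters; with measured data the same linear solve gives a starting estimate that an iterative refinement (as in the report's procedure) can improve.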
Modeling regional freight flow assignment through intermodal terminals
DOT National Transportation Integrated Search
2005-03-01
An analytical model is developed to assign regional freight across a multimodal highway and railway network using geographic information systems. As part of the regional planning process, the model is an iterative procedure that assigns multimodal fr...
Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, B being also positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of the relevant matrices, and the associated program, written in FORTRAN V for the JPL UNIVAC 1108 computer, proves to be significantly more economical than similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
Systems and methods for predicting materials properties
Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano
2007-11-06
Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.
[Tissular expansion in giant congenital nevi treatment].
Nguyen Van Nuoi, V; Francois-Fiquet, C; Diner, P; Sergent, B; Zazurca, F; Franchi, G; Buis, J; Vazquez, M-P; Picard, A; Kadlub, N
2014-08-01
Surgical management of giant melanotic naevi remains a surgical challenge. Tissue expansion provides tissue of the same quality for the repair of defects. The aim of this study was to review tissue expansion for giant melanotic naevi. We conducted a retrospective study from 2000 to 2012. All paediatric patients who underwent tissue expansion for giant congenital naevi were included. Epidemiological data, surgical procedures, complication rates, and results were analysed. Thirty-three patients were included; they underwent 61 procedures with 79 tissue-expansion prostheses. Previous surgery, mostly simple excision, had been performed before tissue expansion. Complete naevus excision was achieved in 63.3% of cases. Complications occurred in 45% of cases, although 50% of these were minor. Repeated surgery increased the complication rate. Tissue expansion is a valuable option for giant congenital naevi. However, the complication rate remains high, especially when repeated surgery is needed. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Efficient fractal-based mutation in evolutionary algorithms from iterated function systems
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.
2018-03-01
In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure consists of considering a set of IFSs able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers drawn from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems, comparing the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
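A minimal version of the idea: iterate a contractive IFS by the chaos game to generate a fractal point in 2-D, then use it as a mutation offset in place of a Gaussian draw. The classical Sierpinski-triangle maps and the scale factor below are stand-ins, not the paper's specific systems:

```python
import random

# Classical Sierpinski-triangle IFS: three contractions of the unit square.
MAPS = [
    lambda x, y: (0.5 * x, 0.5 * y),
    lambda x, y: (0.5 * x + 0.5, 0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def ifs_point(n_iter=20, rng=random):
    """Chaos-game iteration: apply randomly chosen IFS maps; the
    orbit converges onto the fractal attractor inside [0, 1]^2."""
    x, y = rng.random(), rng.random()
    for _ in range(n_iter):
        x, y = rng.choice(MAPS)(x, y)
    return x, y

def ifs_mutate(individual, scale=0.1, rng=random):
    """Mutate a 2-D individual with an IFS-generated offset, centred
    so that offsets can be negative as well as positive."""
    px, py = ifs_point(rng=rng)
    return (individual[0] + scale * (px - 0.5),
            individual[1] + scale * (py - 0.5))

random.seed(0)
child = ifs_mutate((1.0, 2.0))
```

The mutation offsets inherit the non-uniform, self-similar structure of the attractor, which is the property the paper exploits in place of Gaussian or Cauchy perturbations.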
Communication: A difference density picture for the self-consistent field ansatz.
Parrish, Robert M; Liu, Fang; Martínez, Todd J
2016-04-07
We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this "difference self-consistent field (dSCF)" picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TeraChem SCF implementation.
Iterative-Transform Phase Retrieval Using Adaptive Diversity
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A phase-diverse iterative-transform phase-retrieval algorithm enables high-spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high-spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of being able to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher-spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure).
An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. The recovered aberration varies as the amount of diversity defocus in each image is updated; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map. However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps.
During recovery, as more of the aberration is transferred to the diversity function over successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated into the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate comes to resemble that of a reference flat.
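At the heart of the inner loop is a classical iterative-transform (Gerchberg-Saxton/error-reduction) step. A minimal 1-D sketch follows, in which a toy DFT stands in for the optical propagator and a support constraint stands in for the defocus-diversity images; all names and sizes are illustrative assumptions, not the GSC-14899-1 code:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                for m in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * m / n)
                for k in range(n)) / n for m in range(n)]

def fourier_error(x, target_mag):
    """RMS-style mismatch between |DFT(x)| and the measured magnitudes."""
    return sum((abs(X) - t) ** 2 for X, t in zip(dft(x), target_mag)) ** 0.5

def error_reduction(target_mag, support, n_iter=50):
    """Alternate between imposing the measured Fourier magnitudes and the
    object-domain constraints (support, non-negativity)."""
    x = [1.0 if s else 0.0 for s in support]      # crude initial object
    for _ in range(n_iter):
        X = dft(x)
        X = [t * (Xk / abs(Xk)) if abs(Xk) > 1e-12 else t
             for t, Xk in zip(target_mag, X)]     # keep phase, fix magnitude
        x = [max(z.real, 0.0) if s else 0.0
             for z, s in zip(idft(X), support)]   # object-domain projection
    return x

true_obj = [1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0]
support = [v > 0 for v in true_obj]
target = [abs(X) for X in dft(true_obj)]
```

In the full algorithm each diversity image contributes its own such estimate, and the weighted average plus the adaptive defocus update wrap around this kernel.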
Sewell, Holly L.; Kaster, Anne-Kristin
2017-01-01
The deep marine subsurface is one of the largest unexplored biospheres on Earth and is widely inhabited by members of the phylum Chloroflexi. In this report, we investigated genomes of single cells obtained from deep-sea sediments of the Peruvian Margin, which are enriched in such Chloroflexi. 16S rRNA gene sequence analysis placed two of these single-cell-derived genomes (DscP3 and Dsc4) in a clade of subphylum I Chloroflexi which were previously recovered from deep-sea sediment in the Okinawa Trough and a third (DscP2-2) as a member of the previously reported DscP2 population from Peruvian Margin site 1230. The presence of genes encoding enzymes of a complete Wood-Ljungdahl pathway, glycolysis/gluconeogenesis, a Rhodobacter nitrogen fixation (Rnf) complex, glycosyltransferases, and formate dehydrogenases in the single-cell genomes of DscP3 and Dsc4 and the presence of an NADH-dependent reduced ferredoxin:NADP oxidoreductase (Nfn) and Rnf in the genome of DscP2-2 imply a homoacetogenic lifestyle of these abundant marine Chloroflexi. We also report here the first complete pathway for anaerobic benzoate oxidation to acetyl coenzyme A (CoA) in the phylum Chloroflexi (DscP3 and Dsc4), including a class I benzoyl-CoA reductase. Of remarkable evolutionary significance, we discovered a gene encoding a formate dehydrogenase (FdnI) with reciprocal closest identity to the formate dehydrogenase-like protein (complex iron-sulfur molybdoenzyme [CISM], DET0187) of terrestrial Dehalococcoides/Dehalogenimonas spp. This formate dehydrogenase-like protein has been shown to lack formate dehydrogenase activity in Dehalococcoides/Dehalogenimonas spp. and is instead hypothesized to couple HupL hydrogenase to a reductive dehalogenase in the catabolic reductive dehalogenation pathway. This finding of a close functional homologue provides an important missing link for understanding the origin and the metabolic core of terrestrial Dehalococcoides/Dehalogenimonas spp.
and of reductive dehalogenation, as well as the biology of abundant deep-sea Chloroflexi. PMID:29259088
"Sticky electrons" transport and interfacial transfer of electrons in the dye-sensitized solar cell.
Peter, Laurence
2009-11-17
Dye-sensitized solar cells (DSCs, also known as Gratzel cells) mimic the photosynthetic process by using a sensitizer dye to harvest light energy to generate electrical power. Several functional features of these photochemical devices are unusual, and DSC research offers a rewarding arena in which to test new ideas, new materials, and new methodologies. Indeed, one of the most attractive chemical features of the DSC is that the basic concept can be used to construct a range of devices, replacing individual components with alternative materials. Despite two decades of increasing research activity, however, many aspects of the behavior of electrons in the DSC remain puzzling. In this Account, we highlight current understanding of the processes involved in the functioning of the DSC, with particular emphasis on what happens to the electrons in the mesoporous film following the injection step. The collection of photoinjected electrons appears to involve a random walk process in which electrons move through the network of interconnected titanium dioxide nanoparticles while undergoing frequent trapping and detrapping. During their passage to the cell contact, electrons may be lost by transfer to tri-iodide species in the redox electrolyte that permeates the mesoporous film. Competition between electron collection and back electron transfer determines the performance of a DSC: ideally, all injected electrons should be collected without loss. This Account then goes on to survey recent experimental and theoretical progress in the field, placing particular emphasis on issues that need to be resolved before we can gain a clear picture of how the DSC works. Several important questions about the behavior of "sticky" electrons, those that undergo multiple trapping and detrapping, in the DSC remain unanswered. The most fundamental of these concerns is the nature of the electron traps that appear to dominate the time-dependent photocurrent and photovoltage response of DSCs. 
The origin of the nonideality factor in the relationship between the intensity and the DSC photovoltage is also unclear, as is the discrepancy in electron diffusion length values determined by steady-state and non-steady-state methods. With these unanswered questions, DSC research is likely to remain an active and fruitful area for some years to come.
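The trap-limited random-walk picture described above can be caricatured in a few lines of Monte Carlo: electrons hop on a chain of nanoparticles, are frequently held in traps, and are lost by back transfer to tri-iodide with a small probability per step. The geometry, rates, and names below are illustrative assumptions, not a quantitative DSC model:

```python
import random

def collection_efficiency(n_electrons=500, n_sites=20, inject=10,
                          p_trap=0.5, p_rec=0.001, max_steps=20000, rng=random):
    """Fraction of injected electrons reaching the collecting contact (site 0)."""
    collected = 0
    for _ in range(n_electrons):
        site = inject
        for _ in range(max_steps):
            if rng.random() < p_rec:        # back transfer to tri-iodide: lost
                break
            if rng.random() < p_trap:       # trapped this step: no displacement
                continue
            site += rng.choice((-1, 1))     # free hop to a neighbouring particle
            site = min(site, n_sites - 1)   # reflecting outer boundary
            if site == 0:                   # reached the contact: collected
                collected += 1
                break
    return collected / n_electrons
```

Lowering the back-transfer probability (a longer electron lifetime) raises the collection efficiency, which is the competition the abstract describes between electron collection and back electron transfer.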
NASA Technical Reports Server (NTRS)
Harris, J. E.; Blanchard, D. K.
1982-01-01
A numerical algorithm and computer program are presented for solving the laminar, transitional, or turbulent two-dimensional or axisymmetric compressible boundary-layer equations for perfect-gas flows. The governing equations are solved by an iterative three-point implicit finite-difference procedure. The software, program VGBLP, is a modification of the approaches presented in NASA TR R-368 and NASA TM X-2458. The major modifications are: (1) replacement of the fourth-order Runge-Kutta integration technique with a finite-difference procedure for numerically solving the equations required to initiate the parabolic marching procedure; (2) introduction of the Blottner variable-grid scheme; (3) implementation of an iteration scheme allowing the coupled system of equations to be converged to a specified accuracy level; and (4) inclusion of an iteration scheme for variable-entropy calculations. These modifications yield a software package with high computational efficiency and flexibility. Turbulence-closure options include either two-layer eddy-viscosity or mixing-length models. Eddy conductivity is modeled as a function of eddy viscosity through a static turbulent Prandtl number formulation. Several options are provided for specifying the static turbulent Prandtl number. The transitional boundary layer is treated through a streamwise intermittency function which modifies the turbulence-closure model. This function is based on the probability distribution of turbulent spots and ranges from zero for laminar flow to unity for fully turbulent flow. Several test cases are presented as guides for potential users of the software.
Ait Kaci Azzou, S; Larribe, F; Froda, S
2016-10-01
In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach, the skywis plot, for estimating the demographic history of a sample of DNA sequences. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations, where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.
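The waiting-time generation step mentioned above, sampling from a non-homogeneous process whose piecewise-constant rate comes from the current population-size estimate, can be done by inverting the integrated intensity. The sketch below is a generic illustration of that inversion (in the coalescent setting the rate on each interval would be C(k,2)/N(t)); the function name is an assumption:

```python
import math

def sample_waiting_time(breakpoints, rates, u):
    """Invert the integrated intensity of a piecewise-constant rate.

    breakpoints: [t1, t2, ...] interval end points (last interval open-ended)
    rates:       rate on [0, t1), [t1, t2), ..., [t_last, inf)
    u:           a Uniform(0, 1) draw
    """
    target = -math.log(u)                 # Exponential(1) quantile
    t_prev, acc = 0.0, 0.0
    for t_next, lam in zip(breakpoints + [float("inf")], rates):
        seg = (t_next - t_prev) * lam     # intensity mass on this interval
        if acc + seg >= target:
            return t_prev + (target - acc) / lam
        acc += seg
        t_prev = t_next
    raise ValueError("piecewise rate integrates to a finite mass")
```

With a single constant rate the inversion reduces to the usual exponential sampler, which gives a quick sanity check of the implementation.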
An efficient strongly coupled immersed boundary method for deforming bodies
NASA Astrophysics Data System (ADS)
Goza, Andres; Colonius, Tim
2016-11-01
Immersed boundary methods treat the fluid and immersed solid with separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.
NASA Astrophysics Data System (ADS)
Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao
2018-06-01
In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1976-01-01
An iterative method for numerically solving the time-independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C0-cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant achievement of economy in iterative convergence rate over finite element and finite difference models which employ the customary time-dependent equations and asymptotic time marching procedure to steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time marching finite element and finite difference solution techniques.
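The Gauss-Seidel principle applied to a nonlinear algebraic system simply updates each unknown from its own equation while immediately reusing the freshest values of the others. A toy two-unknown analogue (not the Navier-Stokes system itself; the equations are made up for illustration):

```python
import math

def nonlinear_gauss_seidel(n_iter=100):
    """Nonlinear Gauss-Seidel: each unknown is updated in turn from its own
    equation, immediately using the freshest values of the others."""
    x = y = 0.0
    for _ in range(n_iter):
        x = 0.3 * math.cos(y) + 0.1   # sweep: solve equation 1 for x
        y = 0.3 * math.sin(x) + 0.2   # then equation 2 for y, using the new x
    return x, y
```

Because each fixed-point map here is a contraction, the sweep converges to the solution of the coupled pair without forming or factoring a global Jacobian, which is the economy the block Gauss-Seidel approach exploits.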
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection
Gürsoy, Doğa; Hong, Young P.; He, Kuan; ...
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high-quality three-dimensional images. Our approach is based on a joint estimation of the alignment errors and the object, using an iterative refinement procedure. With simulated data, where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
Single-shot dual-wavelength in-line and off-axis hybrid digital holography
NASA Astrophysics Data System (ADS)
Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie
2018-02-01
We propose an in-line and off-axis hybrid holographic real-time imaging technique. The in-line and off-axis digital holograms are generated simultaneously by two lasers with different wavelengths, and they are recorded using a color camera with a single shot. The reconstruction is carried out using an iterative algorithm in which the initial input is designed to include the intensity of the in-line hologram and the approximate phase distribution obtained from the off-axis hologram. In this way, the complex field in the object plane output by the iterative procedure produces higher-quality amplitude and phase images than traditional iterative phase retrieval. The performance of the technique has been demonstrated by acquiring the amplitude and phase images of a green lacewing's wing and a living moon jellyfish.
Finite element procedures for coupled linear analysis of heat transfer, fluid and solid mechanics
NASA Technical Reports Server (NTRS)
Sutjahjo, Edhi; Chamis, Christos C.
1993-01-01
Coupled finite element formulations for fluid mechanics, heat transfer, and solid mechanics are derived from the conservation laws for energy, mass, and momentum. To model the physics of interactions among the participating disciplines, the linearized equations are coupled by combining domain and boundary coupling procedures. An iterative numerical solution strategy with partitioned temporal discretization is presented to solve the coupled equations.
Computer-Aided Design Of Turbine Blades And Vanes
NASA Technical Reports Server (NTRS)
Hsu, Wayne Q.
1988-01-01
Quasi-three-dimensional method for determining aerothermodynamic configuration of turbine uses computer-interactive analysis and design and computer-interactive graphics. Design procedure executed rapidly so designer easily repeats it to arrive at best performance, size, structural integrity, and engine life. Sequence of events in aerothermodynamic analysis and design starts with engine-balance equations and ends with boundary-layer analysis and viscous-flow calculations. Analysis-and-design procedure interactive and iterative throughout.
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012)] and phantom [J. Biomed. Opt. 19, 077002 (2014)] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method: (1) high computational intensity and (2) convergence to local minima. The parameter estimation procedure contains a novel initial estimation step to obtain an initial guess, which is used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table is used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
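The two-step strategy, a coarse lookup-table search for an initial guess followed by iterative fitting, can be illustrated with a one-parameter toy "spectrum". The exponential model, grid, and names below are illustrative assumptions, not the paper's two-layer tissue model:

```python
import math

DEPTHS = (0.1, 0.2, 0.4, 0.8)   # hypothetical effective path lengths

def model(mu):
    """Toy one-parameter 'reflectance spectrum': R_i = exp(-mu * d_i)."""
    return [math.exp(-mu * d) for d in DEPTHS]

def fit_two_step(measured, n_newton=20):
    # Step 1: lookup-table style coarse search supplies the initial guess.
    grid = [0.5 * k for k in range(1, 21)]                  # mu in 0.5 .. 10
    sse = lambda mu: sum((m - r) ** 2 for m, r in zip(measured, model(mu)))
    mu = min(grid, key=sse)
    # Step 2: iterative refinement -- 1-D Gauss-Newton on the residuals.
    for _ in range(n_newton):
        r = [ri - mi for ri, mi in zip(model(mu), measured)]
        j = [-d * math.exp(-mu * d) for d in DEPTHS]        # d r_i / d mu
        mu -= sum(ji * ri for ji, ri in zip(j, r)) / sum(ji * ji for ji in j)
    return mu
```

The coarse search keeps the refinement step inside the basin of the global minimum, which is how the hybrid procedure avoids the local-minimum traps of a fit started from an arbitrary point.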
Thermodynamics of micellization from heat-capacity measurements.
Šarac, Bojan; Bešter-Rogač, Marija; Lah, Jurij
2014-06-23
Differential scanning calorimetry (DSC), the most important technique for studying the thermodynamics of structural transitions of biological macromolecules, is seldom used in quantitative thermodynamic studies of surfactant micellization/demicellization. The reason for this could be ascribed to an insufficient understanding of the temperature dependence of the heat capacity of surfactant solutions (DSC data) in terms of thermodynamics, which leads to problems with the design of experiments and interpretation of the output signals. We address these issues by careful design of DSC experiments performed with solutions of ionic and nonionic surfactants at various surfactant concentrations, and individual and global mass-action model analysis of the obtained DSC data. Our approach leads to reliable thermodynamic parameters of micellization for all types of surfactants, comparable with those obtained by using isothermal titration calorimetry (ITC). In summary, we demonstrate that DSC can be successfully used as an independent method to obtain temperature-dependent thermodynamic parameters for micellization. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Degradation of components in drug formulations: a comparison between HPLC and DSC methods.
Ceschel, G C; Badiello, R; Ronchi, C; Maffei, P
2003-08-08
Information about the stability of drug components and drug formulations is needed to predict the shelf-life of the final products. Studies of the interaction between a drug and its excipients may be carried out by means of accelerated stability tests followed by analytical determination of the active principle (HPLC and other methods) and by means of differential scanning calorimetry (DSC). This research focused on the physico-chemical characterisation of acetylsalicylic acid (ASA) using the DSC method, in order to evaluate its compatibility with some of the most commonly used excipients. The DSC method showed the incompatibility of magnesium stearate with ASA; the HPLC data confirm the reduction of ASA concentration in the presence of magnesium stearate. With the other excipients the characteristic endotherms of the drug were always present, and no or little degradation was observed in the accelerated stability tests. Therefore, the results of the DSC method are comparable and in good agreement with the results obtained with other methods.
DSC of human hair: a tool for claim support or incorrect data analysis?
Popescu, C; Gummer, C
2016-10-01
Differential scanning calorimetry (DSC) data are increasingly used to substantiate product claims of hair repair. Decreasing peak temperatures may indicate structural changes and chemical damage; an increase in the DSC wet-peak temperature is, therefore, often considered proof of hair repair. A detailed understanding of the technique and of hair structure indicates that this may not be a sound approach. Surveying the rich literature on the use of dynamic thermal analysis (DTA) and differential scanning calorimetry (DSC) for the analysis of human hair and the effect of cosmetic treatments, we underline some of the problems of hair structure and data interpretation. To overcome some of the difficulties of data interpretation, we advise that DSC data be supported by other techniques when used for claim substantiation. In this way, one can provide meaningful interpretation of the hair science and robust data for product claims support. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Effect of milling on DSC thermogram of excipient adipic acid.
Ng, Wai Kiong; Kwek, Jin Wang; Yuen, Aaron; Tan, Chin Lee; Tan, Reginald
2010-03-01
The purpose of this research was to investigate why and how mechanical milling results in an unexpected shift in the differential scanning calorimetry (DSC) measured fusion enthalpy (ΔfusH) and melting point (Tm) of adipic acid, a pharmaceutical excipient. Hyper differential scanning calorimetry (hyper-DSC) was used to characterize adipic acid before and after ball-milling. An experimental study was conducted to evaluate previous postulations such as electrostatic charging using the Faraday cage method, crystallinity loss using powder X-ray diffraction (PXRD), thermal annealing using DSC, and impurities removal using thermal gravimetric analysis (TGA) and Karl Fischer titration. DSC thermograms showed that after milling, the values of ΔfusH and Tm were increased by approximately 9% and 5 K, respectively. Previous suggestions of increased electrostatic attraction, change in particle size distribution, and thermal annealing during measurements did not explain the differences. Instead, theoretical analysis and experimental findings suggested that the residual solvent (water) plays a key role. Water entrapped as inclusions inside adipic acid during solution crystallization was partially evaporated by localized heating at the cleaved surfaces during milling. The correlation between the removal of water and the measured melting properties was shown via drying and crystallization experiments. These findings show that milling can reduce residual solvent content and cause a shift in DSC results.
Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2011-01-01
This paper presents a computer tool called DSC (Simulation based Controllers Design) that enables an easy design of control systems and strategies applied to wastewater treatment plants (WWTPs). Although the control systems are developed and evaluated by simulation, this tool aims to facilitate the direct implementation of the designed control system on the PC of the full-scale WWTP. The designed control system can be programmed in a dedicated control application and can be connected to either the simulation software or the SCADA of the plant. To this end, the developed DSC incorporates an OPC (OLE for Process Control) server, which provides an open-standard communication protocol for different industrial process applications. The potential capabilities of the DSC tool are illustrated through the example of a full-scale application. An aeration control system applied to a nutrient-removing WWTP was designed, tuned and evaluated with the DSC tool before its implementation in the full-scale plant. The control parameters obtained by simulation were suitable for the full-scale plant, with only a few modifications needed to improve the control performance. With the DSC tool, control system performance can be easily evaluated by simulation. Once developed and tuned by simulation, the control systems can be directly applied to the full-scale WWTP.
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.
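The GEBE idea of statically arranging elements into groups with no inter-element coupling amounts to a greedy coloring of the element connectivity: two elements sharing a node may not be placed in the same group, so all elements within a group can be processed in parallel. A minimal sketch (the mesh and names are illustrative assumptions):

```python
def group_elements(elements):
    """Greedy GEBE-style grouping: place each element in the first group
    containing no element that shares a node with it."""
    groups, group_nodes = [], []     # element ids and the node sets they cover
    for eid, nodes in enumerate(elements):
        ns = set(nodes)
        for g, used in zip(groups, group_nodes):
            if not (ns & used):      # no inter-element coupling within group
                g.append(eid)
                used |= ns
                break
        else:                        # conflicts with every group: open a new one
            groups.append([eid])
            group_nodes.append(set(ns))
    return groups

# A 1-D mesh of four two-node elements: (0,1), (1,2), (2,3), (3,4)
mesh = [(0, 1), (1, 2), (2, 3), (3, 4)]
```

For this chain mesh the odd and even elements land in separate groups, so each group's element-level operations can run concurrently without write conflicts on shared nodes.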
NASA Astrophysics Data System (ADS)
Quan, Haiyang; Wu, Fan; Hou, Xi
2015-10-01
A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified successive over-relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy. This has been demonstrated with real experimental results.
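The speedup obtained from an optimally chosen relaxation factor can be seen on the standard 1-D Poisson model problem, for which the optimal factor is known in closed form; omega = 1 recovers plain Gauss-Seidel. This is a generic SOR illustration, not the paper's surface-reconstruction system:

```python
import math

def solve_poisson_sor(n=30, omega=1.0, tol=1e-8, max_iter=100000):
    """Solve -u'' = 1 on (0,1) with u(0)=u(1)=0 by SOR on the standard
    three-point stencil; omega = 1 is plain Gauss-Seidel."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)
    for it in range(1, max_iter + 1):
        err = 0.0
        for i in range(1, n + 1):
            gs = 0.5 * (u[i - 1] + u[i + 1] + h * h)   # Gauss-Seidel value
            new = (1 - omega) * u[i] + omega * gs      # over-relaxed update
            err = max(err, abs(new - u[i]))
            u[i] = new
        if err < tol:
            return u, it
    return u, max_iter

# Optimal relaxation factor for this model problem: 2 / (1 + sin(pi * h))
omega_opt = 2.0 / (1.0 + math.sin(math.pi / 31))
```

On this problem the optimally relaxed iteration needs an order of magnitude fewer sweeps than Gauss-Seidel, which is the kind of saving in computational cost the abstract reports.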
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Malek, H.
1978-01-01
A clustering method, CLASSY, was developed which alternates maximum-likelihood iteration with a procedure for splitting, combining, and eliminating the resulting statistics. The method maximizes the fit of a mixture of normal distributions to the observed first through fourth central moments of the data and produces an estimate of the proportions, means, and covariances in this mixture. The mathematical model that is the basis for CLASSY and the actual operation of the algorithm are described. Data comparing the performance of CLASSY and ISOCLS on simulated and actual LACIE data are presented.
NASA Astrophysics Data System (ADS)
Boski, Marcin; Paszke, Wojciech
2015-11-01
This paper deals with the problem of designing an iterative learning control algorithm for discrete linear systems using repetitive process stability theory. The resulting design produces a stabilizing output feedback controller in the time domain and a feedforward controller that guarantees monotonic convergence in the trial-to-trial domain. The results are also extended to limited frequency range design specifications. A new design procedure is introduced in terms of linear matrix inequality (LMI) representations, which guarantee the prescribed performance of the ILC scheme. A simulation example is given to illustrate the theoretical developments.
Cavallin, L; Axelsson, R; Wahlund, L O; Oksengard, A R; Svensson, L; Juhlin, P; Wiberg, M Kristoffersen; Frank, A
2008-12-01
Current diagnosis of Alzheimer disease is made by clinical, neuropsychologic, and neuroimaging assessments. Neuroimaging techniques such as magnetic resonance imaging (MRI) and single-photon emission computed tomography (SPECT) could be valuable in the differential diagnosis of Alzheimer disease, as well as in assessing prognosis. The aim was to compare SPECT and MRI in a cohort of patients examined for suspected dementia, including patients with no objective cognitive impairment (control group), mild cognitive impairment (MCI), and Alzheimer disease (AD). Twenty-four patients, eight with AD, 10 with MCI, and six controls, were investigated with SPECT using (99m)Tc-hexamethylpropyleneamine oxime (HMPAO, Ceretec; GE Healthcare Ltd., Little Chalfont, UK) and dynamic susceptibility contrast magnetic resonance imaging (DSC-MRI) with a contrast-enhancing gadobutrol formula (Gadovist; Bayer Schering Pharma, Berlin, Germany). Voxel-based correlation between coregistered SPECT and DSC-MR images was calculated. Region-of-interest (ROI) analyses were then performed in 24 different brain areas using brain registration and analysis of SPECT studies (BRASS; Nuclear Diagnostics AB, Stockholm, Sweden) on both SPECT and DSC-MRI. Voxel-based correlation between coregistered SPECT and DSC-MR images showed a high correlation, with a mean correlation coefficient of 0.94. ROI analyses of 24 regions showed significant differences between the control group and AD patients in 10 regions using SPECT and in five regions using DSC-MRI. SPECT remains superior to DSC-MRI in differentiating normal from pathological perfusion, and DSC-MRI could not replace SPECT in the diagnosis of patients with Alzheimer disease.
In-vessel tritium retention and removal in ITER
NASA Astrophysics Data System (ADS)
Federici, G.; Anderl, R. A.; Andrew, P.; Brooks, J. N.; Causey, R. A.; Coad, J. P.; Cowgill, D.; Doerner, R. P.; Haasz, A. A.; Janeschitz, G.; Jacob, W.; Longhurst, G. R.; Nygren, R.; Peacock, A.; Pick, M. A.; Philipps, V.; Roth, J.; Skinner, C. H.; Wampler, W. R.
Tritium retention inside the vacuum vessel has emerged as a potentially serious constraint in the operation of the International Thermonuclear Experimental Reactor (ITER). In this paper we review recent tokamak and laboratory data on hydrogen, deuterium and tritium retention for materials and conditions which are of direct relevance to the design of ITER. These data, together with significant advances in understanding the underlying physics, provide the basis for modelling predictions of the tritium inventory in ITER. We present the derivation, and discuss the results, of current predictions both in terms of implantation and codeposition rates, and critically discuss their uncertainties and sensitivity to important design and operation parameters such as the plasma edge conditions, the surface temperature, the presence of mixed materials, etc. These analyses are consistent with recent tokamak findings and show that codeposition of tritium occurs on the divertor surfaces primarily with carbon eroded from a limited area of the divertor near the strike zones. This issue remains an area of serious concern for ITER. The calculated codeposition rates for ITER are relatively high and the in-vessel tritium inventory limit could be reached, under worst-case assumptions, in approximately a week of continuous operation. We discuss the implications of these estimates on the design, operation and safety of ITER and present a strategy for resolving the issues. We conclude that as long as carbon is used in ITER - and more generically in any other next-step experimental fusion facility fuelled with tritium - the efficient control and removal of the codeposited tritium is essential. There is a critical need to develop and test in situ cleaning techniques and procedures that are beyond the current experience of present-day tokamaks.
We review some of the principal methods that are being investigated and tested, in conjunction with the R&D work still required to extrapolate their applicability to ITER. Finally, unresolved issues are identified and recommendations are made on potential R&D avenues for their resolution.
The fractal geometry of Hartree-Fock
NASA Astrophysics Data System (ADS)
Theel, Friethjof; Karamatskou, Antonia; Santra, Robin
2017-12-01
The Hartree-Fock method is an important approximation for the ground-state electronic wave function of atoms and molecules, and its use is widespread in computational chemistry and physics. The Hartree-Fock method is an iterative procedure in which the electronic wave functions of the occupied orbitals are determined. The set of functions found in one step builds the basis for the next iteration step. In this work, we interpret the Hartree-Fock method as a dynamical system, since a dynamical system is an iteration whose steps represent the time development of the system, as encountered in the theory of fractals. The focus is put on the convergence behavior of the dynamical system as a function of a suitable control parameter. In our case, a complex parameter λ controls the strength of the electron-electron interaction. An investigation of the convergence behavior depending on the parameter λ is performed for helium, neon, and argon. We observe fractal structures in the complex λ-plane, which resemble the well-known Mandelbrot set, determine their fractal dimension, and find that with increasing nuclear charge, the fragmentation increases as well.
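The λ-plane convergence study can be mimicked with a toy quadratic map: iterating z → z² + λ from z = 0 and classifying each λ as bounded or escaping yields the Mandelbrot set itself. The sketch below is purely illustrative (it performs no Hartree-Fock calculation); it only shows the structure of a scan over a complex control parameter.

```python
def converges(lam, max_iter=200, escape=2.0):
    """Classify the iteration z_{n+1} = z_n**2 + lam (started at z = 0)
    as bounded (analogous to a convergent self-consistent run) or
    escaping to infinity (analogous to divergence)."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + lam
        if abs(z) > escape:
            return False
    return True

def convergence_map(re_range, im_range, n=41):
    """Sample the complex lambda-plane on an n x n grid; plotting the
    resulting boolean map reveals the familiar fractal boundary."""
    re0, re1 = re_range
    im0, im1 = im_range
    return [[converges(complex(re0 + (re1 - re0) * i / (n - 1),
                               im0 + (im1 - im0) * j / (n - 1)))
             for i in range(n)] for j in range(n)]
```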
OVERVIEW OF NEUTRON MEASUREMENTS IN JET FUSION DEVICE.
Batistoni, P; Villari, R; Obryk, B; Packer, L W; Stamatelatos, I E; Popovichev, S; Colangeli, A; Colling, B; Fonnesu, N; Loreti, S; Klix, A; Klosowski, M; Malik, K; Naish, J; Pillon, M; Vasilopoulou, T; De Felice, P; Pimpinella, M; Quintieri, L
2017-10-05
The design and operation of the ITER experimental fusion reactor require the development of neutron measurement techniques and numerical tools to derive the fusion power and the radiation field in the device and in the surrounding areas. Nuclear analyses provide essential input to the conceptual design, optimisation, engineering and safety case in ITER and power plant studies. The required radiation transport calculations are extremely challenging because of the large physical extent of the reactor plant, the complexity of the geometry, and the combination of deep penetration and streaming paths. This article reports the experimental activities which are carried out at JET to validate the neutronics measurement methods and numerical tools used in ITER and power plant design. A new deuterium-tritium campaign is proposed in 2019 at JET: the unique 14 MeV neutron yields produced will be exploited as much as possible to validate measurement techniques, codes, procedures and data currently used in ITER design, thus reducing the related uncertainties and the associated risks in machine operation. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Liu, Wanli
2017-03-08
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated.
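As a much simpler stand-in for the ICP/ISPKF pipeline, a constant time delay between two regularly sampled signals can be estimated by maximizing their discrete cross-correlation over candidate lags. The helper below is illustrative only and is not the paper's method.

```python
def estimate_delay(ref, delayed, max_lag):
    """Estimate the lag (in samples) at which `delayed` best matches
    `ref` by maximizing the discrete cross-correlation over candidate
    lags in [-max_lag, max_lag]."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(ref[i] * delayed[i + lag]
                    for i in range(len(ref))
                    if 0 <= i + lag < len(delayed))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```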
Ramachandra, Ranjan; de Jonge, Niels
2012-01-01
Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D datasets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread function (PSF)s, each calculated iteratively via blind deconvolution.. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution was efficient in reducing the imaging noise. With an increasing number of iterations, the axial resolution was increased, and most of the structural information was preserved. Additional iterations improved the axial resolution by maximal a factor of 4 to 6, depending on the particular dataset, and up to 8 nm maximal, but at the cost of a reduction of the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for highest axial resolution is best suited for applications where one is interested in the 3D locations of nanoparticles only. PMID:22152090
Dang, C; Xu, L
2001-03-01
In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.
Iterative Noise Elimination Preliminary Report.
1983-01-01
there is no very good way to remove them. The purpose of the present report is to describe the procedure and to show the results of a series of tests with data from a computed tomography x-ray scan of a defective battery.
The road to JCAHO disease-specific care certification: a step-by-step process log.
Morrison, Kathy
2005-01-01
In 2002, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) implemented Disease-Specific Care (DSC) certification. This is a voluntary program in which organizations have their disease management program evaluated by this regulatory agency. Some of the DSC categories are stroke, heart failure, acute MI, diabetes, and pneumonia. The criteria for any disease management program certification are: compliance with consensus-based national standards, effective use of established clinical practice guidelines to manage and optimize care, and an organized approach to performance measurement and improvement activities. Successful accomplishment of DSC certification defines organizations as Centers of Excellence in management of that particular disease. This article will review general guidelines for DSC certification with an emphasis on Primary Stroke Center certification.
NASA Astrophysics Data System (ADS)
Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond
1986-11-01
A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
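The effect of scaling as a preconditioner can be illustrated with a minimal residual-norm-reducing iteration. This is a simplified stand-in for the truncated generalized-conjugate-gradient solver with block-diagonal scaling described above: here the "blocks" are single diagonal entries, and each step picks the step length that minimizes the residual norm.

```python
def minres_iteration(A, b, precond_diag, tol=1e-10, max_iter=500):
    """Residual-norm-reducing iteration with diagonal (Jacobi-style)
    scaling: z = D^{-1} r is the scaled residual, and alpha minimizes
    ||r - alpha * A z|| at every step."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if sum(v * v for v in r) ** 0.5 < tol:
            break
        z = [r[i] / precond_diag[i] for i in range(n)]   # scaled residual
        Az = [sum(A[i][j] * z[j] for j in range(n)) for i in range(n)]
        denom = sum(v * v for v in Az)
        if denom < 1e-30:
            break
        alpha = sum(r[i] * Az[i] for i in range(n)) / denom
        x = [x[i] + alpha * z[i] for i in range(n)]
    return x
```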
Mellaerts, Randy; Jammaer, Jasper A G; Van Speybroeck, Michiel; Chen, Hong; Van Humbeeck, Jan; Augustijns, Patrick; Van den Mooter, Guy; Martens, Johan A
2008-08-19
The ordered mesoporous silica material SBA-15 was loaded with the model drugs itraconazole and ibuprofen using three different procedures: (i) adsorption from solution, (ii) incipient wetness impregnation, and (iii) heating of a mixture of drug and SBA-15 powder. The location of the drug molecules in the SBA-15 particles and molecular interactions were investigated using nitrogen adsorption, TGA, DSC, DRS UV-vis, and XPS. The in vitro release of hydrophobic model drugs was evaluated in an aqueous environment simulating gastric fluid. The effectiveness of the loading method was found to be strongly compound dependent. Incipient wetness impregnation using a concentrated itraconazole solution in dichloromethane followed by solvent evaporation was most efficient for dispersing itraconazole in SBA-15. The itraconazole molecules were located on the mesopore walls and inside micropores of the mesopore walls. When SBA-15 was loaded by slurrying it in a diluted itraconazole solution from which the solvent was evaporated, the itraconazole molecules ended up in the mesopores that they plugged locally. At a loading of 30 wt %, itraconazole exhibited intermolecular interactions inside the mesopores revealed by UV spectroscopy and endothermic events traced with DSC. The physical mixing of itraconazole and SBA-15 powder followed by heating above the itraconazole melting temperature resulted in formulations in which glassy itraconazole particles were deposited externally on the SBA-15 particles. Loading with ibuprofen was successful with each of the three loading procedures. Ibuprofen preferably is positioned inside the micropores. In vitro release experiments showed fast release kinetics provided the drug molecules were evenly deposited over the mesoporous surface.
On iterative processes in the Krylov-Sonneveld subspaces
NASA Astrophysics Data System (ADS)
Ilin, Valery P.
2016-10-01
The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key idea of the IDR algorithms consists in the construction of embedded Sonneveld subspaces, which have decreasing dimensions and use orthogonalization to some fixed subspace. Other independent approaches for the investigation and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR method in Sonneveld subspaces presents an original interpretation of the modified algorithms in the Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Padovan, J.
1981-01-01
A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
NASA Astrophysics Data System (ADS)
Pham, Dzung L.; Han, Xiao; Rettmann, Maryam E.; Xu, Chenyang; Tosun, Duygu; Resnick, Susan; Prince, Jerry L.
2002-05-01
In previous work, the authors presented a multi-stage procedure for the semi-automatic reconstruction of the cerebral cortex from magnetic resonance images. This method suffered from several disadvantages. First, the tissue classification algorithm used can be sensitive to noise within the image. Second, manual interaction was required for masking out undesired regions of the brain image, such as the ventricles and putamen. Third, iterated median filters were used to perform a topology correction on the initial cortical surface, resulting in an overly smoothed initial surface. Finally, the deformable surface used to converge to the cortex had difficulty capturing narrow gyri. In this work, all four disadvantages of the procedure have been addressed. A more robust tissue classification algorithm is employed and the manual masking step is replaced by an automatic method involving level set deformable models. Instead of iterated median filters, an algorithm developed specifically for topology correction is used. The last disadvantage is addressed using an algorithm that artificially separates adjacent sulcal banks. The new procedure is more automated but also more accurate than the previous one. Its utility is demonstrated by performing a preliminary study on data from the Baltimore Longitudinal Study of Aging.
Rajab, Ghada Z; Suh, Soh Youn; Demer, Joseph L
2017-06-01
Dissociated strabismus complex (DSC) is an enigmatic form of strabismus that includes dissociated vertical deviation (DVD) and dissociated horizontal deviation (DHD). We employed magnetic resonance imaging (MRI) to evaluate the extraocular muscles in DSC. We studied 5 patients with DSC and mean age of 25 years (range, 12-42 years), and 15 age-matched, orthotropic control subjects. All patients had DVD; 4 also had DHD. We employed high-resolution, surface coil MRI with thin, 2 mm slices and central target fixation. Volumes of the rectus and superior oblique muscles in the region 12 mm posterior to 4 mm anterior to the globe-optic nerve junction were measured in quasi-coronal planes in central gaze. Patients with DSC had no structural abnormalities of rectus muscles or rectus pulleys or the superior oblique muscle but exhibited modest, statistically significant increased volume of all rectus muscles ranging from 20% for medial rectus to 9% for lateral rectus (P < 0.05). DSC includes various combinations of sursumduction, excycloduction, and abduction not conforming to Hering's law. We have found modest generalized enlargement of all rectus muscles. DSC is associated with generalized rectus extraocular muscle hypertrophy in the absence of other orbital abnormalities. Copyright © 2017 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Antong; Deeley, Matthew A.; Niermann, Kenneth J.
2010-12-15
Purpose: Intensity-modulated radiation therapy (IMRT) is the state-of-the-art technique for head and neck cancer treatment. It requires precise delineation of the target to be treated and structures to be spared, which is currently done manually. The process is a time-consuming task of which the delineation of lymph node regions is often the longest step. Atlas-based delineation has been proposed as an alternative, but, in the authors' experience, this approach is not accurate enough for routine clinical use. Here, the authors improve atlas-based segmentation results obtained for level II-IV lymph node regions using an active shape model (ASM) approach. Methods: An average image volume was first created from a set of head and neck patient images with minimally enlarged nodes. The average image volume was then registered using affine, global, and local nonrigid transformations to the other volumes to establish a correspondence between surface points in the atlas and surface points in each of the other volumes. Once the correspondence was established, the ASMs were created for each node level. The models were then used to first constrain the results obtained with an atlas-based approach and then to iteratively refine the solution. Results: The method was evaluated through a leave-one-out experiment. The ASM- and atlas-based segmentations were compared to manual delineations via the Dice similarity coefficient (DSC) for volume overlap and the Euclidean distance between manual and automatic 3D surfaces. The mean DSC value obtained with the ASM-based approach is 10.7% higher than with the atlas-based approach; the mean and median surface errors were decreased by 13.6% and 12.0%, respectively. Conclusions: The ASM approach is effective in reducing segmentation errors in areas of low CT contrast where purely atlas-based methods are challenged. Statistical analysis shows that the improvements brought by this approach are significant.
NASA Astrophysics Data System (ADS)
Burger, A.; Morgan, S.; Jiang, H.; Silberman, E.; Schieber, M.; Van Den Berg, L.; Keller, L.; Wagner, C. N. J.
1989-11-01
High-temperature studies of mercuric iodide (HgI2) involving differential scanning calorimetry (DSC), Raman spectroscopy and X-ray powder diffraction have failed to confirm the existence of a red-colored tetragonal high-temperature phase called α'-HgI2 reported by S.N. Toubektsis et al. [J. Appl. Phys. 58 (1988) 2070] using DSC measurements. The multiple DSC peaks near melting reported by Toubektsis et al. are found by the present authors only if the sample is heated in a stainless-steel container. Using a Pyrex container or inserting a platinum foil between the HgI2 and the stainless-steel container yields only one sharp, single DSC peak at the melting point. The nonexistence of the α' phase is confirmed by high-temperature X-ray diffraction and Raman spectroscopy performed in the vicinity of the melting point. These methods clearly indicate the existence of only the yellow orthorhombic β-HgI2 phase. The experimental high-temperature DSC, Raman and X-ray diffraction data are presented and discussed.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on satellites where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT or 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741
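The per-band sparsifying step can be illustrated with a one-level Haar DWT. Note this is a simplification: the CCSDS image data compression recommendation uses a 9/7 wavelet, and Haar is chosen here only for brevity. A bit plane encoder would then scan the resulting coefficients from the most to the least significant bit.

```python
def haar_1d(signal):
    """One level of the orthonormal Haar wavelet transform: pairwise
    averages (approximation) followed by pairwise differences (detail),
    which concentrates energy into few coefficients."""
    s = 2 ** 0.5
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / s for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / s for i in range(half)]
    return approx + detail

def haar_1d_inverse(coeffs):
    """Exact inverse of the one-level Haar transform."""
    n = len(coeffs) // 2
    s = 2 ** 0.5
    out = []
    for a, d in zip(coeffs[:n], coeffs[n:]):
        out.extend([(a + d) / s, (a - d) / s])
    return out
```

Constant regions produce zero detail coefficients, which is exactly the sparsity the bit plane encoder exploits.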
Karlsson, Martin; Jõgi, Indrek; Eriksson, Susanna K; Rensmo, Håkan; Boman, Mats; Boschloo, Gerrit; Hagfeldt, Anders
2013-01-01
This paper describes the synthesis and characterization of core-shell structures, based on SnO2 and TiO2, for use in dye-sensitized solar cells (DSC). Atomic layer deposition is employed to control and vary the thickness of the TiO2 shell. Increasing the TiO2 shell thickness to 2 nm improved the device performance of liquid electrolyte-based DSC from 0.7% to 3.5%. The increase in efficiency originates from a higher open-circuit potential and a higher short-circuit current, as well as from an improvement in the electron lifetime. SnO2-TiO2 core-shell DSC devices retain their photovoltage in darkness for longer than 500 seconds, demonstrating that the electrons are contained in the core material. Finally, core-shell structures were used for solid-state DSC applications using the hole transporting material 2,2',7,7'-tetrakis(N,N-di-p-methoxyphenylamine)-9,9'-spirobifluorene. Similar improvements in device performance were obtained for solid-state DSC devices.
Iterative outlier removal: A method for identifying outliers in laboratory recalibration studies
Parrinello, Christina M.; Grams, Morgan E.; Sang, Yingying; Couper, David; Wruck, Lisa M.; Li, Danni; Eckfeldt, John H.; Selvin, Elizabeth; Coresh, Josef
2016-01-01
Background Extreme values that arise for any reason, including through non-laboratory measurement procedure-related processes (inadequate mixing, evaporation, mislabeling), lead to outliers and inflate errors in recalibration studies. We present an approach termed iterative outlier removal (IOR) for identifying such outliers. Methods We previously identified substantial laboratory drift in uric acid measurements in the Atherosclerosis Risk in Communities (ARIC) Study over time. Serum uric acid was originally measured in 1990–92 on a Coulter DACOS instrument using an uricase-based measurement procedure. To recalibrate previous measured concentrations to a newer enzymatic colorimetric measurement procedure, uric acid was re-measured in 200 participants from stored plasma in 2011–13 on a Beckman Olympus 480 autoanalyzer. To conduct IOR, we excluded data points >3 standard deviations (SDs) from the mean difference. We continued this process using the resulting data until no outliers remained. Results IOR detected more outliers and yielded greater precision in simulation. The original mean difference (SD) in uric acid was 1.25 (0.62) mg/dL. After four iterations, 9 outliers were excluded, and the mean difference (SD) was 1.23 (0.45) mg/dL. Conducting only one round of outlier removal (standard approach) would have excluded 4 outliers (mean difference [SD] = 1.22 [0.51] mg/dL). Applying the recalibration (derived from Deming regression) from each approach to the original measurements, the prevalence of hyperuricemia (>7 mg/dL) was 28.5% before IOR and 8.5% after IOR. Conclusion IOR is a useful method for removal of extreme outliers irrelevant to recalibrating laboratory measurements, and identifies more extraneous outliers than the standard approach. PMID:27197675
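The IOR procedure described above reduces to a simple loop: drop every point more than 3 SDs from the current mean difference, then repeat on the reduced set until no point is excluded. A minimal sketch with hypothetical helper names, not the study's actual analysis code:

```python
import statistics

def iterative_outlier_removal(diffs, n_sd=3.0):
    """Repeatedly remove points more than n_sd standard deviations from
    the mean of the remaining data until no such point remains.
    Returns (cleaned_data, removed_outliers)."""
    data = list(diffs)
    removed = []
    while len(data) >= 2:
        mu = statistics.mean(data)
        sd = statistics.stdev(data)
        keep = [d for d in data if abs(d - mu) <= n_sd * sd]
        if len(keep) == len(data):
            break  # no outliers left: IOR has converged
        removed.extend(d for d in data if abs(d - mu) > n_sd * sd)
        data = keep
    return data, removed
```

Because the SD shrinks after each round of exclusion, later rounds can flag points that survived the first pass, which is why IOR catches more extraneous outliers than a single-pass cut.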
NASA Astrophysics Data System (ADS)
Albert, L.; Rottensteiner, F.; Heipke, C.
2015-08-01
Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.
Li, Chen; Li, Jian-Bin
2017-12-01
A novel drug delivery system based on chitosan derivatives was prepared by introducing ferulic acid onto chitosan using a free radical-induced grafting procedure, with an ascorbic acid/hydrogen peroxide redox pair as the radical initiator. The chitosan derivative was characterized by Fourier transform infrared spectroscopy (FTIR), ultraviolet-visible spectroscopy (UV), differential scanning calorimetry (DSC), X-ray diffraction (XRD) and scanning electron microscopy (SEM). Furthermore, microcapsules were prepared with the chitosan conjugate as the wall material, and the drug release properties of the chitosan conjugate were compared with those of a blank chitosan treated under the same conditions but in the absence of ferulic acid. The study clearly demonstrates that the free radical-induced grafting procedure is an effective reaction method and that chitosan-ferulic acid is a potential functionalized carrier material for drug delivery. Copyright © 2017 Elsevier B.V. All rights reserved.
1980-01-01
[Fragmentary OCR of the report's front matter; recoverable entries: contents — "Transport of Heat"; "3. The Solution Procedure"; "3.1 The Finite-Difference Grid Network"; figure captions — "Figure 4: The Iterative Solution Procedure used at each Streamwise Station"; "Figure 5: Velocity Profiles in the ..."; nomenclature — the finite-difference grid in the y-direction; l is the mixing length; L is the distance in the x-direction from the injection slot entrance to the ...]
2010-09-01
[Fragmentary OCR of the abstract; recoverable fragments: "... matrix is used in many methods, like Jacobi or Gauss-Seidel, for solving linear systems. Also, no partial pivoting is necessary for a strictly column ..."; "... problems that arise during the procedure, which, in general, converges to the solving of a linear system. The most common issue with the solution is the ..."; "... iterative procedure to find an appropriate subset of parameters that produces an optimal solution, commonly known as forward selection. Then, the ..."]
Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations
NASA Technical Reports Server (NTRS)
Fijany, Amir
1993-01-01
In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for solution of the problem can be completely decoupled.
An efficient method for solving the steady Euler equations
NASA Technical Reports Server (NTRS)
Liou, M. S.
1986-01-01
An efficient numerical procedure for solving a set of nonlinear partial differential equations is given, specifically for the steady Euler equations. Solutions of the equations were obtained by Newton's linearization procedure, commonly used to find the roots of nonlinear algebraic equations. In applying the same procedure to a set of differential equations, we give a theorem showing that a quadratic convergence rate can be achieved. While the domain of quadratic convergence depends on the problem studied and is unknown a priori, we show that the first and second derivatives of the flux vectors determine whether the condition for quadratic convergence is satisfied. The first derivatives enter as an implicit operator for yielding new iterates, and the second derivatives indicate the smoothness of the flows considered. Consequently, flows involving shocks are expected to require a larger number of iterations. First-order upwind discretization in conjunction with the Steger-Warming flux-vector splitting is employed in the implicit operator, and a diagonally dominant matrix results. The explicit operator, however, is represented by first- and second-order upwind differencings, using both Steger-Warming's and van Leer's splittings. We discuss the treatment of boundary conditions and solution procedures for the resulting block matrix system. With a set of test problems for one- and two-dimensional flows, we present a detailed study of the efficiency, accuracy, and convergence of the present method.
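As a minimal illustration of the Newton linearization the abstract applies to the discretized Euler equations, here is a 1-D sketch in which the nonlinear equation is repeatedly replaced by its linearization; the function names and the example equation are illustrative, and the short step history reflects the quadratic convergence rate:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's linearization: solve f(x) = 0 by repeatedly solving
    the linearized equation f(x_k) + f'(x_k) * dx = 0 for dx."""
    x, steps = x0, []
    for _ in range(max_iter):
        dx = -f(x) / fprime(x)
        x += dx
        steps.append(abs(dx))
        if abs(dx) < tol:
            break
    return x, steps

# quadratic convergence on f(x) = x^2 - 2, whose root is sqrt(2)
root, steps = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```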
Fuel Burn Estimation Using Real Track Data
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
2011-01-01
A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
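The iterative takeoff-weight estimation can be sketched as a fixed-point loop: guess a takeoff weight, compute the fuel burned along the trajectory for that weight, and update the guess until it stabilizes. The linear toy fuel-burn model below stands in for the BADA-based computation, and all names are illustrative:

```python
def estimate_takeoff_weight(landing_weight, fuel_burn_model, tol=1.0, max_iter=50):
    """Fixed-point iteration: update the takeoff-weight guess as
    landing weight + fuel burned until successive guesses agree."""
    w = landing_weight           # initial guess: as if no fuel were burned
    for _ in range(max_iter):
        fuel = fuel_burn_model(w)
        w_new = landing_weight + fuel
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

# toy model: heavier aircraft burn slightly more fuel (5% of weight + constant)
toy_burn = lambda w: 0.05 * w + 2000.0
w_to = estimate_takeoff_weight(60000.0, toy_burn)
```

Because the toy burn model is a mild contraction (slope 0.05), the loop converges in a handful of iterations to the fixed point 62000/0.95.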
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2001-01-01
An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives be calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
Ghost suppression in image restoration filtering
NASA Technical Reports Server (NTRS)
Riemer, T. E.; Mcgillem, C. D.
1975-01-01
An optimum image restoration filter is described in which provision is made to constrain the spatial extent of the restoration function, the noise level of the filter output and the rate of falloff of the composite system point-spread away from the origin. Experimental results show that sidelobes on the composite system point-spread function produce ghosts in the restored image near discontinuities in intensity level. By redetermining the filter using a penalty function that is zero over the main lobe of the composite point-spread function of the optimum filter and nonzero where the point-spread function departs from a smoothly decaying function in the sidelobe region, a great reduction in sidelobe level is obtained. Almost no loss in resolving power of the composite system results from this procedure. By iteratively carrying out the same procedure even further reductions in sidelobe level are obtained. Examples of original and iterated restoration functions are shown along with their effects on a test image.
A new procedure for calculating contact stresses in gear teeth
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.
1991-01-01
A numerical procedure for evaluating and monitoring contact stresses in meshing gear teeth is discussed. The procedure is intended to extend the range of applicability and to improve the accuracy of gear contact stress analysis. The procedure, an iterative numerical method, is based upon fundamental solutions from the theory of elasticity. It is believed to have distinct advantages over the classical Hertz method, the finite-element method, and existing approaches with the boundary element method. Unlike many classical contact stress analyses, friction effects and sliding are included. Slipping and sticking in the contact region are studied. Several examples are discussed. The results are in agreement with classical results. Applications are presented for spur gears.
An efficient numerical algorithm for transverse impact problems
NASA Technical Reports Server (NTRS)
Sankar, B. V.; Sun, C. T.
1985-01-01
Transverse impact problems in which the elastic and plastic indentation effects are considered, involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedures much simpler and also efficient. The proposed method is applied to some impact problems for which solutions are available, and they are found to be in good agreement. The effect of the magnitude of time increment on the results is also discussed.
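The idea of iterating the nonlinear contact law separately from the (precomputed) structural response can be sketched for a single time step with a Hertzian contact stiffness and a scalar structural compliance; the values and names are illustrative and are not the paper's formulation:

```python
def contact_force(k, delta_app, compliance, tol=1e-10, max_iter=200):
    """Solve the nonlinear Hertzian contact relation F = k * delta^(3/2),
    where the local indentation delta is the imposed approach minus the
    elastic structural deflection compliance * F, by fixed-point iteration."""
    F = 0.0
    for _ in range(max_iter):
        delta = max(delta_app - compliance * F, 0.0)   # no tensile contact
        F_new = k * delta ** 1.5
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

# illustrative stiffness, approach, and compliance values
F = contact_force(k=1.0e9, delta_app=1.0e-4, compliance=1.0e-9)
```

The structural deflection enters only through the compliance term, so the contact iteration never re-solves the structural equations, which is the separation the abstract describes.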
A comparison of multiprocessor scheduling methods for iterative data flow architectures
NASA Technical Reports Server (NTRS)
Storch, Matthew
1993-01-01
A comparative study is made between the Algorithm to Architecture Mapping Model (ATAMM) and three other related multiprocessing models from the published literature. The primary focus of all four models is the non-preemptive scheduling of large-grain iterative data flow graphs as required in real-time systems, control applications, signal processing, and pipelined computations. Important characteristics of the models such as injection control, dynamic assignment, multiple node instantiations, static optimum unfolding, range-chart guided scheduling, and mathematical optimization are identified. The models from the literature are compared with the ATAMM for performance, scheduling methods, memory requirements, and complexity of scheduling and design procedures.
Guillen, Donna Post; Harris, William H.
2016-05-11
A metal matrix composite (MMC) material comprised of hafnium aluminide (Al3Hf) intermetallic particles in an aluminum matrix has been identified as a promising material for fast-flux irradiation testing applications. This material can filter thermal neutrons while simultaneously providing high rates of conductive cooling for experiment capsules. Our purpose is to investigate the effects of Hf-Al material composition and neutron irradiation on thermophysical properties, which were measured before and after irradiation. When performing differential scanning calorimetry (DSC) on the irradiated specimens, a large exotherm corresponding to material annealment was observed. Thus, a test procedure was developed to perform DSC and laser flash analysis (LFA) to obtain the specific heat and thermal diffusivity of pre- and post-annealment specimens. This paper presents the thermal properties for three states of the MMC material: (1) unirradiated, (2) as-irradiated, and (3) irradiated and annealed. Microstructure-property relationships were obtained for the thermal conductivity. These relationships are useful for designing components from this material to operate in irradiation environments. Furthermore, the ability of this material to effectively conduct heat as a function of temperature, volume fraction of Al3Hf, radiation damage and annealing is assessed using the MOOSE suite of computational tools.
Physio-chemical reactions in recycle aggregate concrete.
Tam, Vivian W Y; Gao, X F; Tam, C M; Ng, K M
2009-04-30
Concrete waste constitutes the major proportion of construction waste, at about 50% of the total waste generated. An effective way to reduce concrete waste is to reuse it as recycled aggregate (RA) for the production of recycled aggregate concrete (RAC). This paper studies the physio-chemical reactions of the cement paste around aggregate for normal aggregate concrete (NAC) and for RAC mixed with the normal mixing approach (NMA) and the two-stage mixing approach (TSMA), by differential scanning calorimetry (DSC) and scanning electron microscopy (SEM). Four kinds of physio-chemical reactions have been recorded from the concrete samples, including the dehydration of C(3)S(2)H(3), iron-substituted ettringite, dehydroxylation of CH and development of C(6)S(3)H at about 90 degrees C, 135 degrees C, 441 degrees C and 570 degrees C, respectively. From the DSC results, it is confirmed that the concrete samples with RA substitution generated a smaller amount of strength-enhancing chemical products when compared to those without RA substitution. However, the TSMA is found to improve the RAC quality. The pre-mix procedure of the TSMA can effectively develop strength-enhancing chemical products including C(3)S(2)H(3), ettringite, CH and C(6)S(3)H, which shows that the TSMA can improve the hydration processes in RAC.
Region of interest processing for iterative reconstruction in x-ray computed tomography
NASA Astrophysics Data System (ADS)
Kopp, Felix K.; Nasirudin, Radin A.; Mei, Kai; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Noël, Peter B.
2015-03-01
Recent advancements in graphics card technology have raised the performance of parallel computing and contributed to the introduction of iterative reconstruction methods for x-ray computed tomography in clinical CT scanners. Iterative maximum likelihood (ML) based reconstruction methods are known to reduce image noise and to improve the diagnostic quality of low-dose CT. However, iterative reconstruction of a region of interest (ROI), especially ML based, is challenging, yet for some clinical procedures, like cardiac CT, only a ROI is needed for diagnostics. A high-resolution reconstruction of the full field of view (FOV) consumes unnecessary computational effort and results in a reconstruction slower than clinically acceptable. In this work, we present an extension and evaluation of an existing ROI processing algorithm, with particular improvements to the equalization between regions inside and outside of a ROI. The evaluation was done on data collected from a clinical CT scanner. The performance of the different algorithms is qualitatively and quantitatively assessed. Our solution to the ROI problem provides an increase in signal-to-noise ratio and leads to visually less noise in the final reconstruction. The reconstruction speed of our technique was observed to be comparable with other previously proposed techniques. The development of ROI processing algorithms in combination with iterative reconstruction will provide higher diagnostic quality in the near future.
NASA Astrophysics Data System (ADS)
Handhika, T.; Bustamam, A.; Ernastuti, Kerami, D.
2017-07-01
Multi-thread programming using OpenMP on a shared-memory architecture with hyperthreading technology allows a resource to be accessed by multiple processors simultaneously, and each processor can execute more than one thread over a given period of time. However, the achievable speedup is limited by the number of threads a processor can execute, especially for sequential algorithms containing a nested loop in which the number of outer-loop iterations exceeds the maximum number of threads that the processor can execute. Previously reported thread distribution techniques can be applied only by high-level programmers. This paper derives a parallelization procedure for low-level programmers dealing with 2-level nested loop problems in which the maximum number of threads that a processor can execute is smaller than the number of outer-loop iterations. Data preprocessing related to the numbers of outer- and inner-loop iterations, the computational time required to execute each iteration, and the maximum number of threads that a processor can execute is used as a strategy to determine which parallel region will produce optimal speedup.
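Since the abstract targets OpenMP, a C sketch would be closest; for consistency with the other examples, here is a Python analog of the same distribution strategy, chunking the outer loop over a fixed pool of workers. All names are illustrative, and note that CPython threads demonstrate the distribution of iterations, not the speedup itself:

```python
from concurrent.futures import ThreadPoolExecutor

def run_nested(outer_n, inner_n, work, max_threads=4):
    """Distribute outer-loop iterations over a fixed pool of threads.
    Each worker handles a contiguous chunk of outer iterations and runs
    the inner loop sequentially, mirroring an outer-loop-only split."""
    results = [None] * outer_n
    def chunk(start, stop):
        for i in range(start, stop):
            results[i] = sum(work(i, j) for j in range(inner_n))
    size = (outer_n + max_threads - 1) // max_threads   # ceiling division
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        for s in range(0, outer_n, size):
            pool.submit(chunk, s, min(s + size, outer_n))
    return results   # the with-block waits for all chunks to finish

# 10 outer iterations on 4 workers: more iterations than threads
res = run_nested(10, 5, lambda i, j: i * j, max_threads=4)
```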
Liu, Wanli
2017-01-01
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for its applications. However, the correspondences between LiDAR and IMU measurements are usually unknown, and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on iterative closest point (ICP) and iterated sigma point Kalman filter (ISPKF), which combines the advantages of ICP and ISPKF. The ICP algorithm can precisely determine the unknown transformation between LiDAR-IMU; and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First of all, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and time delay error model of LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate the time delay error can be accurately calibrated. PMID:28282897
Sewell, Holly L; Kaster, Anne-Kristin; Spormann, Alfred M
2017-12-19
The deep marine subsurface is one of the largest unexplored biospheres on Earth and is widely inhabited by members of the phylum Chloroflexi. In this report, we investigated genomes of single cells obtained from deep-sea sediments of the Peruvian Margin, which are enriched in such Chloroflexi. 16S rRNA gene sequence analysis placed two of these single-cell-derived genomes (DscP3 and Dsc4) in a clade of subphylum I Chloroflexi which were previously recovered from deep-sea sediment in the Okinawa Trough, and a third (DscP2-2) as a member of the previously reported DscP2 population from Peruvian Margin site 1230. The presence of genes encoding enzymes of a complete Wood-Ljungdahl pathway, glycolysis/gluconeogenesis, a Rhodobacter nitrogen fixation (Rnf) complex, glycosyltransferases, and formate dehydrogenases in the single-cell genomes of DscP3 and Dsc4, and the presence of an NADH-dependent reduced ferredoxin:NADP oxidoreductase (Nfn) and Rnf in the genome of DscP2-2, imply a homoacetogenic lifestyle of these abundant marine Chloroflexi. We also report here the first complete pathway for anaerobic benzoate oxidation to acetyl coenzyme A (CoA) in the phylum Chloroflexi (DscP3 and Dsc4), including a class I benzoyl-CoA reductase. Of remarkable evolutionary significance, we discovered a gene encoding a formate dehydrogenase (FdnI) with reciprocal closest identity to the formate dehydrogenase-like protein (complex iron-sulfur molybdoenzyme [CISM], DET0187) of terrestrial Dehalococcoides/Dehalogenimonas spp. This formate dehydrogenase-like protein has been shown to lack formate dehydrogenase activity in Dehalococcoides/Dehalogenimonas spp. and is instead hypothesized to couple HupL hydrogenase to a reductive dehalogenase in the catabolic reductive dehalogenation pathway. This finding of a close functional homologue provides an important missing link for understanding the origin and the metabolic core of terrestrial Dehalococcoides/Dehalogenimonas spp. 
and of reductive dehalogenation, as well as the biology of abundant deep-sea Chloroflexi. IMPORTANCE The deep marine subsurface is one of the largest unexplored biospheres on Earth and is widely inhabited by members of the phylum Chloroflexi. In this report, we investigated genomes of single cells obtained from deep-sea sediments and provide evidence for a homoacetogenic lifestyle of these abundant marine Chloroflexi. Moreover, genome signatures and key metabolic genes indicate an evolutionary relationship between these deep-sea sediment microbes and terrestrial, reductively dehalogenating Dehalococcoides. Copyright © 2017 Sewell et al.
A calorimetric study of precipitation in aluminum alloy 2219
NASA Astrophysics Data System (ADS)
Papazian, John M.
1981-02-01
Precipitate microstructures in aluminum alloy 2219 were characterized using transmission electron microscopy (TEM) and differential scanning calorimetry (DSC). The DSC signatures of individual precipitate phases were established by comparing the DSC and TEM results from samples that had been aged such that only one precipitate phase was present. These signatures were then used to analyze the commercial tempers. It was found that DSC could readily distinguish between the T3, T4, T6, T8 and O tempers but could not distinguish amongst T81, T851 and T87. Small amounts of plastic deformation between solution treatment and aging had a significant effect on the thermograms. Aging experiments at 130 and 190 °C showed that the aging sequence and DSC response of this alloy were similar to those of pure Al-Cu when the increased copper content is taken into account. Further aging experiments at temperatures between room temperature and 130 °C showed pronounced changes of the GP zone dissolution peak as a function of aging conditions. These changes were found to be related to the effect of GP zone size on the metastable phase boundary and on the GP zone dissolution kinetics.
Extending radiative transfer models by use of Bayes rule. [in atmospheric science
NASA Technical Reports Server (NTRS)
Whitney, C.
1977-01-01
This paper presents a procedure that extends some existing radiative transfer modeling techniques to problems in atmospheric science where curvature and layering of the medium and dynamic range and angular resolution of the signal are important. Example problems include twilight and limb scan simulations. Techniques that are extended include successive orders of scattering, matrix operator, doubling, Gauss-Seidel iteration, discrete ordinates and spherical harmonics. The procedure for extending them is based on Bayes' rule from probability theory.
Comparative kinetic analysis on thermal degradation of some cephalosporins using TG and DSC data
2013-01-01
Background The thermal decomposition of cephalexine, cefadroxil and cefoperazone under non-isothermal conditions was studied using the TG and DSC methods. In the case of TG, a hyphenated technique including EGA was used. Results The kinetic analysis was performed using the TG and DSC data in air for the first step of the cephalosporins' decomposition at four heating rates. Both the TG and DSC data were processed, according to an appropriate strategy, with the following kinetic methods: Kissinger-Akahira-Sunose, Friedman, and NPK, in order to obtain realistic kinetic parameters even though the decomposition process is a complex one. The EGA data offer valuable indications about a possible decomposition mechanism. The obtained data indicate a rather good agreement between the activation energy values obtained by the different methods, whereas the EGA data and the chemical structures give a possible explanation of the observed differences in thermal stability. A complete kinetic analysis needs a data processing strategy using two or more methods, and the kinetic methods must also be applied to the different types of experimental data (TG and DSC). Conclusion The simultaneous use of DSC and TG data for the kinetic analysis, coupled with evolved gas analysis (EGA), provided a more complete picture of the degradation of the three cephalosporins. It was possible to estimate kinetic parameters using three different kinetic methods, which allowed us to compare the Ea values obtained from the different experimental data, TG and DSC. The thermal degradation being a complex process, both the differential and the integral methods based on the single-step hypothesis are inadequate for obtaining reliable kinetic parameters. Only the modified NPK method allowed an objective separation of the influences of temperature and conversion on the reaction rate and, at the same time, ascertained the existence of two simultaneous steps. PMID:23594763
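A sketch of the Kissinger-Akahira-Sunose method named above: at a fixed conversion, ln(β/T²) plotted against 1/T is a straight line of slope -Ea/R, so a least-squares fit over several heating rates yields the activation energy. The synthetic data below are generated from the KAS line itself as a round-trip check; they are not the cephalosporin measurements:

```python
import math

def kas_activation_energy(heating_rates, temps, R=8.314):
    """Kissinger-Akahira-Sunose: fit ln(beta/T^2) vs 1/T by least
    squares; the slope is -Ea/R, so return Ea in J/mol."""
    x = [1.0 / T for T in temps]
    y = [math.log(b / T ** 2) for b, T in zip(heating_rates, temps)]
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
             / sum((xi - xm) ** 2 for xi in x))
    return -slope * R

# round-trip check: synthesize heating rates from the KAS line itself
Ea_true, R, C = 120e3, 8.314, 20.0          # Ea in J/mol, C an arbitrary intercept
temps = [480.0, 500.0, 520.0, 540.0]        # characteristic temperatures, K
betas = [T ** 2 * math.exp(C - Ea_true / (R * T)) for T in temps]
Ea_est = kas_activation_energy(betas, temps)
```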
Analysis of Complex Intervention Effects in Time-Series Experiments.
ERIC Educational Resources Information Center
Bower, Cathleen
An iterative least squares procedure for analyzing the effect of various kinds of intervention in time-series data is described. There are numerous applications of this design in economics, education, and psychology, although until recently, no appropriate analysis techniques had been developed to deal with the model adequately. This paper…
Designing Instructor-Led Schools with Rapid Prototyping.
ERIC Educational Resources Information Center
Lange, Steven R.; And Others
1996-01-01
Rapid prototyping involves abandoning many of the linear steps of traditional prototyping; it is instead a series of design iterations representing each major stage. This article describes the development of an instructor-led course for midlevel auditors using the principles and procedures of rapid prototyping, focusing on the savings in time and…
A study of the homogeneity and deviations from stoichiometry in mercuric iodide
NASA Astrophysics Data System (ADS)
Burger, A.; Morgan, S.; He, C.; Silberman, E.; van den Berg, L.; Ortale, C.; Franks, L.; Schieber, M.
1990-01-01
We have been able to determine the deviations from stoichiometry of mercuric iodide (HgI2) by using differential scanning calorimetry (DSC). Mercury excess or iodine deficiency in mercuric iodide can be evaluated from the eutectic melting of α-HgI2-Hg2I2 at 235 °C, which appears as an additional peak in DSC thermograms. I2 excess can be found from the existence of the I2-α-HgI2 eutectic melting at 103 °C. An additional DSC peak appears in some samples around 112 °C, that could be explained by the presence of iodine inclusions. Using resonance fluorescence spectroscopy (RFS) we have been able to determine the presence of free I2 that is released by samples during heating at 120 °C (the crystal growth temperature), thus giving additional support to the above DSC results.
NASA Astrophysics Data System (ADS)
Gan, Lei; Zhang, Chunxia; Shangguan, Fangqin; Li, Xiuping
2012-06-01
The continuous cooling crystallization of a blast furnace slag was studied by application of the differential scanning calorimetry (DSC) method. A kinetic model describing the evolution of the degree of crystallization with time was obtained. Bulk cooling experiments on the molten slag, coupled with numerical simulation of heat transfer, were conducted to validate the results of the DSC method. The degrees of crystallization of the samples from the bulk cooling experiments were estimated by means of X-ray diffraction (XRD) and the DSC method. It was found that the results from the DSC cooling and bulk cooling experiments are in good agreement. The continuous cooling transformation (CCT) diagram of the blast furnace slag was constructed from the crystallization kinetic model and the experimental data. The obtained CCT diagram is characterized by two crystallization noses in different temperature ranges.
Pumpe, Daniel; Greiner, Maksim; Müller, Ewald; Enßlin, Torsten A
2016-07-01
Stochastic differential equations describe well many physical, biological, and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ time lines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.
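A sketch of the kind of process the DSC abstracts training data into: a stochastically driven damped oscillator with time-dependent ω(t) and γ(t), integrated here with a simple Euler-Maruyama/semi-implicit scheme. The scheme choice and all parameters are illustrative:

```python
import math, random

def simulate_oscillator(omega, gamma, sigma=0.1, dt=1e-3, steps=5000, seed=1):
    """Integrate x'' + 2*gamma(t)*x' + omega(t)^2 * x = sigma*xi(t),
    with xi(t) white noise, by an Euler-Maruyama step on the velocity
    followed by a position update (semi-implicit, hence stable)."""
    rng = random.Random(seed)
    x, v = 1.0, 0.0
    xs = []
    for n in range(steps):
        t = n * dt
        a = -2.0 * gamma(t) * v - omega(t) ** 2 * x
        v += a * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += v * dt
        xs.append(x)
    return xs

# chirped, lightly damped example trajectory
xs = simulate_oscillator(lambda t: 2 * math.pi * (1 + 0.1 * t), lambda t: 0.2)
```

A classifier in the spirit of the abstract would infer the ω and γ time lines from such trajectories rather than simulate them.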
Resizing procedure for structures under combined mechanical and thermal loading
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Narayanaswami, R.
1976-01-01
The fully-stressed design (FSD) approach appears to be the most widely used for sizing flight structures under strength and minimum-gage constraints. Almost all of the experience with FSD has been with structures primarily under mechanical loading as opposed to thermal loading. In this method the structural sizes are iterated, with the step size depending on the ratio of the total stress to the allowable stress. In this paper, the thermal fully-stressed design (TFSD) procedure developed for problems involving substantial thermal stress is extended to biaxial stress members using a von Mises failure criterion. The TFSD resizing procedure for uniaxial stress is restated and the new procedure for biaxial stress members is developed. Results are presented for an application of the two procedures to size a simplified wing structure.
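The FSD resizing rule (scale each member size by the ratio of its stress to the allowable stress, subject to minimum gage) can be sketched as below. The statically determinate two-bar example is illustrative; for such problems the stress ratio drives each area to its fully stressed value in a single resize:

```python
def fully_stressed_design(areas, stress_for, allowable, min_gage=0.1,
                          tol=1e-6, max_iter=100):
    """FSD iteration: scale each member area by (stress / allowable),
    enforce minimum gage, and repeat until the sizes stop changing."""
    A = list(areas)
    for _ in range(max_iter):
        stresses = stress_for(A)
        A_new = [max(a * s / allowable, min_gage) for a, s in zip(A, stresses)]
        if max(abs(n - o) for n, o in zip(A_new, A)) < tol:
            return A_new
        A = A_new
    return A

# toy two-bar truss: statically determinate, so stress_i = load_i / area_i
loads = [1000.0, 250.0]
stress = lambda A: [P / a for P, a in zip(loads, A)]
A_opt = fully_stressed_design([1.0, 1.0], stress, allowable=500.0)
```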
Best, Rebecca R; Harris, Benjamin H L; Walsh, Jason L; Manfield, Timothy
2017-05-08
Drowning is one of the leading causes of death in children. Resuscitating a child following submersion is a high-pressure situation, and standard operating procedures can reduce error. Currently, the Resuscitation Council UK guidance does not include a standard operating procedure on pediatric drowning. The objective of this project was to design a standard operating procedure to improve outcomes of drowned children. A literature review on the management of pediatric drowning was conducted. Relevant publications were used to develop a standard operating procedure for management of pediatric drowning. A concise standard operating procedure was developed for resuscitation following pediatric submersion. Specific recommendations include the following: the Heimlich maneuver should not be used in this context; however, prolonged resuscitation and therapeutic hypothermia are recommended. This standard operating procedure is a potentially useful adjunct to the Resuscitation Council UK guidance and should be considered for incorporation into its next iteration.
Accurate multi-robot targeting for keyhole neurosurgery based on external sensor monitoring.
Comparetti, Mirko Daniele; Vaccarella, Alberto; Dyagilev, Ilya; Shoham, Moshe; Ferrigno, Giancarlo; De Momi, Elena
2012-05-01
Robotics has recently been introduced in surgery to improve intervention accuracy, to reduce invasiveness and to allow new surgical procedures. In this framework, the ROBOCAST system is an optically surveyed multi-robot chain aimed at enhancing the accuracy of surgical probe insertion during keyhole neurosurgery procedures. The system encompasses three robots, connected as a multiple kinematic chain (serial and parallel), totalling 13 degrees of freedom, and it is used to automatically align the probe onto a desired planned trajectory. The probe is then inserted in the brain, towards the planned target, by means of a haptic interface. This paper presents a new iterative targeting approach to be used in surgical robotic navigation, where the multi-robot chain is used to align the surgical probe to the planned pose, and an external sensor is used to decrease the alignment errors. The iterative targeting was tested in an operating room environment using a skull phantom, and the targets were selected on magnetic resonance images. The proposed targeting procedure allows about 0.3 mm to be obtained as the residual median Euclidean distance between the planned and the desired targets, thus satisfying the surgical accuracy requirements (1 mm), due to the resolution of the diffused medical images. The performances proved to be independent of the robot optical sensor calibration accuracy.
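The iterative targeting loop (measure the residual with the external sensor, command a correction, repeat until within tolerance) can be sketched as below; the tolerance value echoes the reported ~0.3 mm residual, while the 80%-effective toy robot and all names are purely illustrative:

```python
def iterative_targeting(move, measure, target, tol=0.3, max_iter=20):
    """Iteratively align a probe: measure the residual error with an
    external sensor, command the error as a correction, and repeat
    until the residual falls below tolerance."""
    pose = measure()
    for _ in range(max_iter):
        error = [t - p for t, p in zip(target, pose)]
        if max(abs(e) for e in error) < tol:
            break
        move(error)              # commanded correction, imperfectly executed
        pose = measure()         # external sensor reading after the move
    return pose

# toy robot that executes only 80% of each commanded correction
state = [10.0, -5.0, 3.0]
def move(corr):
    for i, c in enumerate(corr):
        state[i] += 0.8 * c
pose = iterative_targeting(move, lambda: list(state), target=[0.0, 0.0, 0.0])
```

Even with an imperfect actuator, the residual shrinks geometrically because each cycle is closed through the external measurement rather than the robot's own kinematics.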
Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C
2003-01-01
The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, such as 3-D modeling of cultural heritage, metric accuracy is a major issue, and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close-range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of a few key range maps, each including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are then aligned around these locked images with the usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment methods, are finally reported.
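The key step of locking a range map to the photogrammetric reference points is a rigid registration from matched target coordinates, for which the standard SVD-based (Kabsch) solution applies. A minimal sketch with synthetic targets; the point set and rigid motion are made up:

```python
import numpy as np

def rigid_align(local_pts, global_pts):
    """Least-squares rotation R and translation t with R @ local + t ≈ global (Kabsch)."""
    cl, cg = local_pts.mean(0), global_pts.mean(0)
    H = (local_pts - cl).T @ (global_pts - cg)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cg - R @ cl
    return R, t

# synthetic check: a known motion applied to a few optical targets
rng = np.random.default_rng(1)
local = rng.uniform(-1, 1, size=(6, 3))              # targets in the range-map frame
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
global_pts = local @ R_true.T + t_true               # same targets in the global frame

R, t = rigid_align(local, global_pts)
align_err = np.abs(local @ R.T + t - global_pts).max()
```

With noise-free correspondences the recovered motion reproduces the global coordinates to machine precision; with real photogrammetric targets the same estimator gives the least-squares rigid fit.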
Efficient Coupling of Fluid-Plasma and Monte-Carlo-Neutrals Models for Edge Plasma Transport
NASA Astrophysics Data System (ADS)
Dimits, A. M.; Cohen, B. I.; Friedman, A.; Joseph, I.; Lodestro, L. L.; Rensink, M. E.; Rognlien, T. D.; Sjogreen, B.; Stotler, D. P.; Umansky, M. V.
2017-10-01
UEDGE has been valuable for modeling transport in the tokamak edge and scrape-off layer due in part to its efficient fully implicit solution of coupled fluid neutrals and plasma models. We are developing an implicit coupling of the kinetic Monte-Carlo (MC) code DEGAS-2, as the neutrals model component, to the UEDGE plasma component, based on an extension of the Jacobian-free Newton-Krylov (JFNK) method to MC residuals. The coupling components build on the methods and coding already present in UEDGE. For the linear Krylov iterations, a procedure has been developed to ``extract'' a good preconditioner from that of UEDGE. This preconditioner may also be used to greatly accelerate the convergence rate of a relaxed fixed-point iteration, which may provide a useful ``intermediate'' algorithm. The JFNK method also requires calculation of Jacobian-vector products, for which any finite-difference procedure is inaccurate when a MC component is present. A semi-analytical procedure that retains the standard MC accuracy and fully kinetic neutrals physics is therefore being developed. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and LDRD project 15-ERD-059, by PPPL under Contract DE-AC02-09CH11466, and supported in part by the U.S. DOE, OFES.
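The finite-difference Jacobian-vector product at the heart of JFNK, which the abstract notes becomes inaccurate when a Monte Carlo component is present, can be illustrated on a smooth toy residual. The residual function here is an invented stand-in for the coupled plasma-neutrals equations:

```python
import numpy as np

def residual(u):
    """Toy nonlinear residual F(u) = 0 (stand-in for the coupled fluid equations)."""
    return np.array([u[0]**2 + u[1] - 3.0,
                     u[0] + u[1]**2 - 5.0])

def jv(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product J(u) @ v via a forward difference.

    Only residual evaluations are needed, never the Jacobian itself; this is
    what lets Krylov iterations run on black-box residuals. It fails when F is
    noisy (e.g. Monte Carlo), motivating the semi-analytic product in the text.
    """
    return (F(u + eps * v) - F(u)) / eps

u = np.array([1.0, 2.0])
v = np.array([0.3, -0.4])

# analytic Jacobian of the toy residual, for comparison
J_exact = np.array([[2.0 * u[0], 1.0],
                    [1.0, 2.0 * u[1]]])
jv_err = np.abs(jv(residual, u, v) - J_exact @ v).max()
```

For smooth residuals the forward-difference product matches the analytic one to roughly eps times the curvature; a Monte Carlo residual with statistical noise much larger than eps destroys this, which is why the abstract's semi-analytical procedure is needed.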
Religiousness, Spirituality, and Salivary Cortisol in Breast Cancer Survivorship: A Pilot Study.
Hulett, Jennifer M; Armer, Jane M; Leary, Emily; Stewart, Bob R; McDaniel, Roxanne; Smith, Kandis; Millspaugh, Rami; Millspaugh, Joshua
Psychoneuroimmunological theory suggests a physiological relationship exists between stress, psychosocial-behavioral factors, and neuroendocrine-immune outcomes; however, evidence has been limited. The primary aim of this pilot study was to determine feasibility and acceptability of a salivary cortisol self-collection protocol with a mail-back option for breast cancer survivors. A secondary aim was to examine relationships between religiousness/spirituality (R/S), perceptions of health, and diurnal salivary cortisol (DSC) as a proxy measure for neuroendocrine activity. This was an observational, cross-sectional study. Participants completed measures of R/S, perceptions of health, demographics, and DSC. The sample was composed of female breast cancer survivors (n = 41). Self-collection of DSC using a mail-back option was feasible; validity of mailed salivary cortisol biospecimens was established. Positive spiritual beliefs were the only R/S variable associated with the peak cortisol awakening response (rs = 0.34, P = .03). Poorer physical health was inversely associated with positive spiritual experiences and private religious practices. Poorer mental health was inversely associated with spiritual coping and negative spiritual experiences. Feasibility, validity, and acceptability of self-collected DSC biospecimens with an optional mail-back protocol (at moderate temperatures) were demonstrated. Positive spiritual beliefs were associated with neuroendocrine-mediated peak cortisol awakening response activity; however, additional research is recommended. Objective measures of DSC sampling that include enough collection time points to assess DSC parameters would increase the rigor of future DSC measurement. Breast cancer survivors may benefit from nursing care that includes spiritual assessment and therapeutic conversations that support positive spiritual beliefs.
Kuipers works with DSC Hardware in the U.S. Laboratory
2012-01-16
ISS030-E-155917 (16 Jan. 2012) --- European Space Agency astronaut Andre Kuipers, Expedition 30 flight engineer, prepares to place Diffusion Soret Coefficient (DSC) hardware in stowage containers in the Destiny laboratory of the International Space Station.
Liu, Shiyuan; Xu, Shuang; Wu, Xiaofei; Liu, Wei
2012-06-18
This paper proposes an iterative method for in situ lens aberration measurement in lithographic tools based on a quadratic aberration model (QAM) that is a natural extension of the linear model formed by taking into account interactions among individual Zernike coefficients. By introducing a generalized operator named cross triple correlation (CTC), the quadratic model can be calculated very quickly and accurately with the help of fast Fourier transform (FFT). The Zernike coefficients up to the 37th order or even higher are determined by solving an inverse problem through an iterative procedure from several through-focus aerial images of a specially designed mask pattern. The simulation work has validated the theoretical derivation and confirms that such a method is simple to implement and yields a superior quality of wavefront estimate, particularly for the case when the aberrations are relatively large. It is fully expected that this method will provide a useful practical means for the in-line monitoring of the imaging quality of lithographic tools.
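The inverse problem of recovering coefficients from a model that is linear plus quadratic in those coefficients can be solved by Gauss-Newton iteration, as a generic illustration of the kind of iterative retrieval described. The matrices and sizes below are random stand-ins, not the actual QAM operators or CTC computation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5                                  # number of coefficients (toy stand-in for Zernikes)
m = 12                                 # number of image observables
A = rng.normal(size=(m, n))            # linear response
B = rng.normal(size=(m, n, n)) * 0.05  # weak quadratic cross terms
B = 0.5 * (B + B.transpose(0, 2, 1))   # symmetrize in the coefficient indices

def model(c):
    """Quadratic model: linear term plus pairwise interactions of coefficients."""
    return A @ c + np.einsum('mij,i,j->m', B, c, c)

def jacobian(c):
    return A + 2.0 * np.einsum('mij,j->mi', B, c)

c_true = rng.normal(size=n) * 0.2
data = model(c_true)                   # noise-free synthetic observables

c = np.zeros(n)                        # initial guess: pure linear-model solution territory
for _ in range(20):
    dc, *_ = np.linalg.lstsq(jacobian(c), data - model(c), rcond=None)
    c = c + dc                         # Gauss-Newton update

recovery_err = np.abs(c - c_true).max()
```

Because the quadratic interactions are a perturbation on the linear response, the iteration contracts quickly; this mirrors why the quadratic model pays off mainly when aberrations are large enough that the linear model alone is biased.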
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lines, L.; Burton, A.; Lu, H.X.
Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantages and disadvantages. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
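In a vertical-ray, layered toy model, the well-tie optimization reduces to linear least squares: the image depth of each horizon is the sum of interval velocity times half the interval two-way time, so the velocities minimizing the squared depth mismatch at the wells follow directly. All numbers below are hypothetical, not from the paper:

```python
import numpy as np

v_true = np.array([2000.0, 2500.0, 3000.0])   # interval velocities (m/s), hypothetical
# interval two-way times (s) per layer, picked on the stack at two well locations
dt = np.array([[0.40, 0.30, 0.25],
               [0.38, 0.32, 0.27]])

# depth of horizon k at well w: z = sum_{i<=k} v_i * dt[w, i] / 2
rows, z_well = [], []
for w in range(dt.shape[0]):
    for k in range(dt.shape[1]):
        row = np.zeros(3)
        row[:k + 1] = dt[w, :k + 1] / 2.0
        rows.append(row)
        z_well.append(row @ v_true)            # "formation depths" logged in the well
G = np.array(rows)
z_well = np.array(z_well)

# least-squares velocity model minimizing seismic-depth vs well-depth mismatch
v_est, *_ = np.linalg.lstsq(G, z_well, rcond=None)
vel_err = np.abs(v_est - v_true).max()
```

With exact ties the inversion recovers the interval velocities; with noisy picks the same least-squares system gives the best-fitting model, which is the essence of the optimized poststack procedure.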
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel mixed iterative solution technique for efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for quasi-static, transient dynamic, and buckling analyses. The global-local analysis procedure referred to as subelement refinement is developed in the framework of the mixed iterative solution and is presented in detail. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of the strain and to project element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Overman, Andrea L.
1988-01-01
Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve as high computation rates as the vectorized direct solvers but are best for well conditioned problems which require fewer iterations to converge to the solution.
Nonlinear Network Description for Many-Body Quantum Systems in Continuous Space
NASA Astrophysics Data System (ADS)
Ruggeri, Michele; Moroni, Saverio; Holzmann, Markus
2018-05-01
We show that the recently introduced iterative backflow wave function can be interpreted as a general neural network in continuum space with nonlinear functions in the hidden units. Using this wave function in variational Monte Carlo simulations of liquid 4He in two and three dimensions, we typically find a tenfold increase in accuracy over currently used wave functions. Furthermore, subsequent stages of the iteration procedure define a set of increasingly good wave functions, each with its own variational energy and variance of the local energy: extrapolation to zero variance gives energies in close agreement with the exact values. For two dimensional 4He, we also show that the iterative backflow wave function can describe both the liquid and the solid phase with the same functional form—a feature shared with the shadow wave function, but now joined by much higher accuracy. We also achieve significant progress for liquid 3He in three dimensions, improving previous variational and fixed-node energies.
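The zero-variance extrapolation mentioned above rests on the fact that, for a family of increasingly good trial wave functions, the variational energy is approximately linear in the variance of the local energy, and the exact energy sits at zero variance. A minimal sketch with synthetic numbers (the pairs below are invented to be exactly linear, with a made-up exact energy of -7.0):

```python
import numpy as np

# (variance, energy) pairs from successive iterated wave functions (synthetic data)
sigma2 = np.array([0.40, 0.20, 0.09, 0.04])   # variance of the local energy
E = -7.000 + 0.5 * sigma2                     # energies, linear in the variance by construction

slope, intercept = np.polyfit(sigma2, E, 1)   # straight-line fit E(sigma2)
E_extrapolated = intercept                    # estimate of the exact energy at zero variance
```

In practice each iterated-backflow stage supplies one (variance, energy) point with statistical error bars, and the fit is weighted accordingly; the sketch shows only the extrapolation logic.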
Cosmic Microwave Background Mapmaking with a Messenger Field
NASA Astrophysics Data System (ADS)
Huffenberger, Kevin M.; Næss, Sigurd K.
2018-01-01
We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ². In the single pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.
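The core messenger-field iteration (in the Wiener-filter form of Elsner & Wandelt, on which this mapmaking variant builds) splits the noise covariance as N = τI + N̄ with τ = min_i N_i, and alternates between the pixel basis, where N̄ is diagonal, and the Fourier basis, where the signal covariance is diagonal; the messenger field t carries information between the two. A minimal 1D sketch without cooling, with an invented spectrum and noise pattern:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
k = np.fft.fftfreq(n) * n
S = 100.0 / (1.0 + np.abs(k)) ** 2     # signal prior power per Fourier mode (toy spectrum)
N = np.full(n, 4.0)                    # pixel noise variance
N[:8] = 16.0                           # a noisier patch, so N is not proportional to I

d = rng.normal(size=n)                 # any data vector suffices to exercise the solver

tau = N.min()                          # homogeneous split: N = tau*I + Nbar
Nbar = N - tau

s = np.zeros(n)
for _ in range(300):
    t = (tau * d + Nbar * s) / N       # pixel basis: messenger update (Nbar diagonal here)
    s = np.fft.ifft(S / (S + tau) * np.fft.fft(t, norm="ortho"),
                    norm="ortho").real # Fourier basis: signal update (S diagonal here)
```

Each half-step only ever inverts diagonal operators, which is what makes the method preconditioner-free; the fixed point is the exact Wiener solution of (S⁻¹ + N⁻¹)s = N⁻¹d, and a cooling schedule on τ (omitted here) accelerates the approach to it.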
An efficient algorithm using matrix methods to solve wind tunnel force-balance equations
NASA Technical Reports Server (NTRS)
Smith, D. L.
1972-01-01
An iterative procedure applying matrix methods to accomplish an efficient algorithm for automatic computer reduction of wind-tunnel force-balance data has been developed. Balance equations are expressed in a matrix form that is convenient for storing balance sensitivities and interaction coefficient values for online or offline batch data reduction. The convergence of the iterative values to a unique solution of this system of equations is investigated, and it is shown that for balances which satisfy the criteria discussed, this type of solution does occur. Methods for making sensitivity adjustments and initial load effect considerations in wind-tunnel applications are also discussed, and the logic for determining the convergence accuracy limits for the iterative solution is given. This more efficient data reduction program is compared with the technique presently in use at the NASA Langley Research Center, and computational times on the order of one-third or less are demonstrated by use of this new program.
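The iterative balance solution can be sketched as a fixed-point loop: bridge readings are modeled as primary sensitivities times loads plus interaction terms, and the loads are repeatedly re-estimated with the interaction correction evaluated at the previous iterate. The sensitivities and interaction form below are invented for illustration:

```python
import numpy as np

S = np.array([[2.0, 0.1],
              [0.05, 1.5]])            # primary sensitivity matrix (made-up, mV/unit load)
c = np.array([0.02, -0.015])           # second-order interaction coefficients (made-up)

def interactions(F):
    """Interaction terms g(F): each channel perturbed by the other load squared."""
    return c * F[::-1] ** 2

F_true = np.array([3.0, 4.0])          # applied loads
R = S @ F_true + interactions(F_true)  # simulated balance readings R = S F + g(F)

S_inv = np.linalg.inv(S)
F = S_inv @ R                          # first pass ignores interactions
for _ in range(30):
    F = S_inv @ (R - interactions(F))  # iterate: subtract interactions at current estimate

load_err = np.abs(F - F_true).max()
```

Because the interaction terms are small relative to the primary sensitivities, the map is a contraction and the iterates converge to a unique solution, which is the convergence criterion the abstract discusses for well-behaved balances.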
Modeling Data Containing Outliers using ARIMA Additive Outlier (ARIMA-AO)
NASA Astrophysics Data System (ADS)
Saleh Ahmar, Ansari; Guritno, Suryo; Abdurakhman; Rahman, Abdul; Awi; Alimuddin; Minggi, Ilham; Arif Tiro, M.; Kasim Aidid, M.; Annas, Suwardi; Utami Sutiksno, Dian; Ahmar, Dewi S.; Ahmar, Kurniawan H.; Abqary Ahmar, A.; Zaki, Ahmad; Abdullah, Dahlan; Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Napitupulu, Darmawan; Simarmata, Janner; Kurniasih, Nuning; Andretti Abdillah, Leon; Pranolo, Andri; Haviluddin; Albra, Wahyudin; Arifin, A. Nurani M.
2018-01-01
The aim of this study is to discuss the detection and correction of data containing additive outliers (AO) in the ARIMA(p, d, q) model. Detection and correction use an iterative procedure popularized by Box, Jenkins, and Reinsel (1994). Using this method we obtained ARIMA models that fit data containing AO; the model adds to the original ARIMA model the coefficients obtained from the iteration process using regression methods. For the simulated data, the initial model fitted to the data containing AO was ARIMA(2,0,0) with MSE = 36.780; after detection and correction, the iteration yielded an ARIMA(2,0,0) model with regression coefficients Z(t) = 0.106 + 0.204Z(t-1) + 0.401Z(t-2) - 329X1(t) + 115X2(t) + 35.9X3(t) and MSE = 19.365. This shows an improvement in the forecasting error rate.
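A bare-bones version of additive-outlier detection in an autoregressive model, a simplification of the Box-Jenkins-Reinsel style procedure applied above: fit the AR model, flag observations with large residuals as AO candidates, then replace them with model predictions and refit. The series, coefficients, and flagging threshold here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
phi = np.array([0.2, 0.4])                 # AR(2) coefficients (illustrative)
z = np.zeros(n)
for t in range(2, n):
    z[t] = phi[0] * z[t - 1] + phi[1] * z[t - 2] + rng.normal(0.0, 1.0)
ao_idx, ao_size = 150, 12.0
z[ao_idx] += ao_size                       # inject one additive outlier

# fit AR(2) by least squares: z[t] ~ z[t-1], z[t-2]
X = np.column_stack([z[1:-1], z[:-2]])
y = z[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

sigma = np.median(np.abs(resid)) / 0.6745  # robust scale, insensitive to the outlier
flags = np.where(np.abs(resid) > 4 * sigma)[0] + 2   # time indices of AO candidates

# one correction pass: replace flagged points by their AR predictions, ready for refit
z_corr = z.copy()
for i in flags:
    z_corr[i] = coef[0] * z_corr[i - 1] + coef[1] * z_corr[i - 2]
```

The full procedure iterates detection, correction, and refitting until no new outliers appear, and carries the AO effects as extra regression terms, which is how the corrected model above acquires its X1, X2, X3 coefficients.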
Precision tuning of InAs quantum dot emission wavelength by iterative laser annealing
NASA Astrophysics Data System (ADS)
Dubowski, Jan J.; Stanowski, Radoslaw; Dalacu, Dan; Poole, Philip J.
2018-07-01
Controlling the emission wavelength of quantum dots (QDs) over large surface area wafers is challenging to achieve directly through epitaxial growth methods. We have investigated an innovative post growth laser-based tuning procedure of the emission of self-assembled InAs QDs grown epitaxially on InP (001). A targeted blue shift of the emission is achieved with a series of iterative steps, with photoluminescence diagnostics employed between the steps to monitor the result of intermixing. We demonstrate tuning of the emission wavelength of ensembles of QDs to within approximately ±1 nm, while potentially better precision should be achievable for tuning the emission of individual QDs.
Comments on the variational modified-hypernetted-chain theory for simple fluids
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1986-02-01
The variational modified-hypernetted-chain (VMHNC) theory, based on the approximation of universality of the bridge functions, is reformulated. The new formulation includes recent calculations by Lado and by Lado, Foiles, and Ashcroft as two stages in a systematic approach, which is analyzed. A variational iterative procedure for solving the exact (diagrammatic) equations for the fluid structure, formally identical to the VMHNC, is described, presenting the theory of simple classical fluids as a one-iteration theory. An accurate method for calculating the pair structure for a given potential, and for inverting structure factor data to obtain the potential and the thermodynamic functions, follows from our analysis.
Realization of Comfortable Massage by Using Iterative Learning Control Based on EEG
NASA Astrophysics Data System (ADS)
Teramae, Tatsuya; Kushida, Daisuke; Takemori, Fumiaki; Kitamura, Akira
Massage chairs have recently come into wide use because they can be operated easily at home. However, a present-day massage chair only reproduces the massage motion: it cannot take account of the user's condition or of the massage force. A professional masseur, by contrast, infers the patient's condition from the patient's reactions. This paper therefore proposes a method of applying the masseur's procedure to the massage chair by using iterative learning control based on EEG, with the massage force estimated by an acceleration sensor. The realizability of the proposed method is verified by experiments with the massage chair.
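A P-type iterative learning control update, the kind of trial-to-trial learning such a scheme builds on, can be sketched on a toy plant: after each massage session, the applied input profile is corrected by the tracking error of the previous session. The dynamics, learning gain, and reference profile are invented; the real system would add EEG-based comfort feedback:

```python
import numpy as np

def plant(u):
    """Toy actuator/tissue response y[t] = 0.2*y[t-1] + 0.5*u[t] (made-up dynamics)."""
    y = np.zeros_like(u)
    for t in range(len(u)):
        y[t] = (0.2 * y[t - 1] if t > 0 else 0.0) + 0.5 * u[t]
    return y

y_ref = np.sin(np.linspace(0.0, np.pi, 50))   # desired massage-force profile (arbitrary)
u = np.zeros(50)
L_gain = 1.8                                   # learning gain, chosen so the update contracts

errors = []
for _ in range(25):                            # one pass = one "session"
    e = y_ref - plant(u)
    errors.append(np.abs(e).max())
    u = u + L_gain * e                         # P-type ILC update between sessions

final_err = np.abs(y_ref - plant(u)).max()
```

Each repetition reuses the whole error trajectory of the last one, so the tracking error shrinks geometrically across sessions; in the paper's setting the reference itself would be adapted from the EEG-derived comfort measure.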
Nonlinear system guidance in the presence of transmission zero dynamics
NASA Technical Reports Server (NTRS)
Meyer, G.; Hunt, L. R.; Su, R.
1995-01-01
An iterative procedure is proposed for computing the commanded state trajectories and controls that guide a possibly multiaxis, time-varying, nonlinear system with transmission zero dynamics through a given arbitrary sequence of control points. The procedure is initialized by the system inverse with the transmission zero effects nulled out. Then the 'steady state' solution of the perturbation model with the transmission zero dynamics intact is computed and used to correct the initial zero-free solution. Both time domain and frequency domain methods are presented for computing the steady state solutions of the possibly nonminimum phase transmission zero dynamics. The procedure is illustrated by means of linear and nonlinear examples.
[Target volume segmentation of PET images by an iterative method based on threshold value].
Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L
2014-01-01
An automatic segmentation method is presented for PET images, based on an iterative threshold approximation that accounts for both lesion size and the background present during acquisition. Optimal threshold values representing correct volume segmentation were determined from a PET phantom study containing spheres of different sizes in different known radiation backgrounds. These optimal values were normalized to the background and adjusted by regression techniques to a function of two variables: lesion volume and signal-to-background ratio (SBR). This adjustment function was used to build an iterative segmentation method and, based on it, an automatic delineation procedure was proposed. The procedure was validated on phantom images and its viability was confirmed by applying it retrospectively to two oncology patients. The resulting adjustment function depended linearly on the SBR and inversely, with negative sign, on the volume. During validation of the proposed method, volume deviations with respect to the real value and to the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The automatic segmentation method proposed can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
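An iterative threshold loop of the same flavor can be sketched on a synthetic sphere phantom. This uses the classic midpoint (Ridler-Calvard style) update between foreground and background means rather than the paper's fitted volume-SBR function, which the abstract does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 32
x, y, z = np.meshgrid(*[np.arange(n) - n // 2] * 3, indexing="ij")
sphere = x**2 + y**2 + z**2 <= 6**2            # "lesion" of known voxel volume
img = np.where(sphere, 4.0, 1.0) + rng.normal(0.0, 0.2, (n, n, n))  # SBR = 4, toy noise

# iterate: threshold midway between current foreground and background means
thr = 0.5 * (img.max() + img.min())
for _ in range(50):
    fg, bg = img[img > thr], img[img <= thr]
    new_thr = 0.5 * (fg.mean() + bg.mean())
    if abs(new_thr - thr) < 1e-6:              # converged to a fixed point
        break
    thr = new_thr

seg = img > thr
vol_err = abs(seg.sum() - sphere.sum()) / sphere.sum()
```

The fixed-point structure is the shared idea: each pass re-estimates the threshold from the current segmentation until it stabilizes. The clinical method replaces the midpoint rule with the phantom-calibrated function of lesion volume and SBR.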
Ultra-high speed digital micro-mirror device based ptychographic iterative engine method
Sun, Aihui; He, Xiaoliang; Kong, Yan; Cui, Haoyang; Song, Xiaojun; Xue, Liang; Wang, Shouyu; Liu, Cheng
2017-01-01
To reduce the long data acquisition time of the common mechanical-scanning-based Ptychographic Iterative Engine (PIE) technique, a digital micro-mirror device (DMD) is used to form fast scanning illumination on the sample. Since the transverse mechanical scanning of common PIE is replaced by the on/off switching of the micro-mirrors, the data acquisition time can be reduced from more than 15 minutes to less than 20 seconds for recording 12 × 10 diffraction patterns covering the same field of 147.08 mm2. Furthermore, since the fabrication precision of a DMD made by optical lithography is always better than 10 nm (versus 1 μm for a mechanical translation stage), the time-consuming position-error-correction procedure is not required in the iterative reconstruction. These two improvements fundamentally speed up both the data acquisition and the reconstruction procedures of PIE and relax its requirements on the stability of the imaging system, thereby remarkably improving its applicability in practice. It is demonstrated experimentally with both a USAF resolution target and a biological sample that a spatial resolution of 5.52 μm and a field of view of 147.08 mm2 can be reached with the DMD-based PIE method. In short, by using the DMD to replace the translation stage, we can effectively overcome the main shortcomings of common PIE related to mechanical scanning, while keeping its advantages of both high resolution and large field of view. PMID:28717560
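The iterative engine itself can be sketched in 1D with an ePIE-style object update for a known probe. This is the textbook variant of the algorithm, not the authors' exact implementation; the object, probe, and scan grid are synthetic:

```python
import numpy as np

n_obj, n_win = 128, 32
xo = np.arange(n_obj)
obj_true = (1.0 + 0.3 * np.cos(2 * np.pi * xo / 40)) \
           * np.exp(1j * 0.8 * np.sin(2 * np.pi * xo / 25))   # smooth complex object
xw = np.arange(n_win)
probe = np.exp(-((xw - n_win / 2) ** 2) / (2 * 6.0 ** 2)).astype(complex)
positions = np.arange(0, n_obj - n_win + 1, 4)                # densely overlapping scan

# simulated diffraction magnitudes at each scan position (noise-free)
meas = [np.abs(np.fft.fft(probe * obj_true[p:p + n_win])) for p in positions]

obj = np.ones(n_obj, complex)                                 # flat initial guess
alpha = 1.0
for _ in range(200):
    for p, M in zip(positions, meas):
        psi = probe * obj[p:p + n_win]                        # exit wave at this position
        Psi = np.fft.fft(psi)
        Psi_new = M * Psi / (np.abs(Psi) + 1e-12)             # impose measured modulus
        dpsi = np.fft.ifft(Psi_new) - psi
        obj[p:p + n_win] += alpha * np.conj(probe) / (np.abs(probe) ** 2).max() * dpsi

def misfit(o):
    return sum(np.abs(np.abs(np.fft.fft(probe * o[p:p + n_win])) - M).sum()
               for p, M in zip(positions, meas))

final_rel = misfit(obj) / misfit(np.ones(n_obj, complex))
```

The role of the scan positions is visible in the inner loop: each position stamps its modulus constraint onto an overlapping object patch. In the DMD scheme those positions come from micro-mirror switching rather than stage motion, with position accuracy good enough that no position-error correction is interleaved with this loop.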
NASA Astrophysics Data System (ADS)
Zhang, Fan; Szilágyi, Béla
2013-10-01
At the beginning of binary black hole simulations, there is a pulse of spurious radiation (or junk radiation) resulting from the initial data not matching astrophysical quasi-equilibrium inspiral exactly. One traditionally waits for the junk radiation to exit the computational domain before taking physical readings, at the expense of throwing away a segment of the evolution, and with the hope that the junk radiation exits cleanly. We argue that this hope does not necessarily pan out, as junk radiation can excite long-lived constraint violation. Another complication with the initial data is that they contain orbital eccentricity that needs to be removed, usually by evolving the early part of the inspiral multiple times with gradually improved input parameters. We show that this procedure is also adversely impacted by junk radiation. In this paper, we do not attempt to eliminate junk radiation directly, but instead tackle the much simpler problem of ameliorating its long-lasting effects. We report on the success of a method that achieves this goal by combining the removal of junk radiation and eccentricity into a single procedure. Namely, we periodically stop a low resolution simulation; take the numerically evolved metric data and overlay them with eccentricity adjustments; run them through an initial data solver (i.e. the solver receives as free data the numerical output of the previous iteration); restart the simulation; repeat until the eccentricity becomes sufficiently low; and then launch the high resolution “production run” simulation. This approach has the following benefits: (1) We do not have to contend with the influence of junk radiation on eccentricity measurements for later iterations of the eccentricity reduction procedure. (2) We re-enforce constraints every time the initial data solver is invoked, removing the constraint violation excited by junk radiation previously. 
(3) The wasted simulation segment associated with the junk radiation’s evolution is absorbed into the eccentricity reduction iterations. Furthermore, (1) and (2) together allow us to carry out our joint-elimination procedure at low resolution, even when the subsequent “production run” is intended as a high resolution simulation.
Lin, Hong-Liang; Zhang, Gang-Chun; Huang, Yu-Ting; Lin, Shan-Yang
2014-08-01
The impact of thermal stress on indomethacin (IMC)-nicotinamide (NIC) cocrystal formation with or without neat cogrinding was investigated using differential scanning calorimetry (DSC), Fourier transform infrared (FTIR) microspectroscopy, and simultaneous DSC-FTIR microspectroscopy in the solid or liquid state. Different evaporation methods for preparing IMC-NIC cocrystals were also compared. The results indicated that even after cogrinding for 40 min, the FTIR spectra of all IMC-NIC ground mixtures were superimposable on those of the IMC and NIC components, suggesting there was no cocrystal formation between IMC and NIC after cogrinding. However, these IMC-NIC ground mixtures readily underwent cocrystal formation upon DSC measurement. Under the thermal stress induced by DSC, the amount of cocrystal formed increased with increasing cogrinding time. Moreover, simultaneous DSC-FTIR microspectroscopy was a useful one-step technique to induce and clarify, in real time, the thermally induced stepwise mechanism of IMC-NIC cocrystal formation from the ground mixture. Different solvent evaporation rates induced by thermal stress significantly influenced IMC-NIC cocrystal formation in the liquid state. In particular, microwave heating may promote IMC-NIC cocrystal formation in a short time. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Increasing High School Student Interest in Science: An Action Research Study
NASA Astrophysics Data System (ADS)
Vartuli, Cindy A.
An action research study was conducted to determine how to increase student interest in learning science and pursuing a STEM career. The study began by exploring 10th-grade student and teacher perceptions of student interest in science in order to design an instructional strategy for stimulating student interest in learning and pursuing science. Data for this study included responses from 270 students to an on-line science survey and interviews with 11 students and eight science teachers. The action research intervention included two iterations of the STEM Career Project. The first iteration introduced four chemistry classes to the intervention. The researcher used student reflections and a post-project survey to determine if the intervention had influence on the students' interest in pursuing science. The second iteration was completed by three science teachers who had implemented the intervention with their chemistry classes, using student reflections and post-project surveys, as a way to make further procedural refinements and improvements to the intervention and measures. Findings from the exploratory phase of the study suggested students generally had interest in learning science but increasing that interest required including personally relevant applications and laboratory experiences. The intervention included a student-directed learning module in which students investigated three STEM careers and presented information on one of their chosen careers. The STEM Career Project enabled students to explore career possibilities in order to increase their awareness of STEM careers. Findings from the first iteration of the intervention suggested a positive influence on student interest in learning and pursuing science. The second iteration included modifications to the intervention resulting in support for the findings of the first iteration. Results of the second iteration provided modifications that would allow the project to be used for different academic levels. 
Insights from conducting the action research study provided the researcher with effective ways to make positive changes in her own teaching praxis and the tools used to improve student awareness of STEM career options.
Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W
2016-05-21
The objective of this study was to introduce a new iterative method to reconstruct multileaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. The expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing the differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced compared with the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.
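The reconstruction idea, minimizing the difference between modeled and measured coarse detector response over candidate leaf positions, can be sketched with a toy 1D model. The 7.62 mm chamber pitch is from the abstract; the Gaussian blur kernel and brute-force search are invented stand-ins for the paper's fluence model, detector model, and optimizer:

```python
import numpy as np

det_x = np.arange(-15, 16) * 7.62            # chamber positions (mm) along one leaf pair
fine_x = np.linspace(-120.0, 120.0, 2401)    # 0.1 mm fluence grid
sigma = 4.0                                  # toy detector blur (mm), hypothetical
# response matrix: each chamber integrates the fluence under a Gaussian kernel
K = np.exp(-((fine_x[None, :] - det_x[:, None]) ** 2) / (2 * sigma ** 2)) \
    * (fine_x[1] - fine_x[0])

def detector_response(left, right):
    """Expected coarse response for a leaf pair open between `left` and `right` (mm)."""
    return K @ ((fine_x > left) & (fine_x < right)).astype(float)

left_true, right_true = -48.7, 52.3          # actual leaf positions (mm)
measured = detector_response(left_true, right_true)

# reconstruct the left leaf by minimizing the squared response mismatch
cands = np.arange(-60.0, -40.0, 0.1)
costs = [np.sum((detector_response(L, right_true) - measured) ** 2) for L in cands]
left_rec = cands[int(np.argmin(costs))]
rec_err = abs(left_rec - left_true)
```

Even though the chamber pitch (7.62 mm) is far coarser than the sought accuracy, the blurred edge spreads over several chambers, so the mismatch varies smoothly with the candidate position and the minimizer lands within the 0.1 mm search resolution, which is the mechanism behind the paper's millimeter-level reconstruction from low resolution measurements.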
Crystallization dynamics in glass-forming systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cullinan, Timothy Edward
Crystallization under far-from-equilibrium conditions is investigated for two different scenarios: crystallization of the metallic glass alloy Cu50Zr50 and solidification of a transparent organic compound, o-terphenyl. For Cu50Zr50, crystallization kinetics are quantified through a new procedure that directly fits thermal analysis data to the commonly utilized JMAK model. The phase evolution during crystallization is quantified through in-situ measurements (HEXRD, DSC) and ex-situ microstructural analysis (TEM, HRTEM). The influence of chemical partitioning, diffusion, and crystallographic orientation on this sequence is examined. For o-terphenyl, the relationship between crystal growth velocity and interface undercooling is systematically studied via directional solidification.
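The JMAK model referred to above describes the transformed fraction as X(t) = 1 - exp(-(kt)^n); on an Avrami plot, ln(-ln(1 - X)) is linear in ln t with slope n and intercept n ln k, so the kinetic parameters follow from a straight-line fit. A sketch on synthetic isothermal data (the rate constant and exponent are invented, and the thesis fits DSC data directly rather than via this linearization):

```python
import numpy as np

# synthetic transformed-fraction data following JMAK kinetics
k_true, n_true = 0.05, 2.5
t = np.linspace(5.0, 60.0, 20)
X = 1.0 - np.exp(-(k_true * t) ** n_true)

# Avrami linearization: ln(-ln(1 - X)) = n ln t + n ln k
mask = (X > 1e-4) & (X < 1 - 1e-4)      # keep numerically safe points only
yy = np.log(-np.log(1.0 - X[mask]))
xx = np.log(t[mask])
n_fit, b = np.polyfit(xx, yy, 1)        # slope = Avrami exponent n
k_fit = np.exp(b / n_fit)               # intercept = n ln k
```

The Avrami exponent n distinguishes nucleation-and-growth regimes, which is why recovering it reliably from thermal analysis (DSC) data matters for the partitioning and diffusion arguments made in the abstract.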
Code of Federal Regulations, 2011 CFR
2011-01-01
... Earned Ratio (TIER), Debt Service Coverage (DSC), and other case-specific economic and financial factors; (ii) The variability and uncertainty of future revenues, costs, margins, TIER, DSC, and other case... construction work orders and other records, all moneys disbursed from the separate subaccount during the period...
Apramian, Tavis; Cristancho, Sayra; Watling, Chris; Ott, Michael; Lingard, Lorelei
2016-01-01
Clinical research increasingly acknowledges the existence of significant procedural variation in surgical practice. This study explored surgeons' perspectives regarding the influence of intersurgeon procedural variation on the teaching and learning of surgical residents. This qualitative study used a grounded theory-based analysis of observational and interview data. Observational data were collected in 3 tertiary care teaching hospitals in Ontario, Canada. Semistructured interviews explored potential procedural variations arising during the observations and prompts from an iteratively refined guide. Ongoing data analysis refined the theoretical framework and informed data collection strategies, as prescribed by the iterative nature of grounded theory research. Our sample included 99 hours of observation across 45 cases with 14 surgeons. Semistructured, audio-recorded interviews (n = 14) occurred immediately following observational periods. Surgeons endorsed the use of intersurgeon procedural variations to teach residents about adapting to the complexity of surgical practice and the norms of surgical culture. Surgeons suggested that residents' efforts to identify thresholds of principle and preference are crucial to professional development. Principles that emerged from the study included the following: (1) knowing what comes next, (2) choosing the right plane, (3) handling tissue appropriately, (4) recognizing the abnormal, and (5) making safe progress. Surgeons suggested that learning to follow these principles while maintaining key aspects of surgical culture, like autonomy and individuality, are important social processes in surgical education. Acknowledging intersurgeon variation has important implications for curriculum development and workplace-based assessment in surgical education. Adapting to intersurgeon procedural variations may foster versatility in surgical residents. 
However, the existence of procedural variations and their active use in surgeons' teaching raises questions about the lack of attention to this form of complexity in current workplace-based assessment strategies. Failure to recognize the role of such variations may threaten the implementation of competency-based medical education in surgery. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Time series modeling and forecasting using memetic algorithms for regime-switching models.
Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel
2012-11-01
In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive to models commonly used in the field.
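The memetic idea described above, a population-based global search whose offspring are polished by a local optimizer, can be sketched independently of NCSTAR. The sketch below is a generic illustration on a toy quadratic objective; all names, the coordinate-search refinement, and the parameter values are our assumptions, not the authors' fitting procedure:

```python
import random

def memetic_minimize(f, dim, bounds, pop_size=20, gens=30, seed=0):
    """Minimal memetic-algorithm sketch: a genetic algorithm whose
    offspring are refined by a simple local (coordinate) search."""
    rng = random.Random(seed)
    lo, hi = bounds

    def local_search(x, step=0.1, iters=20):
        # crude coordinate-wise hill climbing with step halving
        x = list(x)
        for _ in range(iters):
            improved = False
            for i in range(dim):
                for d in (step, -step):
                    trial = list(x)
                    trial[i] = min(hi, max(lo, trial[i] + d))
                    if f(trial) < f(x):
                        x, improved = trial, True
            if not improved:
                step *= 0.5
        return x

    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 + rng.gauss(0, 0.05)
                     for ai, bi in zip(a, b)]   # crossover + mutation
            child = [min(hi, max(lo, c)) for c in child]
            children.append(local_search(child))  # the memetic step
        pop = parents + children
    return min(pop, key=f)

best = memetic_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

With the toy objective the refined population converges close to the origin; swapping in an NCSTAR parameter-estimation loss would follow the same pattern.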
Test Operations Procedure (TOP) 06-2-301 Wind Testing
2017-06-14
critical to ensure that the test item is exposed to the required wind speeds. This may be an iterative process as the fan blade pitch, fan speed...fan speed is the variable that is adjusted to reach the required velocities. Calibration runs with a range of fan speeds are performed and a
Defensive Swarm: An Agent Based Modeling Analysis
2017-12-01
INITIAL ALGORITHM (SINGLE-RUN) TESTING ... 1. Patrol Algorithm—Passive... scalability are therefore quite important to modeling in this highly variable domain. One can force the software to run the gamut of options to see... changes in operating constructs or procedures. Additionally, modelers can run thousands of iterations testing the model under different circumstances
Graphic Design Education: A Revised Assessment Approach to Encourage Deep Learning
ERIC Educational Resources Information Center
Ellmers, Grant; Foley, Marius; Bennett, Sue
2008-01-01
In this paper we outline the review and iterative refinement of assessment procedures in a final year graphic design subject at the University of Wollongong. Our aim is to represent the main issues in assessing graphic design work, and informed by the literature, particularly "notions of creativity" (Cowdroy & de Graaff, 2005), to…
Military Standard Common APSE (Ada Programming Support Environment) Interface Set (CAIS).
1985-01-01
QUEUE_BASE.LAST_KEY(QUEUE_NAME), LAST_RELATION(QUEUE_NAME), FILE_NODE, FORM, ATTRIBUTES, ACCESS_CONTROL, LEVEL); CLOSE(QUEUE_BASE); CLOSE(FILE_NODE)... PROPOSED MIL-STD-CAIS, 31 JANUARY 1985: procedure ITERATE (ITERATOR : out NODE_ITERATOR; NAME : NAME_STRING; KIND : NODE_KIND; KEY : RELATIONSHIP_KEY; PATTERN : R
Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo
2014-09-01
Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions on the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, iteratively fixing the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The model structures achieved have only significant, sensitive and uncorrelated parameters and are able to calibrate different experimental data. We show that consumption, suboptimal growth and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene expression rules, biomass requirements and ATP maintenance. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
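The fix-and-refit idea, iteratively freezing parameters whose perturbation barely changes the fit, can be illustrated with a toy model. This is a hedged sketch of the diagnostic loop only, not the authors' dFBA protocol; the finite-difference sensitivity measure, the tolerance, and the toy model are our assumptions:

```python
def sensitivity_rank(model, params, xs, ys, eps=1e-4):
    """Relative local sensitivity of the fit residual to each parameter."""
    def sse(p):
        return sum((model(x, p) - y) ** 2 for x, y in zip(xs, ys))
    base = sse(params)
    sens = []
    for i in range(len(params)):
        p = list(params)
        p[i] += eps * (abs(p[i]) or 1.0)
        sens.append(abs(sse(p) - base))
    return sens

def fix_insignificant(model, params, xs, ys, tol=1e-6):
    """Iteratively flag parameters whose perturbation barely changes the
    fit, mimicking the fix-and-refit loop described in the abstract."""
    free = set(range(len(params)))
    while True:
        sens = sensitivity_rank(model, params, xs, ys)
        weakest = min(free, key=lambda i: sens[i])
        if sens[weakest] > tol or len(free) == 1:
            return sorted(free)
        free.discard(weakest)  # fix this parameter at its current value

# toy model: parameter p[2] has no effect at all, so it should be fixed
model = lambda x, p: p[0] * x + p[1] * x * x + 0.0 * p[2]
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.2, 4.1, 9.3]
kept = fix_insignificant(model, [1.0, 1.0, 1.0], xs, ys)
```

On this toy model the inert third parameter is fixed and the two influential ones are kept as calibration targets.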
Method for hyperspectral imagery exploitation and pixel spectral unmixing
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2003-01-01
An efficient hybrid approach to exploit hyperspectral imagery and unmix spectral pixels. This hybrid approach uses a genetic algorithm to solve for the abundance vector of the first pixel of a hyperspectral image cube. This abundance vector is used as the initial state in a robust filter to derive the abundance estimate for the next pixel. By using a Kalman filter, the abundance estimate for a pixel can be obtained in a one-iteration procedure, which is much faster than the genetic algorithm. The output of the robust filter is fed to the genetic algorithm again to derive an accurate abundance estimate for the current pixel. Using the robust filter solution as the starting point of the genetic algorithm speeds up its evolution. After obtaining the accurate abundance estimate, the procedure moves to the next pixel and uses the output of the genetic algorithm as the previous state estimate to derive the abundance estimate for this pixel using the robust filter. The genetic algorithm is then again used to derive an accurate abundance estimate efficiently based on the robust filter solution. This iteration continues until the pixels in the hyperspectral image cube are exhausted.
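The alternation described above, a cheap recursive predictor seeding an expensive refiner pixel by pixel, can be sketched with scalar stand-ins. In this hedged sketch the "filter" is simply the previous pixel's solution and the "genetic algorithm" is a hill climber; only the warm-start pattern is reproduced, not the paper's estimators:

```python
def refine(f, x0, step=0.25, iters=60):
    """Stand-in for the expensive global optimizer (the GA in the
    abstract): simple hill climbing started from x0."""
    x = x0
    for _ in range(iters):
        moved = False
        for d in (step, -step):
            if f(x + d) < f(x):
                x, moved = x + d, True
        if not moved:
            step *= 0.5  # shrink step once no move improves
    return x

def unmix_sequence(costs, x_init):
    """Warm-start pattern from the abstract: each pixel's refiner is
    seeded by the previous pixel's estimate."""
    estimates, x = [], x_init
    for f in costs:
        x = refine(f, x)      # refine from the previous pixel's solution
        estimates.append(x)
    return estimates

# toy 'pixels': quadratic costs whose minima drift slowly, as neighboring
# pixel abundances do in a smooth scene
targets = [1.0, 1.1, 1.25, 1.3]
costs = [lambda x, t=t: (x - t) ** 2 for t in targets]
est = unmix_sequence(costs, x_init=0.0)
```

Because each cost's minimum sits near the previous one, every refinement starts close to its solution, which is precisely why the warm start pays off.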
Learning from adaptive neural dynamic surface control of strict-feedback systems.
Wang, Min; Wang, Cong
2015-06-01
Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems based on adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing and doing with learned knowledge. To achieve the learning, this paper first proposes stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of NN input variables is easily verified since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, a learning control method using the learned knowledge is proposed to achieve closed-loop stability and improved control performance. Simulation studies demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of NN input variables.
High-Speed Digital Scan Converter for High-Frequency Ultrasound Sector Scanners
Chang, Jin Ho; Yen, Jesse T.; Shung, K. Kirk
2008-01-01
This paper presents a high-speed digital scan converter (DSC) capable of providing more than 400 images per second, which is necessary to examine the activities of the mouse heart, whose rate is 5–10 beats per second. To achieve the desired high-speed performance in a cost-effective manner, the DSC adopts a linear interpolation algorithm in which the two samples nearest to each object pixel of a monitor are selected and only angular interpolation is performed. Through computer simulation with the Field II program, its accuracy was investigated by comparison to bilinear interpolation, regarded as the best algorithm in terms of accuracy and processing speed. The simulation results show that the linear interpolation algorithm is capable of providing acceptable image quality, meaning that the difference in root mean square error (RMSE) between the linear and bilinear interpolation algorithms is below 1%, if the sample rate of the envelope samples is at least four times the Nyquist rate for the baseband component of echo signals. The designed DSC was implemented with a single FPGA (Stratix EP1S60F1020C6, Altera Corporation, San Jose, CA) on a DSC board that is part of a high-speed ultrasound imaging system. The temporal and spatial resolutions of the implemented DSC were evaluated by examining its maximum processing time, using a time stamp indicating when an image is completely formed, and by wire phantom testing, respectively. The experimental results show that the implemented DSC is capable of providing images at a rate of 400 images per second with negligible processing error. PMID:18430449
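A minimal sketch of the angular-only interpolation scheme the abstract describes: for a display pixel, range is snapped to the nearest sample and only the two nearest scan lines are blended along angle. The sector geometry (apex at the origin, uniform angle spacing) and the function name are our assumptions, not the FPGA design:

```python
import math

def scan_convert_pixel(frame, theta0, dtheta, dr, x, y):
    """One pixel of an angular-interpolation scan converter sketch.
    frame[j][i] is the i-th range sample of the j-th scan line;
    theta0/dtheta give the first scan-line angle and angular pitch,
    dr the range sample spacing."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x)
    i = int(round(r / dr))            # nearest range sample (no radial interp.)
    t = (theta - theta0) / dtheta     # fractional scan-line index
    j = int(t)
    w = t - j                         # angular interpolation weight
    if not (0 <= j < len(frame) - 1 and 0 <= i < len(frame[0])):
        return 0.0                    # pixel outside the imaged sector
    return (1.0 - w) * frame[j][i] + w * frame[j + 1][i]
```

A pixel halfway between two scan lines receives the average of their values at the snapped range, which is the behavior the abstract contrasts with full bilinear interpolation.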
NASA Astrophysics Data System (ADS)
Domnisoru, L.; Modiga, A.; Gasparotti, C.
2016-08-01
At the ship design stage, the first step of the hull structural assessment is the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on 3D-hull offset line non-linearities, and involves three interlinked iterative cycles on the floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical study case we have considered a large LPG liquefied petroleum gas carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.
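The innermost of the three interlinked cycles, floating equilibrium, can be sketched as a fixed-point iteration on draft until buoyancy balances weight. The box-shaped hull below is a deliberately simplified stand-in (a real hull needs the full 3D offset table, and the pitch and roll trim cycles are omitted); all names and values are our assumptions:

```python
def box_barge_draft(weight_n, length_m, beam_m, rho=1025.0, g=9.81,
                    tol=1e-8, max_iter=100):
    """Fixed-point sketch of a 'floating equilibrium' cycle: iterate the
    draft of a box-shaped hull until buoyancy matches weight."""
    draft = 1.0  # initial guess (m)
    for _ in range(max_iter):
        buoyancy = rho * g * length_m * beam_m * draft  # N, box hull
        new_draft = draft * weight_n / buoyancy          # scale by force ratio
        if abs(new_draft - draft) < tol:
            break
        draft = new_draft
    return draft

# a 50 m x 10 m barge weighing exactly what 2 m of draft displaces
w = 1025.0 * 9.81 * 50.0 * 10.0 * 2.0
draft = box_barge_draft(w, 50.0, 10.0)
```

For a wall-sided box the iteration converges in one step; with real offset tables the buoyancy is a non-linear function of draft and several iterations are needed, which is the point of the paper's iterative procedure.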
Sorting of a multi-subunit ubiquitin ligase complex in the endolysosome system
Yang, Xi; Arines, Felichi Mae; Zhang, Weichao
2018-01-01
The yeast Dsc E3 ligase complex has long been recognized as a Golgi-specific protein ubiquitination system. It shares a striking sequence similarity to the Hrd1 complex that plays critical roles in the ER-associated degradation pathway. Using biochemical purification and mass spectrometry, we identified two novel Dsc subunits, which we named Gld1 and Vld1. Surprisingly, Gld1 and Vld1 do not coexist in the same complex. Instead, they compete with each other to form two functionally independent Dsc subcomplexes. The Vld1 subcomplex takes the AP3 pathway to reach the vacuole membrane, whereas the Gld1 subcomplex travels through the VPS pathway and is cycled between Golgi and endosomes by the retromer. Thus, instead of being Golgi-specific, the Dsc complex can regulate protein levels at three distinct organelles, namely Golgi, endosome, and vacuole. Our study provides a novel model of achieving multi-tasking for transmembrane ubiquitin ligases with interchangeable trafficking adaptors. PMID:29355480
Psychological stress during exercise: cardiorespiratory and hormonal responses.
Webb, Heather E; Weldy, Michael L; Fabianke-Kadue, Emily C; Orndorff, G R; Kamimori, Gary H; Acevedo, Edmund O
2008-12-01
The purpose of this study was to examine the cardiorespiratory (CR) and stress hormone responses to a combined physical and mental stress. Eight participants (VO2(max) = 41.24 +/- 6.20 ml kg(-1) min(-1)) completed two experimental conditions of the same duration and intensity: a dual stress condition (DSC), a 37-min ride at 60% of VO2(max) during which participants responded to a computerized mental challenge, and an exercise-alone condition (EAC) without the mental challenge. Significant interactions across time were found for CR responses, with heart rate, ventilation, and respiration rate showing greater increases in the DSC. Additionally, norepinephrine was significantly greater in the DSC at the end of the combined challenge. Furthermore, cortisol area-under-the-curve (AUC) was also significantly elevated during the DSC. These results demonstrate that a mental challenge during exercise can exacerbate the stress response, including the release of hormones that have been linked to negative health consequences (cardiovascular, metabolic, autoimmune illnesses).
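The cortisol AUC mentioned above is conventionally computed with the trapezoid rule over the discrete sampling times; a small sketch, where the times and concentrations are illustrative values, not the study's data:

```python
def auc_trapezoid(times, values):
    """Area under the curve by the trapezoid rule, the usual way a
    hormone AUC is computed from discrete sampling times."""
    return sum((t1 - t0) * (v0 + v1) / 2.0
               for (t0, v0), (t1, v1) in zip(zip(times, values),
                                             zip(times[1:], values[1:])))

# hypothetical sampling: cortisol (nmol/L) at 0, 15, 30, 45 min
area = auc_trapezoid([0, 15, 30, 45], [10.0, 14.0, 12.0, 11.0])
```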
Verevkin, Sergey P; Emel'yanenko, Vladimir N; Zaitsau, Dzmitry H; Ralys, Ricardas V; Schick, Christoph
2012-04-12
Differential scanning calorimetry (DSC) has been used to measure enthalpies of synthesis reactions of the 1-alkyl-3-methylimidazolium bromide [C(n)mim][Br] ionic liquids from 1-methylimidazole and n-alkyl bromides (with n = 4, 5, 6, 7, and 8). The optimal experimental conditions have been elaborated. Enthalpies of formation of these ionic liquids in the liquid state have been determined using the DSC results according to the Hess Law. The ideal-gas enthalpies of formation of [C(n)mim][Br] were calculated using the methods of quantum chemistry. They were used together with the DSC results to derive indirectly the enthalpies of vaporization of the ionic liquids under study. In order to validate the indirect determination, the experimental vaporization enthalpy of [C(4)mim][Br] was measured by using a quartz crystal microbalance (QCM). The combination of reaction enthalpy measurements by DSC with modern high-level first-principles calculations opens valuable indirect thermochemical options to obtain values of vaporization enthalpies of ionic liquids.
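The indirect route described above is Hess-law arithmetic: the vaporization enthalpy is the difference between the gas-phase enthalpy of formation (from quantum chemistry) and the liquid-phase value (derived from the DSC reaction enthalpies). A sketch with placeholder numbers; the values below are illustrative, not the paper's results:

```python
def vaporization_enthalpy(dfH_gas_kj, dfH_liquid_kj):
    """Hess's law: ΔvapH = ΔfH(gas) - ΔfH(liquid), both in kJ/mol."""
    return dfH_gas_kj - dfH_liquid_kj

# illustrative placeholder values in kJ/mol
dvapH = vaporization_enthalpy(dfH_gas_kj=-30.0, dfH_liquid_kj=-180.0)
```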
Non-resonant dynamic stark control of vibrational motion with optimized laser pulses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Esben F.; Henriksen, Niels E.
2016-06-28
The term dynamic Stark control (DSC) has been used to describe methods of quantum control related to the dynamic Stark effect, i.e., a time-dependent distortion of energy levels. Here, we employ analytical models that present clear and concise interpretations of the principles behind DSC. Within a linearly forced harmonic oscillator model of vibrational excitation, we show how the vibrational amplitude is related to the pulse envelope, and independent of the carrier frequency of the laser pulse, in the DSC regime. Furthermore, we shed light on DSC regarding the construction of optimal pulse envelopes, from a time-domain as well as a frequency-domain perspective. Finally, in a numerical study beyond the linearly forced harmonic oscillator model, we show that a pulse envelope can be constructed such that a vibrational excitation into a specific excited vibrational eigenstate is accomplished. The pulse envelope is constructed such that high intensities are avoided in order to eliminate the process of ionization.
Kearns, Kenneth L; Swallen, Stephen F; Ediger, M D; Sun, Ye; Yu, Lian
2009-02-12
Indomethacin glasses of varying stabilities were prepared by physical vapor deposition onto substrates at 265 K. Enthalpy relaxation and the mobility onset temperature were assessed with differential scanning calorimetry (DSC). Quasi-isothermal temperature-modulated DSC was used to measure the reversing heat capacity during annealing above the glass transition temperature Tg. At deposition rates near 8 Å/s, scanning DSC shows two enthalpy relaxation peaks and quasi-isothermal DSC shows a two-step change in the reversing heat capacity. We attribute these features to two distinct local packing structures in the vapor-deposited glass, and this interpretation is supported by the strong correlation between the two calorimetric signatures of the glass to liquid transformation. At lower deposition rates, a larger fraction of the sample is prepared in the more stable local packing. The transformation of the vapor-deposited glasses into the supercooled liquid above Tg is exceedingly slow, as much as 4500 times slower than the structural relaxation time of the liquid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polan, D; Kamp, J; Lee, JY
Purpose: To perform validation and commissioning of a commercial deformable image registration (DIR) algorithm (Velocity, Varian Medical Systems) for numerous clinical sites using single and multi-modality images. Methods: In this retrospective study, the DIR algorithm was evaluated for 10 patients in each of the following body sites: head and neck (HN), prostate, liver, and gynecological (GYN). HN DIRs were evaluated from planning (p)CT to re-pCT and pCTs to daily CBCTs using dice similarity coefficients (DSC) of corresponding anatomical structures. Prostate DIRs were evaluated from pCT to CBCTs using DSC and target registration error (TRE) of implanted RF beacons within the prostate. Liver DIRs were evaluated from pMR to pCT using DSC and TRE of vessel bifurcations. GYN DIRs were evaluated between fractionated brachytherapy MRIs using DSC of corresponding anatomical structures. Results: Analysis to date has given average DSCs for HN pCT-to-(re)pCT DIR for the brainstem, cochleas, constrictors, spinal canal, cord, esophagus, larynx, parotids, and submandibular glands as 0.88, 0.65, 0.67, 0.91, 0.77, 0.69, 0.77, 0.87, and 0.71, respectively. Average DSCs for HN pCT-to-CBCT DIR for the constrictors, spinal canal, esophagus, larynx, parotids, and submandibular glands were 0.64, 0.90, 0.62, 0.82, 0.75, and 0.69, respectively. For prostate pCT-to-CBCT DIR the DSC for the bladder, femoral heads, prostate, and rectum were 0.71, 0.82, 0.69, and 0.61, respectively. Average TRE using implanted beacons was 3.35 mm. For liver pCT-to-pMR, the average liver DSC was 0.94 and TRE was 5.26 mm. For GYN MR-to-MR DIR the DSC for the bladder, sigmoid colon, GTV, and rectum were 0.79, 0.58, 0.67, and 0.76, respectively. Conclusion: The Velocity DIR algorithm has been evaluated over a number of anatomical sites.
This work serves to document the uncertainties in the DIR as part of the commissioning process, so that they can be accounted for in the development of downstream clinical processes. This work was supported in part by a co-development agreement with Varian Medical Systems.
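The dice similarity coefficient used throughout the evaluation above has a one-line definition, DSC = 2|A∩B|/(|A|+|B|); a minimal sketch over voxel index sets (the toy masks are ours, not structure contours from the study):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks given as
    iterables of voxel indices: DSC = 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    denom = len(a) + len(b)
    return 2.0 * len(a & b) / denom if denom else 1.0

# toy 1D 'masks' as voxel index lists
overlap = dice([1, 2, 3, 4], [3, 4, 5, 6])
```

A value of 1.0 means identical masks and 0.0 means no overlap, which is why per-structure averages in the 0.6-0.9 range are reported above.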
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P; Chang, T; Huang, K
2014-06-01
Purpose: This study aimed to evaluate the feasibility of using a short arterial spin labeling (ASL) scan for calibrating the dynamic susceptibility contrast- (DSC-) MRI in a group of patients with internal carotid artery stenosis. Methods: Six patients with unilateral ICA stenosis were enrolled in the study on a 3T clinical MRI scanner. The ASL-cerebral blood flow (-CBF) maps were calculated by averaging different numbers of dynamic points (N=1-45) acquired by using a Q2TIPS sequence. For DSC perfusion analysis, an arterial input function was selected to derive the relative cerebral blood flow (rCBF) map and the delay (Tmax) map. Patient-specific CF was calculated from the mean ASL- and DSC-CBF obtained from three different masks: (1) Tmax < 3s, (2) combined gray matter mask with mask 1, (3) mask 2 with large vessels removed. One CF value was created for each number of averages by using each of the three masks for calibrating the DSC-CBF map. The CF value of the largest number of averages (NL=45) was used to determine the acceptable range (<10%, <15%, and <20%) of CF values corresponding to the minimally acceptable number of averages (NS) for each patient. Results: Comparing DSC CBF maps corrected by CF values of NL (CBFL) in ACA, MCA and PCA territories, all masks resulted in smaller CBF on the ipsilateral side than the contralateral side of the MCA territory (p<.05). The values obtained from mask 1 were significantly different from mask 3 (p<.05). Using mask 3, the median values of Ns were 4 (<10%), 2 (<15%) and 2 (<20%), with the worst case scenario (maximum Ns) of 25, 4, and 4, respectively. Conclusion: This study found that reliable calibration of DSC-CBF can be achieved from a short pulsed ASL scan. We suggest using a mask based on the Tmax threshold, the inclusion of gray matter only, and the exclusion of large vessels when performing the calibration.
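The patient-specific calibration described above reduces to a single scaling factor, the ratio of mean ASL-CBF to mean DSC-rCBF over the chosen mask, applied voxel-wise to the relative DSC map. A hedged sketch with made-up flattened maps and a toy mask, not the study's processing pipeline:

```python
def calibrate_dsc(dsc_rcbf, asl_cbf, mask):
    """Compute CF = mean(ASL-CBF) / mean(DSC-rCBF) over the mask voxels
    and return CF plus the calibrated DSC map (lists index voxels)."""
    dsc_vals = [dsc_rcbf[i] for i in mask]
    asl_vals = [asl_cbf[i] for i in mask]
    cf = (sum(asl_vals) / len(asl_vals)) / (sum(dsc_vals) / len(dsc_vals))
    return cf, [v * cf for v in dsc_rcbf]

# toy flattened maps (arbitrary units) and a 3-voxel calibration mask
cf, calibrated = calibrate_dsc([1.0, 2.0, 3.0, 4.0],
                               [50.0, 70.0, 60.0, 80.0],
                               mask=[0, 1, 2])
```

Because a single CF scales the whole map, the choice of mask (Tmax threshold, gray matter only, large vessels excluded) directly determines the calibrated absolute CBF values, which is the comparison the study performs.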
Effect of additives on mineral trioxide aggregate setting reaction product formation.
Zapf, Angela M; Chedella, Sharath C V; Berzins, David W
2015-01-01
Mineral trioxide aggregate (MTA) sets via hydration of calcium silicates to yield calcium silicate hydrates and calcium hydroxide (Ca[OH]2). However, a drawback of MTA is its long setting time. Therefore, many additives have been suggested to reduce the setting time. The effect those additives have on setting reaction product formation has been ignored. The objective was to examine the effect additives have on MTA's setting time and setting reaction using differential scanning calorimetry (DSC). MTA powder was prepared with distilled water (control), phosphate buffered saline, 5% calcium chloride (CaCl2), 3% sodium hypochlorite (NaOCl), or lidocaine in a 3:1 mixture and placed in crucibles for DSC evaluation. The setting exothermic reactions were evaluated at 37°C for 8 hours to determine the setting time. Separate samples were stored and evaluated using dynamic DSC scans (37°C→640°C at 10°C/min) at 1 day, 1 week, 1 month, and 3 months (n = 9/group/time). Dynamic DSC quantifies the reaction product formed from the amount of heat required to decompose it. Thermographic peaks were integrated to determine enthalpy, which was analyzed with analysis of variance/Tukey test (α = 0.05). Isothermal DSC identified 2 main exothermal peaks occurring at 44 ± 12 and 343 ± 57 minutes for the control. Only the CaCl2 additive was an accelerant, which was observed by a greater exothermic peak at 101 ± 11 minutes, indicating a decreased setting time. The dynamic DSC scans produced an endothermic peak around 450°C-550°C attributed to Ca(OH)2 decomposition. The use of a few additives (NaOCl and lidocaine) resulted in significantly less Ca(OH)2 product formation. DSC was used to discriminate calcium hydroxide formation in MTA mixed with various additives and showed NaOCl and lidocaine are detrimental to MTA reaction product formation, whereas CaCl2 accelerated the reaction. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
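Integrating a thermographic peak to obtain enthalpy, as done in the DSC analysis above, amounts to a baseline-subtracted trapezoid integral of heat flow over time; a small sketch where the baseline, sample mass, and data points are illustrative assumptions, not the study's measurements:

```python
def peak_enthalpy(times_s, heat_flow_mw, baseline_mw=0.0, mass_mg=10.0):
    """Integrate a DSC peak (trapezoid rule) to specific enthalpy in J/g:
    mW * s = mJ, and mJ/mg is numerically equal to J/g."""
    area_mj = sum((t1 - t0) * ((h0 - baseline_mw) + (h1 - baseline_mw)) / 2.0
                  for (t0, h0), (t1, h1) in zip(
                      zip(times_s, heat_flow_mw),
                      zip(times_s[1:], heat_flow_mw[1:])))
    return area_mj / mass_mg

# illustrative triangular 2 mW peak lasting 20 s on a flat baseline
dh = peak_enthalpy([0.0, 10.0, 20.0], [0.0, 2.0, 0.0])
```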
Boguta, Patrycja; Sokołowska, Zofia; Skic, Kamil
2017-01-01
Thermogravimetry, coupled with differential scanning calorimetry, quadrupole mass spectrometry, and Fourier-transform infrared spectroscopy (TG-DSC-QMS-FTIR), was applied to monitor the thermal stability (in an N2 pyrolytic atmosphere) and chemical properties of natural polymers, fulvic (FA) and humic acids (HA), isolated from chemically different soils. Three temperature ranges, R1, 40-220°C; R2, 220-430°C; and R3, 430-650°C, were distinguished from the DSC data, related to the main thermal processes of different structures (including transformations without weight loss). Weight loss (ΔM) estimated from TG curves at the above temperature intervals revealed distinct differences within the samples in the content of physically adsorbed water (at R1), volatile and labile functional groups (at R2) as well as recalcitrant and refractory structures (at R3). QMS and FTIR modules enabled the chemical identification (by masses and by functional groups, respectively) of gaseous species evolved during thermal decomposition at R1, R2 and R3. Variability in shape, area and temperature of TG, DSC, QMS and FTIR peaks revealed differences in thermal stability and chemical structure of the samples between the FAs and HAs fractions of different origin. The statistical analysis showed that the parameters calculated from QMS (areas of m/z = 16, 17, 18, 44), DSC (MaxDSC) and TG (ΔM) at R1, R2 and R3 correlated with selected chemical properties of the samples, such as N, O and COOH content as well as E2/E6 and E2/E4 indexes. This indicated a high potential for the coupled method to monitor the chemical changes of humic substances. A new humification parameter, HTD, based on simple calculations of weight loss at specific temperature intervals proved to be a good alternative to indexes obtained from other methods. 
The above findings showed that the TG-DSC-QMS-FTIR coupled technique can represent a useful tool for the comprehensive assessment of FAs and HAs properties related to their various origin.
Chiu, Michael H.; Prenner, Elmar J.
2011-01-01
Differential Scanning Calorimetry (DSC) is a highly sensitive technique to study the thermotropic properties of many different biological macromolecules and extracts. Since its early development, DSC has been applied to the pharmaceutical field with excipient studies and DNA drugs. In recent times, more attention has been applied to lipid-based drug delivery systems and drug interactions with biomimetic membranes. Highly reproducible phase transitions have been used to determine values such as the type of binding interaction, purity, stability, and release from a drug delivery mechanism. This review focuses on the use of DSC for biochemical and pharmaceutical applications. PMID:21430954
Moreno-Vásquez, María Jesús; Valenzuela-Buitimea, Emma Lucía; Plascencia-Jatomea, Maribel; Encinas-Encinas, José Carmelo; Rodríguez-Félix, Francisco; Sánchez-Valdes, Saúl; Rosas-Burgos, Ema Carina; Ocaño-Higuera, Víctor Manuel; Graciano-Verdugo, Abril Zoraida
2017-01-02
Chitosan was functionalized with epigallocatechin gallate (EGCG) by a free radical-induced grafting procedure, which was carried out by a redox pair (ascorbic acid/hydrogen peroxide) as the radical initiator. The successful preparation of EGCG grafted-chitosan was verified by spectroscopic (UV, FTIR and XPS) and thermal (DSC and TGA) analyses. The degree of grafting of phenolic compounds onto the chitosan was determined by the Folin-Ciocalteu procedure. Additionally, the biological activities (antioxidant and antibacterial) of pure EGCG, blank chitosan and EGCG grafted-chitosan were evaluated. The spectroscopic and thermal results indicate chitosan functionalization with EGCG; the EGCG content was 25.8 mg/g of EGCG grafted-chitosan. The antibacterial activity of the EGCG grafted-chitosan was increased compared to pure EGCG or blank chitosan against S. aureus and Pseudomonas sp. (p<0.05). Additionally, EGCG grafted-chitosan showed higher antioxidant activity than blank chitosan. These results indicate that EGCG grafted-chitosan might be useful in active food packaging. Copyright © 2016 Elsevier Ltd. All rights reserved.
Dynamic Relaxational Behaviour of Hyperbranched Polyether Polyols
NASA Astrophysics Data System (ADS)
Navarro-Gorris, A.; Garcia-Bernabé, A.; Stiriba, S.-E.
2008-08-01
Hyperbranched polymers are highly branched cascade polymers that are easily accessible via a one-pot procedure from ABm-type monomers. A key property of hyperbranched polymers is their molecular architecture, which allows the core-shell morphology to be manipulated for specific applications in materials and medical sciences. Since the discovery of hyperbranched polymer materials, an increasing number of reports have described synthetic procedures and technological applications of such materials, but their physical properties remained less studied until the last decade. In the present work, different esterified hyperbranched polyglycerols were prepared from polyglycerol precursors in the presence of acetic acid, yielding degrees of functionalization ranging from 0 to 94%. The thermal behavior of the obtained samples was studied by Differential Scanning Calorimetry (DSC). Dielectric Spectroscopy measurements were analyzed by combining loss-spectra deconvolution with the modulus formalism. All acetylated polyglycerols exhibited a main relaxation related to the glass transition (α process) and two sub-glassy relaxations (β and γ processes), which vanish at high degrees of functionalization.
Determining the main thermodynamic parameters of caffeine melting by means of DSC
NASA Astrophysics Data System (ADS)
Agafonova, E. V.; Moshchenskii, Yu. V.; Tkachenko, M. L.
2012-06-01
The temperature and enthalpy of the melting of caffeine, which are 235.5 ± 0.1°C and 19.6 ± 0.2 kJ/mol, respectively, are determined by DSC. The melting entropy and the cryoscopic constant of caffeine are calculated.
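The melting entropy quoted above follows directly from the two reported values via ΔS_m = ΔH_m / T_m (with T_m in kelvin); a quick numerical check using only the numbers from the abstract:

```python
# Melting entropy of caffeine from the reported DSC values:
# dS_m = dH_m / T_m, with the melting temperature converted to kelvin.
T_m = 235.5 + 273.15      # melting temperature, K
dH_m = 19.6e3             # melting enthalpy, J/mol

dS_m = dH_m / T_m         # melting entropy, J/(mol*K)
print(round(dS_m, 1))     # -> 38.5
```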
47 CFR 80.225 - Requirements for selective calling equipment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... manufacture, importation, sale or installation of non-portable DSC equipment that does not comply with either..., 2011. (5) The manufacture, importation, or sale of handheld, portable DSC equipment that does not... to establish or maintain communications provided that: (i) These signalling techniques are not used...
ERIC Educational Resources Information Center
D'Amico, Teresa; Donahue, Craig J.; Rais, Elizabeth A.
2008-01-01
This lab experiment illustrates the use of differential scanning calorimetry (DSC) and thermal gravimetric analysis (TGA) in the measurement of polymer properties. A total of seven exercises are described. These are dry exercises: students interpret previously recorded scans. They do not perform the experiments. DSC was used to determine the…
Yuan, Xiaoda; Carter, Brady P; Schmidt, Shelly J
2011-01-01
Similar to an increase in temperature at constant moisture content, water vapor sorption by an amorphous glassy material at constant temperature causes the material to transition into the rubbery state. However, comparatively little research has investigated the measurement of the critical relative humidity (RHc) at which the glass transition occurs at constant temperature. Thus, the central objective of this study was to investigate the relationship between the glass transition temperature (Tg), determined using thermal methods, and the RHc obtained using an automatic water vapor sorption instrument. Dynamic dewpoint isotherms were obtained for amorphous polydextrose from 15 to 40 °C. RHc was determined using an optimized 2nd-derivative method; however, 2 simpler RHc determination methods were also tested as a secondary objective. No statistical difference was found between the 3 RHc methods. Differential scanning calorimetry (DSC) Tg values were determined using polydextrose equilibrated from 11.3% to 57.6% RH. Both standard DSC and modulated DSC (MDSC) methods were employed, since some of the polydextrose thermograms exhibited a physical aging peak. Thus, a tertiary objective was to compare Tg values obtained using 3 different methods (DSC first scan, DSC rescan, and MDSC), to determine which method(s) yielded the most accurate Tg values. In general, onset and midpoint DSC first scan and MDSC Tg values were similar, whereas onset and midpoint DSC rescan values were different. State diagrams of RHc and experimental temperature and Tg and %RH were compared. These state diagrams, though obtained via very different methods, showed relatively good agreement, confirming our hypothesis that water vapor sorption isotherms can be used to directly detect the glassy to rubbery transition. 
Practical Application: The food polymer science (FPS) approach, pioneered by Slade and Levine, is being successfully applied in the food industry for understanding, improving, and developing food processes and products. However, despite its extreme usefulness, the Tg, a key element of the FPS approach, remains a challenging parameter to routinely measure in amorphous food materials, especially complex materials. This research demonstrates that RHc values, obtained at constant temperature using an automatic water vapor sorption instrument, can be used to detect the glassy to rubbery transition and are similar to the Tg values obtained at constant %RH, especially considering the very different approaches of these 2 methods--a transition from surface adsorption to bulk absorption (water vapor sorption) versus a step change in the heat capacity (DSC thermal method).
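The optimized 2nd-derivative method is not specified in detail in the abstract; the sketch below shows only the general idea, locating the critical RH as the point of maximum curvature of a sorption curve, on synthetic data (the sigmoidal isotherm and its parameters are illustrative assumptions, not the study's data):

```python
import numpy as np

# Hypothetical sketch: detect a critical relative humidity (RHc) as the
# maximum of the second derivative of a moisture-sorption curve.
# A synthetic isotherm with an uptake inflection near RH = 60% stands
# in for real instrument data.
rh = np.linspace(0, 100, 501)
moisture = 2.0 + 0.02 * rh + 6.0 / (1.0 + np.exp(-(rh - 60.0) / 3.0))

d2 = np.gradient(np.gradient(moisture, rh), rh)   # numerical 2nd derivative
rh_c = rh[np.argmax(d2)]                          # RH of maximum curvature
print(rh_c)
```

On real sorption data the derivative would typically be smoothed before the maximum is taken, which is presumably part of what "optimized" refers to.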
Serious game training improves performance in combat life-saving interventions.
Planchon, Jerome; Vacher, Anthony; Comblet, Jeremy; Rabatel, Eric; Darses, Françoise; Mignon, Alexandre; Pasquier, Pierre
2018-01-01
In modern warfare, almost 25% of combat-related deaths are considered preventable if life-saving interventions are performed. Therefore, Tactical Combat Casualty Care (TCCC) training for soldiers is a major challenge. In 2014, the French Military Medical Service supported the development of 3D-SC1®, a serious game designed for the French TCCC program, entitled Sauvetage au Combat de niveau 1 (SC1). Our study aimed to evaluate the impact on performance of additional training with 3D-SC1®. The study assessed the performance of soldiers randomly assigned to one of two groups, before (measure 1) and after (measure 2) receiving additional training. This training involved either 3D-SC1® (Intervention group) or a DVD (Control group). The principal measure was the individual performance (on a 16-point scale), assessed by two investigators during a hands-on simulation. First, the mean performance score was compared between the two measures for the Intervention and Control groups using a two-tailed paired t-test. Second, a multivariable linear regression was used to determine the difference in the impacts of 3D-SC1® and DVD training, and the order of presentation of the two scenarios, on the mean change from baseline in performance scores. A total of 96 subjects were evaluated: seven could not be followed up, while 50 were randomly allocated to the Intervention group and 39 to the Control group. Between measure 1 and measure 2, the mean (SD) performance score increased from 9.9 (3.13) to 14.1 (1.23) and from 9.4 (2.97) to 12.5 (1.83) for the Intervention group and Control group, respectively (p<0.0001). The adjusted mean difference in performance scores between 3D-SC1® and DVD training was 1.1 (95% confidence interval -0.3, 2.5) (p=0.14). Overall, the study found that supplementing SC1 training with either 3D-SC1® or a DVD improved performance, as assessed by a hands-on simulation.
However, our analysis did not find a statistically significant difference between the effects of these two training tools. 3D-SC1® could be an efficient pedagogical tool for training soldiers in life-saving interventions. In the current context of terrorist threat, a specifically adapted version of 3D-SC1® may be a cost-effective and engaging way to train a large civilian public. Copyright © 2017 Elsevier Ltd. All rights reserved.
Friend, Milton; Franson, J. Christian; Friend, Milton; Gibbs, Samantha E.J.; Wild, Margaret A.
2015-01-01
This is the third iteration of the National Wildlife Health Center's (NWHC) field guide developed primarily to assist field managers and biologists address diseases they encounter. By itself, the first iteration, “Field Guide of Wildlife Diseases: General Field Procedures and Diseases of Migratory Birds,” was simply another addition to an increasing array of North American field guides and other publications focusing on disease in free-ranging wildlife populations. Collectively, those publications were reflecting the ongoing transition in the convergence of wildlife management and wildlife disease as foundational components within the structure of wildlife conservation as a social enterprise serving the stewardship of our wildlife resources. For context, it is useful to consider those publications relative to a timeline of milestones involving the evolution of wildlife conservation in North America.
NASA Technical Reports Server (NTRS)
Maskew, B.
1982-01-01
VSAERO is a computer program used to predict the nonlinear aerodynamic characteristics of arbitrary three-dimensional configurations in subsonic flow. Nonlinear effects of vortex separation and vortex surface interaction are treated in an iterative wake-shape calculation procedure, while the effects of viscosity are treated in an iterative loop coupling potential-flow and integral boundary-layer calculations. The program employs a surface singularity panel method using quadrilateral panels on which doublet and source singularities are distributed in a piecewise constant form. This user's manual provides a brief overview of the mathematical model, instructions for configuration modeling and a description of the input and output data. A listing of a sample case is included.
Feature Based Retention Time Alignment for Improved HDX MS Analysis
NASA Astrophysics Data System (ADS)
Venable, John D.; Scuba, William; Brock, Ansgar
2013-04-01
An algorithm for retention time alignment of mass shifted hydrogen-deuterium exchange (HDX) data based on an iterative distance minimization procedure is described. The algorithm performs pairwise comparisons in an iterative fashion between a list of features from a reference file and a file to be time aligned to calculate a retention time mapping function. Features are characterized by their charge, retention time and mass of the monoisotopic peak. The algorithm is able to align datasets with mass shifted features, which is a prerequisite for aligning hydrogen-deuterium exchange mass spectrometry datasets. Confidence assignments from the fully automated processing of a commercial HDX software package are shown to benefit significantly from retention time alignment prior to extraction of deuterium incorporation values.
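The iterative distance-minimization idea described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: features are (charge, monoisotopic mass, retention time) tuples, candidate matches are made on charge and mass within a tolerance (so mass-shifted deuterated features would be handled by widening or offsetting that tolerance), and a linear retention-time mapping is refit iteratively. All names, tolerances, and the linear form of the map are assumptions:

```python
import numpy as np

# Hypothetical sketch of iterative retention-time alignment.
# ref and target are lists of (charge, monoisotopic_mass, rt) features;
# the map rt_ref ~ a * rt_target + b is refit from matched pairs, and
# matches are re-chosen under the current map on each pass.
def align_rt(ref, target, mass_tol=0.02, n_iter=5):
    a, b = 1.0, 0.0
    for _ in range(n_iter):
        pairs = []
        for qr, mr, tr in ref:
            cands = [(qt, mt, tt) for qt, mt, tt in target
                     if qt == qr and abs(mt - mr) < mass_tol]
            if cands:
                # keep the candidate closest in mapped retention time
                _, _, tt = min(cands, key=lambda c: abs(a * c[2] + b - tr))
                pairs.append((tt, tr))
        x, y = zip(*pairs)
        a, b = np.polyfit(x, y, 1)   # least-squares linear map
    return a, b

ref    = [(2, 500.25, 10.0), (3, 842.51, 20.0), (2, 1210.60, 30.0)]
target = [(2, 500.25, 11.0), (3, 842.51, 21.0), (2, 1210.60, 31.0)]
a, b = align_rt(ref, target)
print(round(a, 3), round(b, 3))   # slope ~1, offset ~-1 for this shifted toy set
```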
NASA Technical Reports Server (NTRS)
Strong, James P.
1987-01-01
A local area matching algorithm was developed on the Massively Parallel Processor (MPP). It is an iterative technique that first matches coarse or low resolution areas and at each iteration performs matches of higher resolution. Results so far show that when good matches are possible in the two images, the MPP algorithm matches corresponding areas as well as a human observer. To aid in developing this algorithm, a control or shell program was developed for the MPP that allows interactive experimentation with various parameters and procedures to be used in the matching process. (This would not be possible without the high speed of the MPP). With the system, optimal techniques can be developed for different types of matching problems.
AMLSA Algorithm for Hybrid Precoding in Millimeter Wave MIMO Systems
NASA Astrophysics Data System (ADS)
Liu, Fulai; Sun, Zhenxing; Du, Ruiyan; Bai, Xiaoyu
2017-10-01
In this paper, an effective algorithm is proposed for hybrid precoding in mmWave MIMO systems, referred to as the alternating minimization algorithm with least squares amendment (AMLSA algorithm). Specifically, for the fully-connected structure, the presented algorithm minimizes the classical objective function to obtain the hybrid precoding matrix. It introduces an orthogonal constraint on the digital precoding matrix, which is subsequently amended by least squares after its alternating-minimization iterative result is obtained. Simulation results confirm that the achievable spectral efficiency of the proposed algorithm is somewhat better than that of the existing algorithm without the least squares amendment. Furthermore, the number of iterations is reduced slightly by improving the initialization procedure.
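The structure of such a scheme can be sketched generically. This is not the authors' AMLSA formulation: it alternates between a unit-modulus analog precoder (phase extraction) and a digital precoder kept unitary (the "orthogonal constraint", enforced here via a Procrustes step), then amends the digital precoder with an unconstrained least-squares fit at the end. Dimensions and the target matrix are illustrative:

```python
import numpy as np

# Hypothetical sketch: alternating minimization of ||F_opt - F_rf @ F_bb||_F
# with unit-modulus F_rf and (unitary-constrained) F_bb, followed by a
# least-squares amendment of F_bb.
rng = np.random.default_rng(0)
Nt, Nrf, Ns = 16, 2, 2
F_opt = rng.standard_normal((Nt, Ns)) + 1j * rng.standard_normal((Nt, Ns))

def error(F_rf, F_bb):
    return np.linalg.norm(F_opt - F_rf @ F_bb) / np.linalg.norm(F_opt)

F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, Nrf)))   # unit-modulus analog
U, _, Vh = np.linalg.svd(F_rf.conj().T @ F_opt)
F_bb = U @ Vh                                              # unitary digital (Procrustes)
err_init = error(F_rf, F_bb)

for _ in range(50):
    F_rf = np.exp(1j * np.angle(F_opt @ F_bb.conj().T))    # analog step: phases only
    U, _, Vh = np.linalg.svd(F_rf.conj().T @ F_opt)
    F_bb = U @ Vh                                          # orthogonal-constrained digital step

F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)        # least-squares amendment
print(error(F_rf, F_bb) <= err_init)                       # the amended fit is no worse
```

Each alternating step exactly solves its subproblem with the other factor fixed, so the objective is non-increasing, and the final unconstrained least-squares step can only improve on the unitary-constrained digital precoder.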
Krch, Denise; Lequerica, Anthony; Arango-Lasprilla, Juan Carlos; Rogers, Heather L; DeLuca, John; Chiaravalloti, Nancy D
2015-01-01
The purpose of the current study was to evaluate the relative contribution of acculturation to two tests of nonverbal performance in Hispanics. This study compared 40 Hispanics and 20 non-Hispanic whites on Digit Symbol-Coding (DSC) and the Wisconsin Card Sorting Test (WCST) and evaluated the relative contribution of the various acculturation components to cognitive test performance in the Hispanic group. Hispanics performed significantly worse on DSC and WCST relative to non-Hispanic whites. Multiple regressions conducted within the Hispanic group revealed that language use uniquely accounted for 11.0% of the variance on the DSC, 18.8% of the variance on WCST categories completed, and 13.0% of the variance in perseverative errors on the WCST. Additionally, years of education in the United States uniquely accounted for 14.9% of the variance in DSC. The significant impact of acculturation on DSC and WCST supports the view that nonverbal cognitive tests are not necessarily culture-free. The differential contribution of acculturation proxies highlights the importance of considering these separate components when interpreting performance on neuropsychological tests in clinical and research settings. Factors such as the country where education was received may in fact be more meaningful than the number of years of education attained. Thus, acculturation should be considered an important factor in any cognitive evaluation of culturally diverse individuals.
Crystallization processes in Ge₂Sb₂Se₄Te glass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svoboda, Roman, E-mail: roman.svoboda@upce.cz; Bezdička, Petr; Gutwirth, Jan
2015-01-15
Highlights: • Crystallization kinetics of Ge₂Sb₂Se₄Te glass was studied by DSC as a function of particle size. • All studied fractions were described in terms of the SB autocatalytic model. • The relatively high amount of Te enhances the manifestation of bulk crystallization mechanisms. • XRD analysis of samples crystallized under different conditions showed correlation with DSC data. • XRD analysis revealed a new crystallization mechanism indistinguishable by DSC. - Abstract: Differential scanning calorimetry (DSC) and X-ray diffraction (XRD) analysis were used to study crystallization in Ge₂Sb₂Se₄Te glass under non-isothermal conditions as a function of the particle size. The crystallization kinetics was described in terms of the autocatalytic Šesták–Berggren model. An extensive discussion of all aspects of a full-scale kinetic study of a crystallization process was undertaken. Dominance of the crystallization process originating from mechanically induced strains and heterogeneities was confirmed. Substitution of Se by Te was found to enhance the manifestation of the bulk crystallization mechanisms (at the expense of surface crystallization). The XRD analysis showed significant dependence of the crystalline structural parameters on the crystallization conditions (initial particle size of the glassy grains and applied heating rate). Based on this information, a new microstructural crystallization mechanism, indistinguishable by DSC, was proposed.
Inverse boundary-layer theory and comparison with experiment
NASA Technical Reports Server (NTRS)
Carter, J. E.
1978-01-01
Inverse boundary layer computational procedures, which permit nonsingular solutions at separation and reattachment, are presented. In the first technique, which is for incompressible flow, the displacement thickness is prescribed; in the second technique, for compressible flow, a perturbation mass flow is the prescribed condition. The pressure is deduced implicitly along with the solution in each of these techniques. Laminar and turbulent computations, which are typical of separated flow, are presented and comparisons are made with experimental data. In both inverse procedures, finite difference techniques are used along with Newton iteration. The resulting procedure is no more complicated than conventional boundary layer computations. These separated boundary layer techniques appear to be well suited for complete viscous-inviscid interaction computations.
Optimization of flexible wing structures subject to strength and induced drag constraints
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1977-01-01
An optimization procedure for designing wing structures subject to stress, strain, and drag constraints is presented. The optimization method utilizes an extended penalty function formulation for converting the constrained problem into a series of unconstrained ones. Newton's method is used to solve the unconstrained problems. An iterative analysis procedure is used to obtain the displacements of the wing structure including the effects of load redistribution due to the flexibility of the structure. The induced drag is calculated from the lift distribution. Approximate expressions for the constraints used during major portions of the optimization process enhance the efficiency of the procedure. A typical fighter wing is used to demonstrate the procedure. Aluminum and composite material designs are obtained. The tradeoff between weight savings and drag reduction is investigated.
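The penalty-function-plus-Newton machinery described above can be illustrated on a toy problem. This is a plain exterior-penalty sketch, not the extended penalty formulation the paper uses: minimize f(x) = x₁² + x₂² subject to g(x) = 1 − x₁ − x₂ ≤ 0 by solving a series of unconstrained problems f + r·max(0, g)² with growing r, each by Newton's method:

```python
import numpy as np

# Hypothetical sketch of a penalty-function formulation solved by Newton's
# method. The exact constrained optimum of this toy problem is (0.5, 0.5).
def newton_penalty(r, x, steps=20):
    for _ in range(steps):
        g = 1.0 - x[0] - x[1]
        if g > 0:                      # constraint violated: penalty active
            grad = 2 * x + 2 * r * g * np.array([-1.0, -1.0])
            hess = 2 * np.eye(2) + 2 * r * np.ones((2, 2))
        else:                          # feasible: penalty inactive
            grad = 2 * x
            hess = 2 * np.eye(2)
        x = x - np.linalg.solve(hess, grad)   # Newton step
    return x

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:   # tighten the penalty gradually
    x = newton_penalty(r, x)
print(np.round(x, 2))                  # -> [0.5 0.5]
```

The exterior-penalty minimizer here is x₁ = x₂ = r/(1 + 2r), which approaches the constrained optimum 0.5 as r grows, mirroring how the series of unconstrained problems converges to the constrained design.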
Contact stresses in gear teeth: A new method of analysis
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.
1991-01-01
A new, innovative procedure called point load superposition is presented for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure with distinct advantages over the classical Hertz method, the finite element method, and existing applications of the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. The basic theory and algorithms are presented, and several examples are given. Results are consistent with those of the classical theories. Applications to spur gears are discussed.
Solution of elliptic PDEs by fast Poisson solvers using a local relaxation factor
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1986-01-01
A large class of two- and three-dimensional, nonseparable elliptic partial differential equations (PDEs) is solved by means of novel one-step (D'Yakanov-Gunn) and two-step (accelerated one-step) iterative procedures, using a local, discrete Fourier analysis. In addition to being easily implemented and applicable to a variety of boundary conditions, these procedures are computationally efficient on the basis of numerical comparison with other established methods, which lack the present method's (1) insensitivity to grid cell size and aspect ratio, and (2) ease of convergence-rate estimation from the coefficients of the PDE being solved. The two-step procedure is numerically demonstrated to outperform the one-step procedure in the case of PDEs with variable coefficients.
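For readers unfamiliar with relaxation-factor iterations, the generic shape of such a solver can be shown on the constant-coefficient Poisson case. This is only an illustrative stand-in (a 5-point Laplacian with a Gauss-Seidel sweep and relaxation factor ω), not the D'Yakanov-Gunn procedure itself:

```python
import numpy as np

# Illustrative relaxed Gauss-Seidel (SOR) sweep for the discrete Poisson
# problem -lap(u) = f on the unit square with zero boundary values.
def solve_poisson(f, h, omega=1.5, n_sweeps=500):
    n = f.shape[0]
    u = np.zeros_like(f)
    for _ in range(n_sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                             + h * h * f[i, j])
                u[i, j] = (1 - omega) * u[i, j] + omega * gs   # relaxed update
    return u

n = 17
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)   # exact u = sin*sin
u = solve_poisson(f, h)
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
print(np.max(np.abs(u - exact)) < 0.01)                    # O(h^2) accuracy
```

The choice of ω controls the convergence rate, which is the kind of quantity the abstract notes can be estimated from the coefficients of the PDE being solved.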
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hanming; Wang, Linyuan; Li, Lei
2016-06-15
Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery. This reconstruction has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as “iFBP-TV” and “TV-FADM,” respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulation and real CT-scanned datasets. This approach could reduce streak metal artifacts effectively and avoid the mentioned effects in the vicinity of the metals.
The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square errors, normalized mean absolute distance, and universal quality index metrics of the images. Both iFBP-TV and TV-FADM methods outperform other counterparts in all cases. Unlike the conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and prevention of the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.
Sinha, Michael S; Freifeld, Clark C; Brownstein, John S; Donneyong, Macarius M; Rausch, Paula; Lappin, Brian M; Zhou, Esther H; Dal Pan, Gerald J; Pawar, Ajinkya M; Hwang, Thomas J; Avorn, Jerry
2018-01-01
Background The Food and Drug Administration (FDA) issues drug safety communications (DSCs) to health care professionals, patients, and the public when safety issues emerge related to FDA-approved drug products. These safety messages are disseminated through social media to ensure broad uptake. Objective The objective of this study was to assess the social media dissemination of 2 DSCs released in 2013 for the sleep aid zolpidem. Methods We used the MedWatcher Social program and the DataSift historic query tool to aggregate Twitter and Facebook posts from October 1, 2012 through August 31, 2013, a period beginning approximately 3 months before the first DSC and ending 3 months after the second. Posts were categorized as (1) junk, (2) mention, and (3) adverse event (AE) based on a score between –0.2 (completely unrelated) to 1 (perfectly related). We also looked at Google Trends data and Wikipedia edits for the same time period. Google Trends search volume is scaled on a range of 0 to 100 and includes “Related queries” during the relevant time periods. An interrupted time series (ITS) analysis assessed the impact of DSCs on the counts of posts with specific mention of zolpidem-containing products. Chow tests for known structural breaks were conducted on data from Twitter, Facebook, and Google Trends. Finally, Wikipedia edits were pulled from the website’s editorial history, which lists all revisions to a given page and the editor’s identity. Results In total, 174,286 Twitter posts and 59,641 Facebook posts met entry criteria. Of those, 16.63% (28,989/174,286) of Twitter posts and 25.91% (15,453/59,641) of Facebook posts were labeled as junk and excluded. AEs and mentions represented 9.21% (16,051/174,286) and 74.16% (129,246/174,286) of Twitter posts and 5.11% (3,050/59,641) and 68.98% (41,138/59,641) of Facebook posts, respectively. 
Total daily counts of posts about zolpidem-containing products increased on Twitter and Facebook on the day of the first DSC; Google searches increased on the week of the first DSC. ITS analyses demonstrated variability but pointed to an increase in interest around the first DSC. Chow tests were significant (P<.0001) for both DSCs on Facebook and Twitter, but only the first DSC on Google Trends. Wikipedia edits occurred soon after each DSC release, citing news articles rather than the DSC itself and presenting content that needed subsequent revisions for accuracy. Conclusions Social media offers challenges and opportunities for dissemination of the DSC messages. The FDA could consider strategies for more actively disseminating DSC safety information through social media platforms, particularly when announcements require updating. The FDA may also benefit from directly contributing content to websites like Wikipedia that are frequently accessed for drug-related information. PMID:29305342
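The Chow test used above compares the fit of one pooled regression against separate regressions before and after a known break point. A minimal sketch on synthetic daily counts (the study's actual model specification is not reproduced; the data, break location, and linear-trend form are illustrative):

```python
import numpy as np

# Hypothetical Chow test for a known structural break at index k in a
# series y regressed on time x: F compares pooled vs split-sample SSR.
def chow_f(y, x, k):
    def ssr(yy, xx):
        X = np.column_stack([np.ones_like(xx), xx])
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        r = yy - X @ beta
        return r @ r
    p = 2                                   # intercept and slope per segment
    s_pool = ssr(y, x)
    s_split = ssr(y[:k], x[:k]) + ssr(y[k:], x[k:])
    return ((s_pool - s_split) / p) / (s_split / (len(y) - 2 * p))

rng = np.random.default_rng(1)
x = np.arange(200.0)
y = 50.0 + 0.1 * x + rng.normal(0, 2, 200)  # pre-break trend plus noise
y[100:] += 20.0                             # level shift at the known break
f_stat = chow_f(y, x, 100)
print(f_stat > 10.0)                        # far above the ~3.0 critical value of F(2, 196)
```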
47 CFR 80.179 - Unattended operation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... DSC in accordance with ITU-R Recommendation M.493-11, “Digital Selective-calling System for Use in the...., Washington, DC (Reference Information Center) or at the National Archives and Records Administration (NARA... condition related to ship safety. (3) The “ROUTINE” DSC category must be used. (4) Communications must be...
47 CFR 80.359 - Frequencies for digital selective calling (DSC).
Code of Federal Regulations, 2013 CFR
2013-10-01
... calling frequencies for use by authorized ship and coast stations for general purpose DSC. There are three.... The “Series A” designation includes coast stations along, and ship stations in, the Atlantic Ocean... location of the called station and propagation conditions. Acknowledgement is made on the paired frequency...
47 CFR 80.359 - Frequencies for digital selective calling (DSC).
Code of Federal Regulations, 2012 CFR
2012-10-01
... calling frequencies for use by authorized ship and coast stations for general purpose DSC. There are three.... The “Series A” designation includes coast stations along, and ship stations in, the Atlantic Ocean... location of the called station and propagation conditions. Acknowledgement is made on the paired frequency...
47 CFR 80.359 - Frequencies for digital selective calling (DSC).
Code of Federal Regulations, 2014 CFR
2014-10-01
... calling frequencies for use by authorized ship and coast stations for general purpose DSC. There are three.... The “Series A” designation includes coast stations along, and ship stations in, the Atlantic Ocean... location of the called station and propagation conditions. Acknowledgement is made on the paired frequency...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katerska, B.; Krasteva, M.; Perez, E.
2007-04-23
Real-time small- and wide-angle X-ray scattering as well as DSC studies were carried out in order to analyze the structure and phase transitions of the liquid-crystalline thermotropic poly(methylene p,p'-bibenzoate).
Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects
Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.
2015-01-01
The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations which can distort the estimated exposure-response curve particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates based on a single iteration of the full system holding certain pivotal quantities such as the information matrix to be constant. In this paper, we present an approximate formula for the deleted estimates and Cook’s distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation based procedure suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post model fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
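The deletion diagnostics generalized in this paper reduce, in the simplest case, to the standard Cook's distance of an ordinary linear model, which can be computed without refitting by using the hat matrix (the paper's GLMM formula additionally corrects the deleted estimates for variance components; that correction is not shown here):

```python
import numpy as np

# Standard Cook's distance for ordinary least squares:
# D_i = e_i^2 * h_i / (p * s^2 * (1 - h_i)^2), using the hat matrix H.
def cooks_distance(X, y):
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T         # hat matrix
    h = np.diag(H)                               # leverages
    e = y - H @ y                                # residuals
    s2 = e @ e / (n - p)                         # residual variance estimate
    return e**2 * h / (p * s2 * (1 - h)**2)

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 30)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, 30)
y[5] += 10.0                                     # planted influential point
X = np.column_stack([np.ones_like(x), x])
d = cooks_distance(X, y)
print(np.argmax(d))                              # index of the planted outlier
```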
Detecting Aberrant Response Patterns in the Rasch Model. Rapport 87-3.
ERIC Educational Resources Information Center
Kogut, Jan
In this paper, the detection of response patterns aberrant from the Rasch model is considered. For this purpose, a new person fit index, recently developed by I. W. Molenaar (1987) and an iterative estimation procedure are used in a simulation study of Rasch model data mixed with aberrant data. Three kinds of aberrant response behavior are…
NASA Astrophysics Data System (ADS)
Holota, P.; Nesvadba, O.
2016-12-01
The mathematical apparatus currently applied for geopotential determination is undoubtedly quite developed. This concerns numerical methods as well as methods based on classical analysis, and classical as well as weak solution concepts. Nevertheless, the nature of the real surface of the Earth has its specific features and is still rather complex. The aim of this paper is to consider these limits and to seek a balance between the performance of an apparatus developed for the surface of the Earth smoothed (or simplified) up to a certain degree and an iteration procedure used to bridge the difference between the real and smoothed topography. The approach is applied for the solution of the linear gravimetric boundary value problem in geopotential determination. Similarly as in other branches of engineering and mathematical physics, a transformation of coordinates is used that offers a possibility to solve an alternative between the boundary complexity and the complexity of the coefficients of the partial differential equation governing the solution. As examples, the use of modified spherical and also modified ellipsoidal coordinates for the transformation of the solution domain is discussed. However, the complexity of the boundary is then reflected in the structure of Laplace's operator. This effect is taken into account by means of successive approximations. The structure of the respective iteration steps is derived and analyzed. On the level of individual iteration steps, attention is paid to the representation of the solution in terms of function bases or in terms of Green's functions. The convergence of the procedure and the efficiency of its use for geopotential determination are discussed.
Wullenweber, Andrea; Kroner, Oliver; Kohrman, Melissa; Maier, Andrew; Dourson, Michael; Rak, Andrew; Wexler, Philip; Tomljanovic, Chuck
2008-11-15
The rate of chemical synthesis and use has outpaced the development of risk values and the resolution of risk assessment methodology questions. In addition, available risk values derived by different organizations may vary due to scientific judgments, the mission of the organization, or use of more recently published data. Further, each organization derives values for a unique chemical list, so it can be challenging to locate data on a given chemical. Two Internet resources are available to address these issues. First, the International Toxicity Estimates for Risk (ITER) database (www.tera.org/iter) provides chronic human health risk assessment data from a variety of organizations worldwide in a side-by-side format, explains differences in risk values derived by different organizations, and links directly to each organization's website for more detailed information. It is also the only database that includes risk information from independent parties whose risk values have undergone independent peer review. Second, the Risk Information Exchange (RiskIE) is a database of in-progress chemical risk assessment work, and includes non-chemical information related to human health risk assessment, such as training modules, white papers and risk documents. RiskIE is available at http://www.allianceforrisk.org/RiskIE.htm, and will join ITER on the National Library of Medicine's TOXNET (http://toxnet.nlm.nih.gov/). Together, ITER and RiskIE provide risk assessors essential tools for easily identifying and comparing available risk data, for sharing in-progress assessments, and for enhancing interaction among risk assessment groups to decrease duplication of effort and to harmonize risk assessment procedures across organizations.
NASA Astrophysics Data System (ADS)
Bartlett, Philip L.; Stelbovics, Andris T.; Bray, Igor
2004-02-01
A newly derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schrödinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources.
Finite element analysis of heat load of tungsten relevant to ITER conditions
NASA Astrophysics Data System (ADS)
Zinovev, A.; Terentyev, D.; Delannay, L.
2017-12-01
A computational procedure is proposed in order to predict the initiation of intergranular cracks in tungsten with ITER-specification microstructure (i.e. characterised by elongated micrometre-sized grains). Damage is caused by a cyclic heat load, which emerges from plasma instabilities during operation of thermonuclear devices. First, a macroscopic thermo-mechanical simulation is performed in order to obtain the temperature and strain fields in the material. The strain path is recorded at a selected point of interest of the macroscopic specimen, and is then applied at the microscopic level to a finite element mesh of a polycrystal. In the microscopic simulation, the stress state at the grain boundaries serves as the marker of cracking initiation. The simulated heat load cycle is representative of edge-localized modes, which are anticipated during normal operation of ITER. Normal stresses at the grain boundary interfaces were shown to depend strongly on the orientation of the grains with respect to the heat flux direction, and to attain higher values when the flux is perpendicular to the elongated grains, where it apparently promotes crack initiation.
NASA Astrophysics Data System (ADS)
Qiu, Mo; Yu, Simin; Wen, Yuqiong; Lü, Jinhu; He, Jianbin; Lin, Zhuosheng
In this paper, a novel design methodology and its FPGA hardware implementation for a universal chaotic signal generator are proposed via a Verilog HDL fixed-point algorithm and state machine control. According to the continuous-time or discrete-time chaotic equations, a Verilog HDL fixed-point algorithm and its corresponding digital system are first designed. On the FPGA hardware platform, each operation step of the Verilog HDL fixed-point algorithm is then controlled by a state machine. The generality of this method lies in the fact that any given chaotic equation can be decomposed into four basic operation procedures, i.e. nonlinear function calculation, iterative sequence operation, right shifting and ceiling of the iterative values, and output of the chaotic iterative sequences, each of which corresponds to one state under state machine control. Compared with a Verilog HDL floating-point algorithm, the fixed-point algorithm saves FPGA hardware resources and improves operation efficiency. FPGA-based hardware experimental results validate the feasibility and reliability of the proposed approach.
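As an illustration of the four-step decomposition, here is a hedged Python sketch of fixed-point chaotic iteration using the logistic map as a stand-in chaotic equation; the Q16 word format and map parameter are assumed values, and a real design would express each state in Verilog HDL rather than Python.

```python
# Fixed-point sketch of the four operation procedures described above,
# using the logistic map x <- r x (1 - x) with r = 3.9 as a stand-in
# chaotic equation. Values are 32-bit integers with 16 fractional bits
# (Q16); the right shift after each multiply mirrors the "right shifting"
# state of the FPGA state machine (truncation is used here for simplicity).
FRAC = 16
ONE = 1 << FRAC          # 1.0 in Q16
R = int(3.9 * ONE)       # map parameter in Q16 (assumed value)

def step(x_q):
    # State 1: nonlinear function calculation, x * (1 - x)
    nl = (x_q * (ONE - x_q)) >> FRAC
    # State 2: iterative sequence operation, multiply by r
    y = (R * nl) >> FRAC  # State 3: shift the product back to Q16
    return y              # State 4: output the iterate

x = int(0.3 * ONE)
seq = []
for _ in range(100):
    x = step(x)
    seq.append(x / ONE)   # convert back to float for inspection
```

The same state-by-state structure carries over to any map once its nonlinear function is isolated, which is the generality claim of the paper.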
NASA Astrophysics Data System (ADS)
Zhang, Yu-Yu; Chen, Xiang-You
2017-12-01
An unexplored nonperturbative deep strong coupling (npDSC) regime achieved in superconducting circuits has been studied in the anisotropic Rabi model by the generalized squeezing rotating-wave approximation. Energy levels are evaluated analytically from the reformulated Hamiltonian and agree well with numerical ones over a wide range of coupling strengths. This improvement is ascribed to deformation effects in the displaced-squeezed state, captured by the squeezed momentum variance, which are omitted in previous displaced-state treatments. The atom population dynamics confirms the validity of our approach at npDSC strengths. Our approach offers the possibility to explore interesting phenomena analytically in the npDSC regime in qubit-oscillator experiments.
Summary of Results from the Mars Phoenix Lander's Thermal Evolved Gas Analyzer
NASA Technical Reports Server (NTRS)
Sutter, B.; Ming, D. W.; Boynton, W. V.; Niles, P. B.; Hoffman, J.; Lauer, H. V.; Golden, D. C.
2009-01-01
The Mars Phoenix Scout Mission with its diverse instrument suite successfully examined several soils on the Northern plains of Mars. The Thermal and Evolved Gas Analyzer (TEGA) was employed to detect evolved volatiles and organic and inorganic materials by coupling a differential scanning calorimeter (DSC) with a magnetic-sector mass spectrometer (MS) that can detect masses in the 2 to 140 dalton range [1]. Five Martian soils were individually heated to 1000 C in the DSC ovens, where evolved gases from mineral decomposition products were examined with the MS. TEGA's DSC has the capability to detect endothermic and exothermic reactions during heating that are characteristic of minerals present in the Martian soil.
Drill Sergeant Candidate Transformation
2009-02-01
leadership styles of NCOs entering Drill Sergeant School (DSS). ARI also developed and administered a prototype DS Assessment Battery to assess preferred leadership styles. DSS training increases both the degree to which the DSC feels obligated to and identifies with the Army.
Structural basis of host recognition and biofilm formation by Salmonella Saf pili
2017-01-01
Pili are critical in host recognition, colonization and biofilm formation during bacterial infection. Here, we report the crystal structures of SafD-dsc and SafD-SafA-SafA (SafDAA-dsc) in Saf pili. Cell adherence assays show that SafD and SafA are both required for host recognition, suggesting a poly-adhesive mechanism for Saf pili. Moreover, the SafDAA-dsc structure, as well as SAXS characterization, reveals an unexpected inter-molecular oligomerization, prompting the investigation of Saf-driven self-association in biofilm formation. Bead/cell aggregation and biofilm formation assays are used to demonstrate this novel function of Saf pili. Structure-based mutants targeting the inter-molecular hydrogen bonds and complementary architecture/surfaces in SafDAA-dsc dimers significantly impaired Saf self-association activity and biofilm formation. In summary, our results identify two novel functions of Saf pili: poly-adhesive and self-associating activities. More importantly, the Saf-Saf structures and functional characterizations help to define a pili-mediated inter-cellular oligomerization mechanism for bacterial aggregation, colonization and ultimately biofilm formation. PMID:29125121
NASA Astrophysics Data System (ADS)
Peng, Yongli; Xiao, Wenzheng
2017-06-01
A novel modified 3,5-dimethylthiotoluenediamine (DMTDA) curing agent was synthesized and its molecular structure was characterized by FTIR and DSC. The curing kinetics of a high-toughness, low-volume-shrinkage epoxy system (modified DMTDA/DGEBA) was studied by differential scanning calorimetry (DSC) under nonisothermal conditions. The data were fitted to an nth-order model and an autocatalytic model. The results indicate that the nth-order model deviates significantly from the experimental data. Málek's method was used to show that the curing kinetics of the system follow a single-step autocatalytic model, and a "single-point model-free" approach was employed to calculate meaningful kinetic parameters. The DSC curves derived from the autocatalytic model gave satisfactory agreement with experiment over the heating-rate range 5-25 K/min. As the heating rate increased, the predicted DSC curves deviated from the experimental curves, and the total exothermic enthalpy declined owing to the shifting competition between kinetic control and diffusion control.
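The autocatalytic rate law referred to above can be integrated along a constant heating ramp to produce a DSC-like exotherm, since the measured heat flow is proportional to the conversion rate. The sketch below assumes a Sestak-Berggren-type form dα/dt = A·exp(−Ea/RT)·αᵐ(1−α)ⁿ with placeholder kinetic parameters, not values fitted in the paper.

```python
import numpy as np

# Sketch of an autocatalytic cure rate law integrated along a 10 K/min ramp.
# All kinetic parameters are hypothetical placeholders (assumed values).
R_GAS = 8.314           # J/(mol K)
A, Ea = 1e7, 60e3       # pre-exponential (1/s) and activation energy (J/mol)
m, n = 0.5, 1.5         # autocatalytic exponents
beta = 10.0 / 60.0      # heating rate: 10 K/min expressed in K/s

def simulate(T0=300.0, T1=550.0, dt=0.1):
    alpha, T = 1e-4, T0           # small seed conversion starts the reaction
    Ts, rates = [], []
    while T < T1:
        k = A * np.exp(-Ea / (R_GAS * T))
        dadt = k * alpha**m * (1.0 - alpha)**n
        alpha = min(alpha + dadt * dt, 1.0 - 1e-12)
        T += beta * dt
        Ts.append(T)
        rates.append(dadt)        # DSC heat flow is proportional to dadt
    return np.array(Ts), np.array(rates)

Ts, rates = simulate()
T_peak = Ts[np.argmax(rates)]     # exotherm peak temperature
```

Repeating the integration at several heating rates reproduces the qualitative shift of the exotherm peak with rate noted in the abstract.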
Ford, J L
1999-03-15
This review focuses on the thermal analysis of hydroxypropylmethylcellulose (HPMC) and methylcellulose. Differential scanning calorimetry (DSC) of their powders is used to determine temperatures of moisture loss (in conjunction with thermogravimetric analysis) and glass transition temperatures. However, sample preparation and encapsulation affect the values obtained. The interaction of these cellulose ethers with water is evaluated by DSC. Water is added to the powder directly in DSC pans or preformed gels can be evaluated. Data quality depends on previous thermal history but estimates of the quantity of water bound to the polymers may be made. Water uptake by cellulose ethers may be evaluated by the use of polymeric wafers and by following loss of free water, over a series of timed curves, into wafers in contact with water. Cloud points, which assess the reduction of polymer solubility with increase of temperature, may be assessed spectrophotometrically. DSC and rheometric studies are used to follow thermogelation, a process involving hydrophobic interaction between partly hydrated polymeric chains. The advantages and disadvantages of the various methodologies are highlighted.
Feature Screening in Ultrahigh Dimensional Cox's Model.
Yang, Guangren; Yu, Ye; Li, Runze; Buu, Anne
Survival data with ultrahigh dimensional covariates such as genetic markers have been collected in medical studies and other fields. In this work, we propose a feature screening procedure for the Cox model with ultrahigh dimensional covariates. The proposed procedure is distinguished from the existing sure independence screening (SIS) procedures (Fan, Feng and Wu, 2010; Zhao and Li, 2012) in that it is based on the joint likelihood of potential active predictors, and therefore is not a marginal screening procedure. The proposed procedure can effectively identify active predictors that are jointly dependent but marginally independent of the response without performing an iterative procedure. We develop a computationally effective algorithm to carry out the proposed procedure and establish its ascent property. We further prove that the proposed procedure possesses the sure screening property: that is, with probability tending to one, the selected variable set includes the actual active predictors. We conduct Monte Carlo simulations to evaluate the finite sample performance of the proposed procedure and to compare it with existing SIS procedures. The proposed methodology is also demonstrated through an empirical analysis of a real data example.
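A joint-likelihood screening rule of the kind described ranks candidate predictor sets by the Cox partial log-likelihood. Below is a minimal numpy sketch of that objective, assuming no tied event times and using toy data; it illustrates the quantity being screened on, not the authors' algorithm.

```python
import numpy as np

def cox_pll(beta, X, time, event):
    """Cox partial log-likelihood (no ties):
    sum over events of  x_i'beta - log( sum_{j: t_j >= t_i} exp(x_j'beta) )."""
    eta = X @ beta
    order = np.argsort(-time)                    # decreasing event time
    eta_o, ev_o = eta[order], event[order]
    log_risk = np.log(np.cumsum(np.exp(eta_o)))  # risk-set log-sums
    return float(np.sum(ev_o * (eta_o - log_risk)))

# Toy data: only the first of two covariates drives the hazard.
rng = np.random.default_rng(6)
n = 200
X = rng.standard_normal((n, 2))
time = rng.exponential(1.0 / np.exp(X[:, 0]))    # hazard ~ exp(1.0 * x0)
event = np.ones(n)

# The joint likelihood prefers the truly active predictor.
pll_active = cox_pll(np.array([0.5, 0.0]), X, time, event)
pll_null = cox_pll(np.array([0.0, 0.5]), X, time, event)
```

A screening procedure would compare such likelihood values across sparse candidate subsets rather than across single fixed coefficient vectors.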
Pizzini, Francesca B; Farace, Paolo; Manganotti, Paolo; Zoccatelli, Giada; Bongiovanni, Luigi G; Golay, Xavier; Beltramello, Alberto; Osculati, Antonio; Bertini, Giuseppe; Fabene, Paolo F
2013-07-01
Non-invasive pulsed arterial spin labeling (PASL) MRI is a method to study brain perfusion that does not require the administration of a contrast agent, which makes it a valuable diagnostic tool as it reduces cost and side effects. The purpose of the present study was to establish the viability of PASL as an alternative to dynamic susceptibility contrast (DSC-MRI) and other perfusion imaging methods in characterizing changes in perfusion patterns caused by seizures in epileptic patients. We evaluated 19 patients with PASL. Of these, the 9 affected by high-frequency seizures were observed during the peri-ictal period (within 5hours since the last seizure), while the 10 patients affected by low-frequency seizures were observed in the post-ictal period. For comparison, 17/19 patients were also evaluated with DSC-MRI and CBF/CBV. PASL imaging showed focal vascular changes, which allowed the classification of patients in three categories: 8 patients characterized by increased perfusion, 4 patients with normal perfusion and 7 patients with decreased perfusion. PASL perfusion imaging findings were comparable to those obtained by DSC-MRI. Since PASL is a) sensitive to vascular alterations induced by epileptic seizures, b) comparable to DSC-MRI for detecting perfusion asymmetries, c) potentially capable of detecting time-related perfusion changes, it can be recommended for repeated evaluations, to identify the epileptic focus, and in follow-up and/or therapy-response assessment. Copyright © 2013 Elsevier Inc. All rights reserved.
Hermayer, Kathie L
2016-04-01
Diabetes is a major public health problem in South Carolina; however, the Diabetes Initiative of South Carolina (DSC) provides a realistic mechanism to address issues on a statewide basis. The Diabetes Center of Excellence in the DSC provides oversight for developing and supervising professional education programs for health care workers of all types in South Carolina to increase their knowledge and ability to care for people with diabetes. The DSC has developed many programs for the education of a variety of health professionals about diabetes and its complications. The DSC has sponsored 21 Annual Diabetes Fall Symposia for primary health care professionals featuring education regarding many aspects of diabetes mellitus. The intent of the program is to enhance the lifelong learning process of physicians, advanced practice providers, nurses, pharmacists, dietitians, laboratorians and other health care professionals, by providing educational opportunities and to advance the quality and safety of patient care. The symposium is an annual 2-day statewide program that supplies both a comprehensive diabetes management update to all primary care professionals and an opportunity for attendees to obtain continuing education credits at a low cost. The overarching goal of the DSC is that the programs it sponsors and the development of new targeted initiatives will lead to continuous improvements in the care of people at risk and with diabetes along with a decrease in morbidity, mortality and costs of diabetes and its complications in South Carolina and elsewhere. Published by Elsevier Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paroli, R.M.; Penn, J.
1994-09-01
Two ethylene-propylene-diene monomer (EPDM) roofing membranes were aged at 100 C for 7 and 28 days. The T_g of these membranes was then determined by dynamic mechanical analysis (DMA), thermomechanical analysis (TMA), and differential scanning calorimetry (DSC), and the results compared. It was found that: (1) T_g data can be obtained easily using the DMA and TMA techniques. The DSC method requires greater care due to the broad step change in the baseline which is associated with heavily plasticized materials. (2) The closest correspondence between techniques was for TMA and DSC (half-height). The latter, within experimental error, yielded the same glass transition temperature before and after heat-aging. (3) The peak maxima associated with tan δ and E″ measurements should be cited with T_g values, as significant differences can exist. (4) The T_g(E″) values were closer to the T_g(TMA) and T_g(DSC) data than were the T_g(tan δ) values. Data obtained at 1 Hz (or possibly less) should be used when making comparisons based on various techniques. An assessment of T_g values indicated that the EPDM 112 roofing membrane is more stable than the EPDM 111 membrane. The T_g for EPDM 112 did not change significantly with heat-aging for 28 days at 130 C.
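The half-height construction mentioned in point (2) can be sketched numerically: T_g is read off where the heat-flow signal crosses the midpoint between the baselines before and after the glass-transition step. The curve below is a synthetic stand-in for a measured DSC trace (true midpoint placed at -50 C).

```python
import numpy as np

# Synthetic DSC trace: a sigmoid step of height 0.3 centered at -50 C
# standing in for the baseline shift at the glass transition.
T = np.linspace(-100.0, 0.0, 1001)                         # temperature, C
heat_flow = 0.2 + 0.3 / (1.0 + np.exp(-(T + 50.0) / 3.0))  # a.u.

lower = heat_flow[:50].mean()      # baseline before the transition
upper = heat_flow[-50:].mean()     # baseline after the transition
half = 0.5 * (lower + upper)       # half-height level
tg = T[np.argmin(np.abs(heat_flow - half))]   # half-height T_g
```

For heavily plasticized membranes the step is broad, which is why the abstract notes that choosing the two baselines for DSC requires greater care than the DMA/TMA readings.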
Calculation of the angular radiance distribution for a coupled atmosphere and canopy
NASA Technical Reports Server (NTRS)
Liang, Shunlin; Strahler, Alan H.
1993-01-01
The radiative transfer equations for a coupled atmosphere and canopy are solved numerically by an improved Gauss-Seidel iteration algorithm. The radiation field is decomposed into three components: unscattered sunlight, single scattering, and multiple scattering radiance, for which the corresponding equations and boundary conditions are set up and their analytical or iterative solutions are explicitly derived. The classic Gauss-Seidel algorithm has been widely applied in atmospheric research. This is its first application to calculating the multiple scattering radiance of a coupled atmosphere and canopy. This algorithm enables us to obtain the internal radiation field as well as radiances at the boundaries. Any form of bidirectional reflectance distribution function (BRDF) as a boundary condition can be easily incorporated into the iteration procedure. The hotspot effect of the canopy is accommodated by means of the modification of the extinction coefficients of upward single scattering radiation and unscattered sunlight using the formulation of Nilson and Kuusk. To reduce the computation for the case of large optical thickness, an improved iteration formula is derived to speed convergence. The upwelling radiances have been evaluated for different atmospheric conditions, leaf area index (LAI), leaf angle distribution (LAD), leaf size and so on. The formulation presented in this paper is also well suited to analyze the relative magnitude of multiple scattering radiance and single scattering radiance in both the visible and near infrared regions.
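The Gauss-Seidel sweep underlying the algorithm can be sketched on a generic diagonally dominant system: within each sweep, newly updated components are reused immediately, which is what distinguishes it from Jacobi iteration. The matrices below are toy stand-ins, not the discretized transfer equations.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Solve A x = b by Gauss-Seidel sweeps (A must be diagonally dominant)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # components x[:i] already hold this sweep's fresh values
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            break
    return x

rng = np.random.default_rng(2)
A = rng.random((20, 20)) + 20.0 * np.eye(20)   # diagonally dominant toy system
b = rng.random(20)
x = gauss_seidel(A, b)
```

The "improved iteration formula" of the paper plays the role of an acceleration of exactly this kind of sweep when optical thickness (and hence the iteration count) is large.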
Human Factors Assessment and Redesign of the ISS Respiratory Support Pack (RSP) Cue Card
NASA Technical Reports Server (NTRS)
Byrne, Vicky; Hudy, Cynthia; Whitmore, Mihriban; Smith, Danielle
2007-01-01
The Respiratory Support Pack (RSP) is a medical pack onboard the International Space Station (ISS) that contains much of the necessary equipment for providing aid to a conscious or unconscious crewmember in respiratory distress. Inside the RSP lid pocket is a 5.5 by 11 inch paper procedural cue card, which is used by a Crew Medical Officer (CMO) to set up the equipment and deliver oxygen to a crewmember. In training, crewmembers expressed concerns about the readability and usability of the cue card; consequently, updating the cue card was prioritized as an activity to be completed. The Usability Testing and Analysis Facility at the Johnson Space Center (JSC) evaluated the original layout of the cue card, and proposed several new cue card designs based on human factors principles. The approach taken for the assessment was an iterative process. First, in order to completely understand the issues with the RSP cue card, crewmembers' post-training comments regarding the RSP cue card were taken into consideration. Over the course of the iterative process, the procedural information was reorganized into a linear flow after the removal of irrelevant (non-emergency) content. Pictures, color coding, and borders were added to highlight key components in the RSP to aid in quickly identifying those components. There were minimal changes to the actual text content. Three studies were conducted using non-medically trained JSC personnel (a total of 34 participants). Non-medically trained personnel participated in order to approximate a scenario of limited CMO exposure to the RSP equipment and training (which can occur six months prior to the mission). In each study, participants were asked to perform two respiratory distress scenarios using one of the cue card designs to simulate resuscitation (using a mannequin along with the hardware). Procedure completion time, errors, and subjective ratings were recorded.
The last iteration of the cue card featured a schematic of the RSP, colors, borders, and simplification of the flow of information. The time to complete the RSP procedure was reduced by approximately three minutes with the new design. In an emergency situation, three minutes significantly increases the probability of saving a life. In addition, participants showed the highest preference for this design. The results of the studies and the new design were presented to a focus group of astronauts, flight surgeons, medical trainers, and procedures personnel. The final cue card was presented to a medical control board and approved for flight. The revised RSP cue card is currently onboard ISS.
Fast in-memory elastic full-waveform inversion using consumer-grade GPUs
NASA Astrophysics Data System (ADS)
Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge
2017-04-01
Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times, then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling, and do all modelings simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A parallelized per-source code using GPUs can use 64 GB RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space the runtime increases dramatically, due to slow file I/O. The extremely high computational speed of the GPUs combined with the large amount of RAM available for each modeling lets us do high frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU-code by a factor of about 75. 
Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today, when performing large scale modeling and inversion in geophysics.
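The memory arithmetic quoted above is easy to verify. The only assumption in the sketch below is 2 GPUs per node, which is implied by "15 nodes (30 GPUs)"; all other numbers are taken directly from the text.

```python
# Back-of-the-envelope budget check for the numbers quoted in the abstract:
# 128 GB nodes, 20 CPU cores per node, 2 GPUs per node (implied).
node_ram_gb, cpu_cores, gpus_per_node = 128, 20, 2

# Source-by-source CPU parallelization: every core runs one modeling.
ram_per_cpu_modeling = node_ram_gb / cpu_cores     # -> 6.4 GB per modeling

# Per-source GPU parallelization: one modeling per GPU.
ram_per_gpu_modeling = node_ram_gb / gpus_per_node  # -> 64 GB per modeling

# Iteration wall-clock quoted: ~9 min on 30 GPUs vs ~2 h with the CPU code.
speedup_per_iteration = (2 * 60) / 9                # ~13x per iteration
```

The 10x larger RAM budget per modeling is what lets each GPU modeling stay in memory and avoid the slow file I/O described in the text.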
Gatollari, Hajere J; Colello, Anna; Eisenberg, Bonnie; Brissette, Ian; Luna, Jorge; Elkind, Mitchell S V; Willey, Joshua Z
2017-01-01
Although designated stroke centers (DSCs) improve the quality of care and clinical outcomes for ischemic stroke patients, less is known about the benefits of DSCs for patients with intracerebral hemorrhage (ICH) and subarachnoid hemorrhage (SAH). Compared to non-DSCs, hospitals with DSC status have lower in-hospital mortality rates for hemorrhagic stroke patients. We expected these effects to persist over time after adjusting for hospital-level characteristics, including hospital size, urban location, and teaching status. We evaluated ICH (International Classification of Diseases, Ninth Revision; ICD-9: 431) and SAH (ICD-9: 430) hospitalizations documented in the 2008-2012 New York State Department of Health Statewide Planning and Research Cooperative System inpatient sample database. Generalized estimating equation logistic regression was used to evaluate the association between DSC status and in-hospital mortality. We calculated ORs and 95% CIs adjusted for clustering of patients within facilities, other hospital characteristics, and individual-level characteristics. Planned secondary analyses explored other hospital characteristics associated with in-hospital mortality. Among 6,352 ICH and 3,369 SAH patients in the study sample, in-hospital mortality was higher among those with ICH compared to SAH (23.7 vs. 18.5%). Unadjusted analyses revealed that DSC status was associated with reduced mortality for both ICH (OR 0.7, 95% CI 0.5-0.8) and SAH patients (OR 0.4, 95% CI 0.3-0.7). DSC status remained a significant predictor of lower in-hospital mortality for SAH patients (OR 0.6, 95% CI 0.3-0.9) but not for ICH patients (OR 0.8, 95% CI 0.6-1.0) after adjusting for patient demographic characteristics, comorbidities, hospital size, teaching status and location. Admission to a DSC was independently associated with reduced in-hospital mortality for SAH patients but not for those with ICH.
Other patient and hospital characteristics may explain the benefits of DSC status on outcomes after ICH. For conditions with clear treatment pathways, such as ischemic stroke and SAH, being treated in a DSC improves outcomes, but this trend was not observed for ICH, which lacks comparably clear treatment guidelines. Identifying hospital-level factors associated with ICH and SAH outcomes represents a means to identify and improve gaps in stroke systems of care. © 2016 S. Karger AG, Basel.
Blanc-Durand, Paul; Van Der Gucht, Axel; Schaefer, Niklaus; Itti, Emmanuel; Prior, John O
2018-01-01
Amino-acid positron emission tomography (PET) is increasingly used in the diagnostic workup of patients with gliomas, including differential diagnosis, evaluation of tumor extension, treatment planning and follow-up. Recently, progress in computer vision and machine learning has been translated to medical imaging. The aim was to demonstrate the feasibility of automated 18F-fluoro-ethyl-tyrosine (18F-FET) PET lesion detection and segmentation relying on a full 3D U-Net convolutional neural network (CNN). All dynamic 18F-FET PET brain image volumes were temporally realigned to the first dynamic acquisition, coregistered and spatially normalized onto the Montreal Neurological Institute template. Ground-truth segmentations were obtained using manual delineation and thresholding (1.3 × background). The volumetric CNN was implemented based on a modified Keras implementation of a U-Net library with 3 layers for the encoding and decoding paths. The Dice similarity coefficient (DSC) was used as an accuracy measure of segmentation. Thirty-seven patients were included (26 [70%] in the training set and 11 [30%] in the validation set). All 11 lesions were accurately detected with no false positives, resulting in a sensitivity and a specificity for detection at the tumor level of 100%. After 150 epochs, the DSC reached 0.7924 in the training set and 0.7911 in the validation set. After morphological dilatation and fixed thresholding of the predicted U-Net mask, a substantial improvement of the DSC to 0.8231 (+4.1%) was noted. At the voxel level, this segmentation yielded a sensitivity of 0.88 [95% CI 87.1 to 88.2%], a specificity of 0.99 [99.9 to 99.9%], a positive predictive value of 0.78 [76.9 to 78.3%], and a negative predictive value of 0.99 [99.9 to 99.9%]. With relatively high performance, this represents the first fully automated 3D procedure for segmentation of 18F-FET PET brain images of patients with different gliomas using a U-Net CNN architecture.
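The Dice similarity coefficient and the fixed-threshold post-processing step can be sketched as follows. The volumes are synthetic and the 0.5 cutoff is an illustrative assumption (the paper's ground truth uses a 1.3 × background threshold).

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

rng = np.random.default_rng(3)
truth = np.zeros((32, 32, 32), dtype=bool)
truth[8:24, 8:24, 8:24] = True        # synthetic "lesion"

# Noisy U-Net-like probability map: mostly high inside, low outside.
prob = np.where(truth,
                rng.uniform(0.4, 1.0, truth.shape),
                rng.uniform(0.0, 0.3, truth.shape))

mask = prob > 0.5                     # fixed thresholding (assumed cutoff)
dsc = dice(mask, truth)
```

In the paper, dilating the predicted mask before thresholding recovers boundary voxels the raw network output misses, which is where the +4.1% DSC gain comes from.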
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ikushima, K; Arimura, H; Jin, Z
Purpose: In radiation treatment planning, delineation of gross tumor volume (GTV) is very important, because the GTVs affect the accuracy of the radiation therapy procedure. To assist radiation oncologists in delineating GTV regions during treatment planning for lung cancer, we have proposed a machine-learning-based delineation framework for GTV regions of solid and ground-glass opacity (GGO) lung tumors, followed by an optimum contour selection (OCS) method. Methods: Our basic idea was to feed voxel-based image features around GTV contours determined by radiation oncologists into a machine learning classifier in the training step, after which the classifier produced the degree of GTV for each voxel in the testing step. Ten data sets of planning CT and PET/CT images were selected for this study. A support vector machine (SVM), which learned voxel-based features including the voxel value and the magnitude of the image gradient vector obtained from each voxel in the planning CT and PET/CT images, extracted initial GTV regions. The final GTV regions were determined using the OCS method, which is able to select a globally optimum object contour based on multiple active delineations with a level set method around the GTV. To evaluate the results of the proposed framework for ten cases (solid: 6, GGO: 4), we used the three-dimensional Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs delineated by radiation oncologists and by the proposed framework. Results: The proposed method achieved an average three-dimensional DSC of 0.81 for ten lung cancer patients, while a standardized-uptake-value-based method segmented GTV regions with a DSC of 0.43. The average DSCs for solid and GGO tumors obtained by the proposed framework were 0.84 and 0.76, respectively. Conclusion: The proposed framework with the support vector machine may be useful for assisting radiation oncologists in delineating solid and GGO lung tumors.
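The voxel-wise features named in the Methods (voxel value and image-gradient magnitude) can be assembled with numpy as follows; the volume is a synthetic stand-in, and the two-column feature set is only the minimal pair named in the text, not the full feature vector of the paper.

```python
import numpy as np

# Build the per-voxel feature matrix a classifier such as an SVM could be
# trained on: [voxel value, gradient magnitude]. Synthetic CT volume.
rng = np.random.default_rng(4)
ct = rng.random((16, 16, 16))            # stand-in planning-CT volume

g0, g1, g2 = np.gradient(ct)             # finite-difference gradient vector
grad_mag = np.sqrt(g0**2 + g1**2 + g2**2)

# One row per voxel.
features = np.stack([ct.ravel(), grad_mag.ravel()], axis=1)
```

Training labels would come from voxels inside versus outside the oncologist-drawn GTV contours, after which the classifier's per-voxel output gives the "degree of GTV" fed to the OCS step.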
Sinha, Michael S; Freifeld, Clark C; Brownstein, John S; Donneyong, Macarius M; Rausch, Paula; Lappin, Brian M; Zhou, Esther H; Dal Pan, Gerald J; Pawar, Ajinkya M; Hwang, Thomas J; Avorn, Jerry; Kesselheim, Aaron S
2018-01-05
The Food and Drug Administration (FDA) issues drug safety communications (DSCs) to health care professionals, patients, and the public when safety issues emerge related to FDA-approved drug products. These safety messages are disseminated through social media to ensure broad uptake. The objective of this study was to assess the social media dissemination of 2 DSCs released in 2013 for the sleep aid zolpidem. We used the MedWatcher Social program and the DataSift historic query tool to aggregate Twitter and Facebook posts from October 1, 2012 through August 31, 2013, a period beginning approximately 3 months before the first DSC and ending 3 months after the second. Posts were categorized as (1) junk, (2) mention, and (3) adverse event (AE) based on a score ranging from -0.2 (completely unrelated) to 1 (perfectly related). We also looked at Google Trends data and Wikipedia edits for the same time period. Google Trends search volume is scaled on a range of 0 to 100 and includes "Related queries" during the relevant time periods. An interrupted time series (ITS) analysis assessed the impact of DSCs on the counts of posts with specific mention of zolpidem-containing products. Chow tests for known structural breaks were conducted on data from Twitter, Facebook, and Google Trends. Finally, Wikipedia edits were pulled from the website's editorial history, which lists all revisions to a given page and the editor's identity. In total, 174,286 Twitter posts and 59,641 Facebook posts met entry criteria. Of those, 16.63% (28,989/174,286) of Twitter posts and 25.91% (15,453/59,641) of Facebook posts were labeled as junk and excluded. AEs and mentions represented 9.21% (16,051/174,286) and 74.16% (129,246/174,286) of Twitter posts and 5.11% (3,050/59,641) and 68.98% (41,138/59,641) of Facebook posts, respectively.
Total daily counts of posts about zolpidem-containing products increased on Twitter and Facebook on the day of the first DSC; Google searches increased on the week of the first DSC. ITS analyses demonstrated variability but pointed to an increase in interest around the first DSC. Chow tests were significant (P<.0001) for both DSCs on Facebook and Twitter, but only the first DSC on Google Trends. Wikipedia edits occurred soon after each DSC release, citing news articles rather than the DSC itself and presenting content that needed subsequent revisions for accuracy. Social media offers challenges and opportunities for dissemination of the DSC messages. The FDA could consider strategies for more actively disseminating DSC safety information through social media platforms, particularly when announcements require updating. The FDA may also benefit from directly contributing content to websites like Wikipedia that are frequently accessed for drug-related information. ©Michael S Sinha, Clark C Freifeld, John S Brownstein, Macarius M Donneyong, Paula Rausch, Brian M Lappin, Esther H Zhou, Gerald J Dal Pan, Ajinkya M Pawar, Thomas J Hwang, Jerry Avorn, Aaron S Kesselheim. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 05.01.2018.
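The Chow test used above fits one pooled regression and two segment regressions around a known break date, then forms an F statistic from the residual sums of squares. A self-contained sketch with a simple linear-trend model (toy data, not the study's post counts):

```python
def _ols_rss(t, y):
    """Residual sum of squares of a simple least-squares fit y = a + b*t."""
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    stt = sum((ti - tbar) ** 2 for ti in t)
    sty = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    b = sty / stt
    a = ybar - b * tbar
    return sum((yi - a - b * ti) ** 2 for ti, yi in zip(t, y))

def chow_statistic(t, y, break_idx, k=2):
    """Chow F statistic for a known structural break at index break_idx;
    k is the number of parameters per segment (intercept and slope)."""
    rss_pooled = _ols_rss(t, y)
    rss_seg = _ols_rss(t[:break_idx], y[:break_idx]) + _ols_rss(t[break_idx:], y[break_idx:])
    n = len(t)
    return ((rss_pooled - rss_seg) / k) / (rss_seg / (n - 2 * k))
```

A large F indicates the two-segment model fits far better than a single line, i.e. a structural break at the hypothesized date.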
NASA Astrophysics Data System (ADS)
Entler, S.; Duran, I.; Kocan, M.; Vayakis, G.
2017-07-01
Three vacuum vessel sectors in ITER will be instrumented by the outer vessel steady-state magnetic field sensors. Each sensor unit features a pair of metallic Hall sensors with a sensing layer made of bismuth to measure the tangential and normal components of the local magnetic field. The influence of temperature and magnetic field on the Hall coefficient was tested for the temperature range from 25 to 250 °C and the magnetic field range from 0 to 0.5 T. A fit of the normalized temperature dependence of the Hall coefficient, independent of magnetic field, was found, and a model of the Hall coefficient's functional dependence over a wide range of temperature and magnetic field was built with the purpose of simplifying the calibration procedure.
Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity
NASA Technical Reports Server (NTRS)
Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan
1992-01-01
The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
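The eigenvalue derivatives underlying such sensitivity updates follow the standard first-order result dλ/dp = φᵀ(∂K/∂p)φ for a symmetric stiffness matrix with mass-normalized modes (here M = I, independent of p). A sketch on a hypothetical 2×2 parameterized stiffness matrix, checked against a central finite difference:

```python
import math

def eig_sym2(a, b, c):
    """Eigenpairs (ascending) of the symmetric 2x2 matrix [[a, b], [b, c]],
    assuming b != 0 so the eigenvector formula below is valid."""
    mean = 0.5 * (a + c)
    rad = math.hypot(0.5 * (a - c), b)
    pairs = []
    for lam in (mean - rad, mean + rad):
        v = (b, lam - a)                 # satisfies (A - lam*I) v = 0
        nrm = math.hypot(v[0], v[1])
        pairs.append((lam, (v[0] / nrm, v[1] / nrm)))
    return pairs

def k_entries(p):
    """Hypothetical parameterized stiffness K(p) = [[2 + p, -1], [-1, 2]]."""
    return (2.0 + p, -1.0, 2.0)
```

Since ∂K/∂p = [[1, 0], [0, 0]] for this K(p), the analytic sensitivity of each eigenvalue is just the square of the first component of its unit eigenvector.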
NASA Technical Reports Server (NTRS)
Benton, E. R.
1986-01-01
A spherical harmonic representation of the geomagnetic field and its secular variation for epoch 1980, designated GSFC(9/84), is derived and evaluated. At three epochs (1977.5, 1980.0, 1982.5) this model incorporates conservation of magnetic flux through five selected patches of area on the core/mantle boundary bounded by the zero contours of vertical magnetic field. These fifteen nonlinear constraints are included as data in an iterative least squares parameter estimation procedure that starts with the recently derived unconstrained field model GSFC (12/83). Convergence is approached within three iterations. The constrained model is evaluated by comparing its predictive capability outside the time span of its data, in terms of residuals at magnetic observatories, with that of the unconstrained model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, Patrick J.
2016-10-05
The code is used to provide an unfolded/adjusted energy-dependent fission reactor neutron spectrum based upon an input trial spectrum and a set of measured activities. This is part of a neutron environment characterization that supports testing in a given reactor environment. An iterative perturbation method is used to obtain a "best fit" neutron flux spectrum for a given input set of infinitely dilute foil activities. The calculational procedure consists of the selection of a trial flux spectrum to serve as the initial approximation to the solution, and subsequent iteration to a form acceptable as an appropriate solution. The solution is specified either as time-integrated flux (fluence) for a pulsed environment or as a flux for a steady-state neutron environment.
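A common iterative-perturbation unfolding update (in the spirit of SAND-II-type codes; shown only as an illustration of the idea, not as this code's actual algorithm) scales each group flux by a contribution-weighted log ratio of measured to calculated foil activities:

```python
import math

def unfold(sigma, a_meas, phi_trial, iters=200):
    """Multiplicative spectrum adjustment: a_i = sum_g sigma[i][g] * phi[g].
    Each group flux is scaled by the activity-weighted geometric mean of the
    measured/calculated activity ratios (toy 2-group, 2-foil setup)."""
    phi = list(phi_trial)
    for _ in range(iters):
        a_calc = [sum(s * p for s, p in zip(row, phi)) for row in sigma]
        new_phi = []
        for g in range(len(phi)):
            num = den = 0.0
            for i, row in enumerate(sigma):
                w = row[g] * phi[g] / a_calc[i]   # fractional contribution of group g to foil i
                num += w * math.log(a_meas[i] / a_calc[i])
                den += w
            new_phi.append(phi[g] * math.exp(num / den))
        phi = new_phi
    return phi
```

At a fixed point the calculated activities reproduce the measured ones, so the logarithmic correction vanishes.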
Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.
Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A
2007-01-01
CT images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data-driven method based on iterative 3D-reconstruction and 2D/3D-registration to correct projection data inconsistencies. With a 2D/3D-registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D-reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.
NASA Technical Reports Server (NTRS)
Chang, S. C.
1986-01-01
An algorithm for solving a large class of two- and three-dimensional nonseparable elliptic partial differential equations (PDE's) is developed and tested. It uses a modified D'Yakonov-Gunn iterative procedure in which the relaxation factor is grid-point dependent. It is easy to implement and applicable to a variety of boundary conditions. It is also computationally efficient, as indicated by the results of numerical comparisons with other established methods. Furthermore, the current algorithm has the advantage of possessing two important properties which the traditional iterative methods lack: (1) the convergence rate is relatively insensitive to grid-cell size and aspect ratio, and (2) the convergence rate can be easily estimated by using the coefficient of the PDE being solved.
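Not the D'Yakonov-Gunn procedure itself, but the idea of a grid-point-dependent relaxation factor can be illustrated with an SOR sweep on the 1D model problem −u″ = π² sin(πx), u(0) = u(1) = 0, whose exact solution is sin(πx) (the ω ramp below is arbitrary):

```python
import math

def solve_poisson_1d(n=19, sweeps=5000):
    """-u'' = pi^2 sin(pi x) on (0,1) with homogeneous Dirichlet data,
    solved by successive over-relaxation in which the relaxation factor
    omega varies from grid point to grid point (a mild linear ramp)."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    f = [math.pi ** 2 * math.sin(math.pi * xi) for xi in x]
    u = [0.0] * n
    omega = [1.0 + 0.5 * i / (n - 1) for i in range(n)]  # grid-point dependent
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            gs = 0.5 * (left + right + h * h * f[i])     # Gauss-Seidel value
            u[i] += omega[i] * (gs - u[i])               # local over-relaxation
    return x, u
```

Any ω(x) in (0, 2) keeps the sweep convergent for this symmetric positive definite system; the point of the paper's scheme is to choose it well.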
NASA Astrophysics Data System (ADS)
Titeux, Isabelle; Li, Yuming M.; Debray, Karl; Guo, Ying-Qiao
2004-11-01
This Note deals with an efficient algorithm to carry out the plastic integration and compute the stresses due to large strains for materials satisfying Hill's anisotropic yield criterion. Classical plastic-integration algorithms such as the return mapping method are widely used for nonlinear analyses of structures and numerical simulations of forming processes, but they require an iterative scheme and may have convergence problems. A new direct algorithm based on a scalar method is developed which allows us to obtain the plastic multiplier directly, without an iteration procedure; the computation time is thus greatly reduced and the numerical problems are avoided. To cite this article: I. Titeux et al., C. R. Mecanique 332 (2004).
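For contrast, the simplest setting in which the plastic multiplier comes out in closed form is 1D elastoplasticity with linear isotropic hardening, where Δλ = f_trial/(E + H) needs no local iteration (a scalar analogue for illustration only, not the Note's algorithm for Hill's criterion):

```python
def return_map_1d(stress, alpha, d_eps, E=200.0, H=20.0, sigma_y=1.0):
    """One strain increment of 1D elastoplasticity with linear isotropic
    hardening. The plastic multiplier dlam has a closed form, so no local
    Newton iteration is needed. Returns (new stress, new hardening var)."""
    trial = stress + E * d_eps
    f = abs(trial) - (sigma_y + H * alpha)   # trial yield function
    if f <= 0.0:
        return trial, alpha                  # elastic step
    dlam = f / (E + H)                       # direct (non-iterative) multiplier
    sign = 1.0 if trial >= 0.0 else -1.0
    return trial - sign * E * dlam, alpha + dlam
```

After a plastic step the updated stress sits exactly on the updated yield surface, which is an easy consistency check.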
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gürsoy, Doğa; Hong, Young P.; He, Kuan
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB
2016-06-15
Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedures based on finding the largest eigenvalue of the iterative operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle, and a discontinuous finite element spatial approach. Results: The spectral radius for the source iteration technique of the time independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of the differential and total cross sections. The result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of a magnetic field reveals that the convergence becomes dependent on the strength of the magnetic field, the energy group discretization, and the order of the anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed as this has been shown to produce greater stability than source iteration.
Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization)
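The headline result above, a spectral radius equal to the scattering-to-total cross-section ratio c = σ_s/σ_t, is already visible in the zero-dimensional (infinite-medium) analogue of source iteration, φ ← (σ_s φ + q)/σ_t. A toy sketch with illustrative numbers:

```python
def source_iteration(sigma_t=1.0, sigma_s=0.8, q=1.0, iters=50):
    """Infinite-medium scalar source iteration. The error shrinks by the
    scattering ratio c = sigma_s / sigma_t (the spectral radius) each pass."""
    c = sigma_s / sigma_t
    phi_exact = q / (sigma_t - sigma_s)   # fixed point of the iteration
    phi = 0.0
    errs = []
    for _ in range(iters):
        phi = (sigma_s * phi + q) / sigma_t
        errs.append(abs(phi - phi_exact))
    return c, errs
```

As c → 1 (scattering-dominated media) the iteration stalls, which is why acceleration or Krylov solvers become necessary.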
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor of 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/(2√2). This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques.
Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
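The factor-of-2 speedup of Gauss-Seidel over Jacobi quoted above is the classical model-problem relationship ρ_GS = ρ_J². It can be checked directly on a small 1D Poisson problem (a generic sketch, independent of the radiative transfer setting):

```python
def count_iterations(gauss_seidel, n=20, tol=1e-8, max_it=100000):
    """Iterations for -u'' = 1 on (0,1) (Dirichlet 0) until the sweep-to-sweep
    update falls below tol, using either Jacobi or Gauss-Seidel sweeps."""
    h = 1.0 / (n + 1)
    u = [0.0] * n
    for it in range(1, max_it + 1):
        old = u[:]
        src = u if gauss_seidel else old   # fresh vs previous-sweep neighbors
        for i in range(n):
            left = src[i - 1] if i > 0 else 0.0
            right = old[i + 1] if i < n - 1 else 0.0
            u[i] = 0.5 * (left + right + h * h)
        if max(abs(a - b) for a, b in zip(u, old)) < tol:
            return it
    return max_it
```

Because the Gauss-Seidel spectral radius is the square of Jacobi's on this problem, the iteration count is asymptotically halved.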
NASA Technical Reports Server (NTRS)
Fetterman, Timothy L.; Noor, Ahmed K.
1987-01-01
Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
A triangular thin shell finite element: Nonlinear analysis. [structural analysis
NASA Technical Reports Server (NTRS)
Thomas, G. R.; Gallagher, R. H.
1975-01-01
Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.
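The solution strategy above, a tangent-stiffness incremental step followed by a single Newton-Raphson correction per load step, can be sketched on a scalar stiffening "element" N(u) = u + u³ (a hypothetical response, not the shell formulation):

```python
def incremental_newton(P=10.0, steps=40):
    """Tangent-stiffness predictor plus one Newton-Raphson correction per
    load step, tracing the equilibrium path N(u) = lam * P for the scalar
    stiffening element N(u) = u + u**3."""
    N = lambda u: u + u ** 3           # internal force
    K = lambda u: 1.0 + 3.0 * u ** 2   # tangent stiffness
    u, lam, dlam = 0.0, 0.0, 1.0 / steps
    for _ in range(steps):
        lam += dlam
        u += dlam * P / K(u)           # incremental (tangent) predictor
        u -= (N(u) - lam * P) / K(u)   # single Newton-Raphson correction
    return u
```

With P = 10 the exact final state is u = 2 (since 2 + 2³ = 10), and the single correction per step keeps the drift from the equilibrium path small.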
Analysis of aircraft tires via semianalytic finite elements
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Kim, Kyun O.; Tanner, John A.
1990-01-01
A computational procedure is presented for the geometrically nonlinear analysis of aircraft tires. The tire was modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The four key elements of the procedure are: (1) semianalytic finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) a mixed formulation with the fundamental unknowns consisting of strain parameters, stress-resultant parameters, and generalized displacements; (3) multilevel operator splitting to effect successive simplifications, and to uncouple the equations associated with different Fourier harmonics; and (4) multilevel iterative procedures and reduction techniques to generate the response of the shell.
Numerical methods for the design of gradient-index optical coatings.
Anzengruber, Stephan W; Klann, Esther; Ramlau, Ronny; Tonova, Diana
2012-12-01
We formulate the problem of designing gradient-index optical coatings as the task of solving a system of operator equations. We use iterative numerical procedures known from the theory of inverse problems to solve it with respect to the coating refractive index profile and thickness. The mathematical derivations necessary for the application of the procedures are presented, and different numerical methods (Landweber, Newton, and Gauss-Newton methods, Tikhonov minimization with surrogate functionals) are implemented. Procedures for the transformation of the gradient coating designs into quasi-gradient ones (i.e., multilayer stacks of homogeneous layers with different refractive indices) are also developed. The design algorithms work with physically available coating materials that could be produced with the modern coating technologies.
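Of the iterative regularization methods named above, the Landweber iteration is the most compact: x ← x + ω Aᵀ(y − Ax), convergent for 0 < ω < 2/‖A‖². A small dense-system sketch (an illustrative 2×2 system, not a coating design problem):

```python
def landweber(A, y, omega, iters=2000):
    """Landweber iteration x <- x + omega * A^T (y - A x) for a small dense
    system, starting from zero; omega must be below 2 / ||A||^2."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [yi - sum(aij * xj for aij, xj in zip(row, x))
             for row, yi in zip(A, y)]          # residual y - A x
        for j in range(n):
            x[j] += omega * sum(A[i][j] * r[i] for i in range(len(A)))
    return x
```

For ill-posed problems the iteration count itself acts as the regularization parameter (early stopping), which is why Landweber appears alongside Tikhonov minimization in the design context above.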
NASA Astrophysics Data System (ADS)
Schenone, D. J.; Igama, S.; Marash-Whitman, D.; Sloan, C.; Okansinski, A.; Moffet, A.; Grace, J. M.; Gentry, D.
2015-12-01
Experimental evolution of microorganisms in controlled microenvironments serves as a powerful tool for understanding the relationship between micro-scale microbial interactions as well as local- to global-scale environmental factors. In response to iterative and targeted environmental pressures, mutagenesis drives the emergence of novel phenotypes. Current methods to induce expression of these phenotypes require repetitive and time-intensive procedures and do not allow for the continuous monitoring of conditions such as optical density, pH, and temperature. To address this shortcoming, an Automated Dynamic Directed Evolution Chamber is being developed. It will initially produce Escherichia coli cells with an elevated UV-C resistance phenotype and will ultimately be adapted for different organisms as well as for studying environmental effects. A useful phenotype and environmental factor for examining this relationship are UV-C resistance and UV-C exposure. In order to build a baseline for the device's operational parameters, a UV-C assay was performed on six E. coli replicates with three exposure fluxes across seven iterations. The exposures included a 0-second control, 6 seconds at 3.3 J/m²/s, and 40 seconds at 0.5 J/m²/s. After each iteration the cells were regrown and tested for UV-C resistance. We sought to quantify the increase and variability of UV-C resistance among different fluxes, and to observe changes in each replicate at each iteration in terms of variance. Under different fluxes, we observed that the 0-second control showed no significant increase in resistance, while the 6-second and 40-second exposures showed increased resistance as the number of iterations increased. A one-million-fold increase in survivability was observed after seven iterations.
Through statistical analysis using Spearman's rank correlation, the 40-second exposure showed signs of more consistently increased resistance, but seven iterations were insufficient to demonstrate statistical significance; to test this further, our experiments will include more iterations. Furthermore, we plan to sequence all the replicates. As adaptation dynamics under intense UV exposure lead to a high rate of change, it would be useful to observe differences in tolerance-related and non-tolerance-related genes between the original and UV-resistant strains.
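Spearman's rank correlation, used for the trend analysis above, is simply Pearson correlation applied to ranks, with ties given average ranks. A compact sketch (a generic implementation, not the study's analysis code):

```python
def _ranks(xs):
    """Average ranks (1-based), with tied values sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend the tie group
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because it depends only on ranks, the statistic is insensitive to the huge (million-fold) dynamic range of survivability values.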
NASA Astrophysics Data System (ADS)
Maheshwari, A.; Pathak, H. A.; Mehta, B. K.; Phull, G. S.; Laad, R.; Shaikh, M. S.; George, S.; Joshi, K.; Khan, Z.
2017-04-01
The ITER Vacuum Vessel (VV) is a torus-shaped, double-wall structure. The space between the double walls of the VV is filled with In-Wall Shielding Blocks (IWS) and water. The main purpose of the IWS is to provide neutron shielding during ITER plasma operation and to reduce the ripple of the Toroidal Magnetic Field (TF). Although the IWS will be submerged in water between the walls of the VV, the Outgassing Rate (OGR) of IWS materials plays a significant role in leak detection of the ITER Vacuum Vessel. The thermal outgassing rate of a material depends critically on its surface roughness. During leak detection using an RGA-equipped leak detector and the tracer gas helium, there will be a spill-over of mass 3 and mass 2 to mass 4, which creates a background reading. The helium background will also have a contribution from hydrogen, so it is necessary to ensure a low OGR of hydrogen. To achieve an effective leak test it is required to obtain a background below 1 × 10⁻⁸ mbar·l·s⁻¹, and hence the maximum outgassing rate of IWS materials should comply with the maximum outgassing rate required for hydrogen, i.e. 1 × 10⁻¹⁰ mbar·l·s⁻¹·cm⁻² at room temperature. As IWS materials are special materials developed for the ITER project, it is necessary to ensure the compliance of the outgassing rate with this requirement. There is a possibility of gases diffusing into the material at the time of production. So, to validate the production process of the materials as well as the manufacturing of the final product, three coupons of each IWS material have been manufactured with the same technique being used in the manufacturing of the IWS blocks. Manufacturing records of these coupons have been approved by the ITER-IO (International Organization). Outgassing rates of these coupons have been measured at room temperature and found to be within the acceptable limit to obtain the required helium background.
On the basis of these measurements, test reports have been generated and approved by the IO. This paper describes the preparation, characteristics, and cleaning procedure of the samples, the description of the system, and the outgassing rate measurements of these samples to ensure accurate leak detection.
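A specific outgassing rate of the kind quoted above is commonly estimated by the rate-of-rise (pressure accumulation) method, q = V·ΔP/(Δt·A). A minimal sketch with illustrative numbers (not ITER's measured values):

```python
def specific_outgassing_rate(volume_l, dp_mbar, dt_s, area_cm2):
    """Rate-of-rise estimate of the specific outgassing rate,
    q = V * dP / (dt * A), returned in mbar·l·s^-1·cm^-2."""
    return volume_l * dp_mbar / (dt_s * area_cm2)
```

For example, a 10 l chamber whose pressure rises 2e-9 mbar over 1000 s with a 500 cm² sample gives q = 4e-14 mbar·l·s⁻¹·cm⁻², comfortably below the 1 × 10⁻¹⁰ requirement quoted above.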
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H
2011-04-01
A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 ± 2.8) mm compared to (3.5 ± 3.0) mm with rigid registration.
A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
Vertical Compliance Trends in KBOS OPD Arrival Redesign
NASA Technical Reports Server (NTRS)
Stewart, Michael Jeffrey; Matthews, Bryan L.; Feary, Michael S.
2016-01-01
This report is a high-level summary of vertical compliance trends and overall rates of Area Navigation Optimized Profile Descent (RNAV OPD) utilization for Boston Logan International Airport. Specifically, we investigated trends from three RNAV OPDs and the subsequent redesigned iterations of those procedures: OOSHN3 to OOSHN4, ROBUC1 to ROBUC2, and QUABN3 to JFUND1.
Training Effectiveness and Cost Iterative Technique (TECIT). Volume 2. Cost Effectiveness Analysis
1988-07-01
Moving Tank in a Field Exercise. The task cluster identified as tank commander’s station/tank gunnery and the sub-task of firing an M250 grenade launcher... Firing Procedures, Task Number 171-126-1028. OBJECTIVE: Given an M1 tank with crew, loaded M250 grenade launcher, the commander’s station powered up
The Stokes problem for the ellipsoid using ellipsoidal kernels
NASA Technical Reports Server (NTRS)
Zhu, Z.
1981-01-01
A brief review of Stokes' problem for the ellipsoid as a reference surface is given. Another solution of the problem using an ellipsoidal kernel, which represents an iterative form of Stokes' integral, is suggested, with a relative error of the order of the flattening. Rapp's method is studied in detail, and procedures for improving its convergence are discussed.
Least Squares Computations in Science and Engineering
1994-02-01
iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, direct... optimization methods. Generally, the problems are accompanied by constraints, such as bound constraints, and the observations are corrupted by noise. The... engineering. This effort has involved interaction with researchers in closed-loop active noise (vibration) control at Phillips Air Force Laboratory
On some Aitken-like acceleration of the Schwarz method
NASA Astrophysics Data System (ADS)
Garbey, M.; Tromeur-Dervout, D.
2002-12-01
In this paper we present a family of domain decomposition methods based on Aitken-like acceleration of the Schwarz method, seen as an iterative procedure with a linear rate of convergence. We first present the so-called Aitken-Schwarz procedure for linear differential operators. The solver can be a direct solver when applied to the Helmholtz problem with a five-point finite difference scheme on regular grids. We then introduce the Steffensen-Schwarz variant, which is an iterative domain decomposition solver that can be applied to linear and nonlinear problems. We show that these solvers have reasonable numerical efficiency compared to classical fast solvers for the Poisson problem or to multigrid for more general linear and nonlinear elliptic problems. However, the salient feature of our method is that our algorithm has high tolerance to slow networks in the context of distributed parallel computing and is attractive, generally speaking, for use with computer architectures whose performance is limited by memory bandwidth rather than by the flop performance of the CPU. This is nowadays the case for most parallel computers using the RISC processor architecture. We will illustrate this highly desirable property of our algorithm with large-scale computing experiments.
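Aitken's Δ² extrapolation, the core of the acceleration above, eliminates the dominant geometric-error term from three successive iterates; for an exactly linearly convergent sequence it recovers the limit in one shot (a generic sketch, not the paper's Aitken-Schwarz solver):

```python
def aitken(x0, x1, x2):
    """Aitken delta-squared extrapolation of three successive iterates:
    x2 - (x2 - x1)^2 / (x2 - 2*x1 + x0)."""
    return x2 - (x2 - x1) ** 2 / (x2 - 2.0 * x1 + x0)
```

For the fixed-point iteration x ← 0.5x + 1 (limit 2), the first three iterates from 0 are 0, 1, 1.5, and aitken(0, 1, 1.5) already returns the limit exactly, because the iteration's error is exactly geometric; this is the same mechanism the paper exploits for the linearly convergent Schwarz iteration.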
NASA Astrophysics Data System (ADS)
Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing
2015-10-01
Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is proposed based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of the computed initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation, and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software, and a SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.
Thermal and dynamic mechanical properties of hydroxypropyl cellulose films
Timothy G. Rials; Wolfgang G. Glasser
1988-01-01
Differential scanning calorimetry (DSC) and dynamic mechanical thermal analysis (DMTA) were used to characterize the morphology of solvent-cast hydroxypropyl cellulose (HPC) films. DSC results were indicative of a semicrystalline material with a melting point of 220°C and a glass transition at 19°C (T1), although an additional event was suggested by a...
ERIC Educational Resources Information Center
Roach, Mary A.; Barratt, Marguerite Stevenson; Miller, Jon F.; Leavitt, Lewis A.
1998-01-01
Compared mothers' play with infants with Down syndrome (DSC) and typically developing children (TDC) matched for mental or chronological age. Found that TDC mothers exhibited more object demonstrations with their developmentally younger children, who showed less object play. DSC mothers were more directive and supportive than mothers of younger…
Among the Few at Deep Springs College: Assessing a Seven-Decade Experiment in Liberal Education.
ERIC Educational Resources Information Center
Newell, L. Jackson
1982-01-01
Describes the origins and characteristics of Deep Springs College (DSC), which since 1917 has teamed liberal arts instruction with the physical labor of running a cattle ranch. Uses alumni survey responses to assess the long-term effects of attending DSC. Examines paradoxes inherent in the school and its future prospects. (DMM)
47 CFR 80.1087 - Ship radio equipment-Sea area A1.
Code of Federal Regulations, 2010 CFR
2010-10-01
... which the ship is normally navigated, operating either: (1) On VHF using DSC; or (2) Through the polar...; or (4) On HF using DSC; or (5) Through the INMARSAT geostationary satellite service if within... communication. (b) The VHF radio installation, required by § 80.1085(a)(1), must also be capable of transmitting...
47 CFR 80.1087 - Ship radio equipment-Sea area A1.
Code of Federal Regulations, 2011 CFR
2011-10-01
... which the ship is normally navigated, operating either: (1) On VHF using DSC; or (2) Through the polar...; or (4) On HF using DSC; or (5) Through the INMARSAT geostationary satellite service if within... communication. (b) The VHF radio installation, required by § 80.1085(a)(1), must also be capable of transmitting...
NASA Astrophysics Data System (ADS)
Tosolin, A.; Souček, P.; Beneš, O.; Vigier, J.-F.; Luzzi, L.; Konings, R. J. M.
2018-05-01
PuF3 was synthesized by hydro-fluorination of PuO2 and subsequent reduction of the product by hydrogenation. The obtained PuF3 was analysed by X-Ray Diffraction (XRD) and found to be phase-pure. High purity was also confirmed by melting point analysis using Differential Scanning Calorimetry (DSC). PuF3 was then used for a thermodynamic assessment of the PuF3-LiF system. Phase equilibrium points and the enthalpy of fusion of the eutectic composition were measured by DSC. XRD analyses of selected samples after the DSC measurement confirmed that, after solidification from the liquid, the system returns to a mixture of LiF and PuF3.
Dynamic Synchronous Capture Algorithm for an Electromagnetic Flowmeter.
Fanjiang, Yong-Yi; Lu, Shih-Wei
2017-04-10
This paper proposes a dynamic synchronous capture (DSC) algorithm to calculate the flow rate for an electromagnetic flowmeter. The DSC algorithm accurately calculates the flow rate signal and efficiently converts the analog signal, improving the execution performance of a microcontroller unit (MCU). Furthermore, it can reduce interference from abnormal noise. It is extremely steady and independent of fluctuations in the flow measurement. Moreover, it can calculate the current flow rate (m/s) immediately. The DSC algorithm can be applied to current general MCU firmware platforms without using DSP (Digital Signal Processing) or a high-speed, high-end MCU platform, and signal amplification by hardware reduces the demand for ADC accuracy, which reduces the cost.
Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.
Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong
2015-03-01
This brief considers the problem of neural network (NN)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown nonlinear functions of the PMSM drive system, and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the designed neural controller structure is much simpler than some existing results in the literature, which can guarantee that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique.
NASA Astrophysics Data System (ADS)
Egger, Jan; Nimsky, Christopher
2016-03-01
Due to the aging population, spinal diseases are becoming more common; e.g., the lifetime risk of osteoporotic fracture is 40% for white women and 13% for white men in the United States. Thus the number of surgical spinal procedures is also increasing with the aging population, and precise diagnosis plays a vital role in reducing complications and recurrence of symptoms. Spinal imaging of the vertebral column is a tedious process subject to interpretation errors. In this contribution, we aim to reduce time and error in vertebral interpretation by applying and studying the GrowCut algorithm for boundary segmentation between the vertebral body compacta and surrounding structures. GrowCut is a competitive region growing algorithm using cellular automata. For our study, vertebral T2-weighted Magnetic Resonance Imaging (MRI) scans were first manually outlined by neurosurgeons. Then, the vertebral bodies were segmented in the medical images by a GrowCut-trained physician using the semi-automated GrowCut algorithm. Afterwards, the results of both segmentation processes were compared using the Dice Similarity Coefficient (DSC) and the Hausdorff Distance (HD), which yielded a DSC of 82.99+/-5.03% and an HD of 18.91+/-7.2 voxels, respectively. In addition, segmentation times were measured for both approaches, showing that a GrowCut segmentation - with an average time of less than six minutes (5.77+/-0.73) - is significantly shorter than pure manual outlining.
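The Dice Similarity Coefficient used here to compare manual and GrowCut segmentations is defined as DSC = 2|A∩B| / (|A| + |B|). A minimal sketch on two hypothetical binary masks:

```python
import numpy as np

def dice(a, b):
    """DSC = 2|A∩B| / (|A| + |B|) for boolean segmentation masks a, b."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical 2D masks: two offset 6x6 squares (36 voxels each, 25 overlap)
manual = np.zeros((10, 10), bool)
manual[2:8, 2:8] = True
auto = np.zeros((10, 10), bool)
auto[3:9, 3:9] = True
score = dice(manual, auto)  # 2*25/72 ~= 0.694
```

A DSC of 1.0 means perfect overlap and 0.0 means none; the 82.99% reported in the abstract sits in the range commonly considered good agreement for vertebral body segmentation.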
An overview of NSPCG: A nonsymmetric preconditioned conjugate gradient package
NASA Astrophysics Data System (ADS)
Oppe, Thomas C.; Joubert, Wayne D.; Kincaid, David R.
1989-05-01
The most recent research-oriented software package developed as part of the ITPACK Project is called "NSPCG" since it contains many nonsymmetric preconditioned conjugate gradient procedures. It is designed to solve large sparse systems of linear algebraic equations by a variety of different iterative methods. One of the main purposes for the development of the package is to provide a common modular structure for research on iterative methods for nonsymmetric matrices. Another purpose for the development of the package is to investigate the suitability of several iterative methods for vector computers. Since the vectorizability of an iterative method depends greatly on the matrix structure, NSPCG allows great flexibility in the operator representation. The coefficient matrix can be passed in one of several different matrix data storage schemes. These sparse data formats allow matrices with a wide range of structures from highly structured ones such as those with all nonzeros along a relatively small number of diagonals to completely unstructured sparse matrices. Alternatively, the package allows the user to call the accelerators directly with user-supplied routines for performing certain matrix operations. In this case, one can use the data format from an application program and not be required to copy the matrix into one of the package formats. This is particularly advantageous when memory space is limited. Some of the basic preconditioners that are available are point methods such as Jacobi, Incomplete LU Decomposition and Symmetric Successive Overrelaxation as well as block and multicolor preconditioners. The user can select from a large collection of accelerators such as Conjugate Gradient (CG), Chebyshev (SI, for semi-iterative), Generalized Minimal Residual (GMRES), Biconjugate Gradient Squared (BCGS) and many others. The package is modular so that almost any accelerator can be used with almost any preconditioner.
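The accelerator/preconditioner pairing at the heart of NSPCG can be illustrated, for the symmetric case, by a Jacobi-preconditioned conjugate gradient iteration. A minimal sketch (a generic PCG on a small SPD system, not NSPCG's Fortran interface):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned CG for an SPD matrix A; M_inv applies the inverse
    preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)                 # apply preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # update search direction
        rz = rz_new
    return x

# Jacobi (point diagonal) preconditioning on a small SPD system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

The package's modularity mirrors this separation: the accelerator only needs matrix-vector products and a preconditioner application, which is why user-supplied matrix routines and many storage formats can be mixed freely with the Krylov accelerators.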
A self-adapting system for the automated detection of inter-ictal epileptiform discharges.
Lodder, Shaun S; van Putten, Michel J A M
2014-01-01
Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that will make computer-assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form "IED nominations", each with a corresponding certainty value based on the reliability of their contributing templates. The second phase takes the ten nominations with the highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update the certainty values of the remaining nominations, and another iteration is performed in which the ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations, respectively. Reviewing fifteen iterations for the 20-30 min recordings took approximately 5 min. The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents.
Computed inverse magnetic resonance imaging for magnetic susceptibility map reconstruction.
Chen, Zikuan; Calhoun, Vince
2012-01-01
This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
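Of the three inversion approaches mentioned, the Tikhonov-regularized inverse filter has a closed form in the Fourier domain: chi_hat = conj(D) * b_hat / (|D|^2 + lambda). A 1D stand-in for the 3D field-to-susceptibility deconvolution, with a hypothetical Gaussian kernel and box source in place of the dipole kernel and brain susceptibility map:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
chi = np.zeros(n)
chi[100:130] = 1.0                               # hypothetical susceptibility source
kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
kernel /= kernel.sum()                           # stand-in for the dipole kernel
D = np.fft.fft(np.fft.ifftshift(kernel))         # convolution operator in k-space

# Forward step: field map = kernel * chi, plus measurement noise
field = np.real(np.fft.ifft(D * np.fft.fft(chi)))
field += 1e-3 * rng.standard_normal(n)

# Tikhonov-regularized inverse filtering: conj(D)/(|D|^2 + lam) damps the
# frequencies where D ~ 0 and the plain inverse 1/D would blow up the noise
lam = 1e-3
chi_rec = np.real(np.fft.ifft(np.conj(D) * np.fft.fft(field) /
                              (np.abs(D) ** 2 + lam)))
```

The regularization parameter plays the same calibration role the abstract notes for the TV solver: too small and noise is amplified at the ill-conditioned frequencies, too large and edges of the source are over-smoothed.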
Computed inverse MRI for magnetic susceptibility map reconstruction
Chen, Zikuan; Calhoun, Vince
2015-01-01
Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
Stevenson, Fiona A; Gibson, William; Pelletier, Caroline; Chrysikou, Vasiliki; Park, Sophie
2015-05-08
UK-based research conducted within a healthcare setting generally requires approval from the National Research Ethics Service. Research ethics committees are required to assess a vast range of proposals, differing in both their topic and methodology. We argue the methodological benchmarks with which research ethics committees are generally familiar and which form the basis of assessments of quality do not fit with the aims and objectives of many forms of qualitative inquiry and their more iterative goals of describing social processes/mechanisms and making visible the complexities of social practices. We review current debates in the literature related to ethical review and social research, and illustrate the importance of re-visiting the notion of ethics in healthcare research. We present an analysis of two contrasting paradigms of ethics. We argue that the first of these is characteristic of the ways that NHS ethics boards currently tend to operate, and the second is an alternative paradigm, that we have labelled the 'iterative' paradigm, which draws explicitly on methodological issues in qualitative research to produce an alternative vision of ethics. We suggest that there is an urgent need to re-think the ways that ethical issues are conceptualised in NHS ethical procedures. In particular, we argue that embedded in the current paradigm is a restricted notion of 'quality', which frames how ethics are developed and worked through. Specific, pre-defined outcome measures are generally seen as the traditional marker of quality, which means that research questions that focus on processes rather than on 'outcomes' may be regarded as problematic. We show that the alternative 'iterative' paradigm offers a useful starting point for moving beyond these limited views. 
We conclude that a 'one size fits all' standardisation of ethical procedures and approach to ethical review acts against the production of knowledge about healthcare and dramatically restricts what can be known about the social practices and conditions of healthcare. Our central argument is that assessment of ethical implications is important, but that the current paradigm does not facilitate an adequate understanding of the very issues it aims to invigilate.
Optimal design of gene knockout experiments for gene regulatory network inference
Ud-Dean, S. M. Minhaz; Gunawan, Rudiyanto
2016-01-01
Motivation: We addressed the problem of inferring gene regulatory network (GRN) from gene expression data of knockout (KO) experiments. This inference is known to be underdetermined and the GRN is not identifiable from data. Past studies have shown that suboptimal design of experiments (DOE) contributes significantly to the identifiability issue of biological networks, including GRNs. However, optimizing DOE has received much less attention than developing methods for GRN inference. Results: We developed REDuction of UnCertain Edges (REDUCE) algorithm for finding the optimal gene KO experiment for inferring directed graphs (digraphs) of GRNs. REDUCE employed ensemble inference to define uncertain gene interactions that could not be verified by prior data. The optimal experiment corresponds to the maximum number of uncertain interactions that could be verified by the resulting data. For this purpose, we introduced the concept of edge separatoid which gave a list of nodes (genes) that upon their removal would allow the verification of a particular gene interaction. Finally, we proposed a procedure that iterates over performing KO experiments, ensemble update and optimal DOE. The case studies including the inference of Escherichia coli GRN and DREAM 4 100-gene GRNs, demonstrated the efficacy of the iterative GRN inference. In comparison to systematic KOs, REDUCE could provide much higher information return per gene KO experiment and consequently more accurate GRN estimates. Conclusions: REDUCE represents an enabling tool for tackling the underdetermined GRN inference. Along with advances in gene deletion and automation technology, the iterative procedure brings an efficient and fully automated GRN inference closer to reality. Availability and implementation: MATLAB and Python scripts of REDUCE are available on www.cabsel.ethz.ch/tools/REDUCE. Contact: rudi.gunawan@chem.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online. 
PMID:26568633
Huang, Chun-Jung; Webb, Heather E; Beasley, Kathleen N; McAlpine, David A; Tangsilsat, Supatchara E; Acevedo, Edmund O
2014-03-01
Pentraxin 3 (PTX3) has been recently identified as a biomarker of vascular inflammation in predicting cardiovascular events. The purpose of this study was to examine the effect of cardiorespiratory fitness on plasma PTX3 and cortisol responses to stress, utilizing a dual-stress model. Fourteen male subjects were classified into high-fit (HF) and low-fit (LF) groups and completed 2 counterbalanced experimental conditions. The exercise-alone condition (EAC) consisted of cycling at 60% maximal oxygen uptake for 37 min, while the dual-stress condition (DSC) included 20 min of a mental stress while cycling for 37 min. Plasma PTX3 revealed significant increases over time with a significant elevation at 37 min in both HF and LF groups in response to EAC and DSC. No difference in plasma PTX3 levels was observed between EAC and DSC. In addition, plasma cortisol revealed a significant condition by time interaction with greater levels during DSC at 37 min, whereas cardiorespiratory fitness level did not reveal different plasma cortisol responses in either the EAC or DSC. Aerobic exercise induces plasma PTX3 release, while additional acute mental stress, in a dual-stress condition, does not exacerbate or further modulate the PTX3 response. Furthermore, cardiorespiratory fitness may not affect the stress reactivity of plasma PTX3 to physical and combined physical and psychological stressors. Finally, the exacerbated cortisol responses to combined stress may provide the potential link to biological pathways that explain changes in physiological homeostasis that may be associated with an increase in the risk of cardiovascular disease.
Hirai, T; Kitajima, M; Nakamura, H; Okuda, T; Sasao, A; Shigematsu, Y; Utsunomiya, D; Oda, S; Uetani, H; Morioka, M; Yamashita, Y
2011-12-01
QUASAR is a particular application of the ASL method and facilitates user-independent quantification of brain perfusion. The purpose of this study was to assess the intermodality agreement of TBF measurements obtained with ASL and DSC MR imaging and the inter- and intraobserver reproducibility of glioma TBF measurements acquired by ASL at 3T. Two observers independently measured TBF in 24 patients with histologically proved glioma. ASL MR imaging with QUASAR and DSC MR imaging were performed on 3T scanners. The observers placed 5 regions of interest in the solid tumor on rCBF maps derived from ASL and DSC MR images and 1 region of interest in the contralateral brain and recorded the measured values. Maximum and average sTBF values were calculated. Intermodality and intra- and interobserver agreement were determined by using 95% Bland-Altman limits of agreement and ICCs. The intermodality agreement for maximum sTBF was good to excellent on DSC and ASL images; ICCs ranged from 0.718 to 0.884. The 95% limits of agreement ranged from 59.2% to 65.4% of the mean. ICCs for intra- and interobserver agreement for maximum sTBF ranged from 0.843 to 0.850 and from 0.626 to 0.665, respectively. The reproducibility of maximum sTBF measurements obtained by both methods was similar. In the evaluation of sTBF in gliomas, ASL with QUASAR at 3T yielded measurements and reproducibility similar to those of DSC perfusion MR imaging.
Effects of particle reinforcement and ECAP on the precipitation kinetics of an Al-Cu alloy
NASA Astrophysics Data System (ADS)
Härtel, M.; Wagner, S.; Frint, P.; F-X Wagner, M.
2014-08-01
The precipitation kinetics of Al-Cu alloys have recently been revisited in various studies, considering either the effect of severe plastic deformation (e.g., by equal-channel angular pressing - ECAP), or the effect of particle reinforcements. However, it is not clear how these effects interact when ECAP is performed on particle-reinforced alloys. In this study, we analyze how a combination of particle reinforcement and ECAP affects precipitation kinetics. After solution annealing, an AA2017 alloy (initial state: base material without particle reinforcement); AA2017 + 10 vol.-% Al2O3; and AA2017 + 10 vol.-% SiC were deformed in one pass in a 120° ECAP tool at a temperature of 140°C. Systematic differential scanning calorimetry (DSC) measurements of each condition were carried out. TEM specimens were prepared out of samples from additional DSC measurements, where the samples were immediately quenched in liquid nitrogen after reaching carefully selected temperatures. TEM analysis was performed to characterize the morphology of the different types of precipitates, and to directly relate microstructural information to the endo- and exothermic peaks in our DSC data. Our results show that both ECAP and particle reinforcement are associated with a shift of exothermic precipitation peaks towards lower temperatures. This effect is even more pronounced when ECAP and particle reinforcement are combined. The DSC data agrees well with our TEM observations of nucleation and morphology of different precipitates, indicating that DSC measurements are an appropriate tool for the analysis of how severe plastic deformation and particle reinforcement affect precipitation kinetics in Al-Cu alloys.
Khoram, Nafiseh; Zayane, Chadia; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem
2016-03-15
The calibration of the hemodynamic model that describes changes in blood flow and blood oxygenation during brain activation is a crucial step for successfully monitoring and possibly predicting brain activity. This in turn has the potential to provide diagnosis and treatment of brain diseases in early stages. We propose an efficient numerical procedure for calibrating the hemodynamic model using some fMRI measurements. The proposed solution methodology is a regularized iterative method equipped with a Kalman filtering-type procedure. The Newton component of the proposed method addresses the nonlinear aspect of the problem. The regularization feature is used to ensure the stability of the algorithm. The Kalman filter procedure is incorporated here to address the noise in the data. Numerical results obtained with synthetic data as well as with real fMRI measurements are presented to illustrate the accuracy, robustness to the noise, and the cost-effectiveness of the proposed method. We present numerical results that clearly demonstrate that the proposed method outperforms the Cubature Kalman Filter (CKF), one of the most prominent existing numerical methods. We have designed an iterative numerical technique, called the TNM-CKF algorithm, for calibrating the mathematical model that describes the single-event related brain response when fMRI measurements are given. The method appears to be highly accurate and effective in reconstructing the BOLD signal even when the measurements are tainted with high noise level (as high as 30%). Published by Elsevier B.V.
7 CFR 1744.30 - Automatic lien accommodations.
Code of Federal Regulations, 2011 CFR
2011-01-01
... supplemental mortgage is a valid and binding instrument enforceable in accordance with its terms, and recorded...: (1) The borrower has achieved a TIER of not less than 1.5 and a DSC of not less than 1.25 for each of... not less than 2.5 and a DSC of not less than 1.5 for each of the borrower's two fiscal years...
7 CFR 1744.30 - Automatic lien accommodations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... supplemental mortgage is a valid and binding instrument enforceable in accordance with its terms, and recorded...: (1) The borrower has achieved a TIER of not less than 1.5 and a DSC of not less than 1.25 for each of... not less than 2.5 and a DSC of not less than 1.5 for each of the borrower's two fiscal years...
2013-09-01
Figure 17. Reliable acoustic paths from a deep source to shallow receivers (From Urick 1983).
Figure 19. Computer generated ray diagram of the DSC for a source near the axis; reflected rays are omitted (From Urick 1983).
Figure 20. Worldwide DSC axis depths in
Code of Federal Regulations, 2010 CFR
2010-10-01
... System: Alerting:
- EPIRBs: 406.0-406.1 MHz (Earth-to-space); 1544-1545 MHz (space-to-Earth)
- INMARSAT-E EPIRBs: 1626.5-1645.5 MHz (Earth-to-space)
- INMARSAT Ship Earth Stations capable of voice and/or direct printing: 1626.5-1645.5 MHz (Earth-to-space)
- VHF DSC Ch. 70: 156.525 MHz
- MF/HF DSC: 2187...
Estimation of Temperature Range for Cryo Cutting of Frozen Mackerel using DSC
NASA Astrophysics Data System (ADS)
Okamoto, Kiyoshi; Hagura, Yoshio; Suzuki, Kanichi
Frozen mackerel flesh was subjected to measurement of its fracture stress (bending energy) in a low temperature range. The optimum conditions for low temperature cutting, "cryo cutting," were estimated from the results of enthalpy changes measured by a differential scanning calorimeter (DSC). There were two enthalpy changes for gross transition on the DSC chart for mackerel, one at -63°C to -77°C and the other at -96°C to -112°C. Thus we estimated that mackerel could be cut by bending below -63°C and that large decreases in bending energy would occur at around -77°C and -112°C. In testing, there were indeed two large decreases in bending energy for the test pieces of mackerel that had been frozen at -40°C, one at -70°C to -90°C and the other at -100°C to -120°C. Therefore, the test pieces of mackerel could be cut by bending at -70°C. The results showed that the DSC measurement of mackerel flesh gave a good estimation of the appropriate cutting temperature for mackerel.
Detection of cocrystal formation based on binary phase diagrams using thermal analysis.
Yamashita, Hiroyuki; Hirakura, Yutaka; Yuda, Masamichi; Teramura, Toshio; Terada, Katsuhide
2013-01-01
Although a number of studies have reported that cocrystals can form by heating a physical mixture of two components, details surrounding heat-induced cocrystal formation remain unclear. Here, we attempted to clarify the thermal behavior of a physical mixture and cocrystal formation in reference to a binary phase diagram. Physical mixtures prepared using an agate mortar were heated at rates of 2, 5, 10, and 30 °C/min using differential scanning calorimetry (DSC). Some mixtures were further analyzed using X-ray DSC and polarization microscopy. When a physical mixture consisting of two components which was capable of cocrystal formation was heated using DSC, an exothermic peak associated with cocrystal formation was detected immediately after an endothermic peak. In some combinations, several endothermic peaks were detected and associated with metastable eutectic melting, eutectic melting, and cocrystal melting. In contrast, when a physical mixture of two components which is incapable of cocrystal formation was heated using DSC, only a single endothermic peak associated with eutectic melting was detected. These experimental observations demonstrated how the thermal events were attributed to phase transitions occurring in a binary mixture and clarified the relationship between exothermic peaks and cocrystal formation.
NASA Astrophysics Data System (ADS)
Ye, Liming; Yang, Guixia; Van Ranst, Eric; Tang, Huajun
2013-03-01
A generalized, structural, time series modeling framework was developed to analyze the monthly records of absolute surface temperature, one of the most important environmental parameters, using a deterministic-stochastic combined (DSC) approach. Although the development of the framework was based on the characterization of the variation patterns of a global dataset, the methodology could be applied to any monthly absolute temperature record. Deterministic processes were used to characterize the variation patterns of the global trend and the cyclic oscillations of the temperature signal, involving polynomial functions and the Fourier method, respectively, while stochastic processes were employed to account for any remaining patterns in the temperature signal, involving seasonal autoregressive integrated moving average (SARIMA) models. A prediction of the monthly global surface temperature during the second decade of the 21st century using the DSC model shows that the global temperature will likely continue to rise at twice the average rate of the past 150 years. The evaluation of prediction accuracy shows that DSC models perform systematically well against selected models of other authors, suggesting that DSC models, when coupled with other eco-environmental models, can be used as a supplemental tool for short-term (~10-year) environmental planning and decision making.
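The deterministic stage of a DSC-style decomposition (polynomial trend plus Fourier harmonics, fitted by least squares) can be sketched on synthetic monthly data; the residual would then feed the stochastic SARIMA stage, which is omitted here. All signal parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(240)                        # 20 years of monthly data
truth = 0.002 * months + 2.0 * np.sin(2 * np.pi * months / 12)
y = truth + 0.3 * rng.standard_normal(months.size)

# Design matrix: quadratic trend + two annual Fourier harmonics
cols = [np.ones_like(months, float), months, months ** 2]
for h in (1, 2):
    cols.append(np.sin(2 * np.pi * h * months / 12))
    cols.append(np.cos(2 * np.pi * h * months / 12))
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
fit = X @ coef                                  # deterministic component
resid = y - fit                                 # would be modeled by SARIMA
```

Separating the signal this way is what lets the framework attribute the trend and seasonal cycle to deterministic terms while leaving only (approximately) stationary structure for the stochastic model.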
Garcia-Perez, Manuel; Adams, Thomas T; Goodrum, John W; Das, K C; Geller, Daniel P
2010-08-01
This paper describes the use of Differential Scanning Calorimetry (DSC) to evaluate the impact of varying mix ratios of bio-oil (pyrolysis oil) and bio-diesel on the oxidation stability and on some cold flow properties of resulting blends. The bio-oils employed were produced from the semi-continuous Auger pyrolysis of pine pellets and the batch pyrolysis of pine chips. The bio-diesel studied was obtained from poultry fat. The conditions used to prepare the bio-oil/bio-diesel blends as well as some of the fuel properties of these blends are reported. The experimental results suggest that the addition of bio-oil improves the oxidation stability of the resulting blends and modifies the crystallization behavior of unsaturated compounds. Upon the addition of bio-oil an increase in the oxidation onset temperature, as determined by DSC, was observed. The increase in bio-diesel oxidation stability is likely to be due to the presence of hindered phenols abundant in bio-oils. A relatively small reduction in DSC characteristic temperatures which are associated with cold flow properties was also observed but can likely be explained by a dilution effect. (c) 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zheng, Siqi; Wang, Li; Feng, Xuning; He, Xiangming
2018-02-01
Safety is a critical issue for lithium ion batteries used in electric vehicles and other applications. This paper probes the heat sources in the thermal runaway processes of lithium ion batteries of different chemistries using accelerating rate calorimetry (ARC) and differential scanning calorimetry (DSC). The adiabatic thermal runaway features of four types of commercial lithium ion batteries are tested using ARC, whereas the reaction characteristics of the component materials, including the cathode, the anode, and the separator, inside the four types of batteries are measured using DSC. The peaks and valleys of the critical component reactions measured by DSC can be matched to the fluctuations in the temperature rise rate measured by ARC; this correspondence between the DSC curves and the ARC curves is used to probe the heat sources in the thermal runaway process and reveal the thermal runaway mechanisms. The results and analysis indicate that internal short circuit is not the only path to thermal runaway, but it can contribute extra electrical heat, which is comparable with the heat released by chemical reactions. The analytical approach to thermal runaway mechanisms in this paper can guide the safety design of commercial lithium ion batteries.
Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Buskirk, Caleb Griffith
2017-06-14
Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.
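A multi-point calibration of the kind described can be sketched as a table of temperature-dependent correction factors applied by interpolation. The temperatures and factors below are assumed values for illustration, not measured sapphire data, and this is not the TA Instruments procedure.

```python
# Hypothetical calibration points: ratio of reference to measured sapphire
# heat capacity at several temperatures across the range of interest.
cal_temps = [-70.0, -20.0, 30.0, 80.0, 115.0]   # °C, assumed calibration points
cal_factors = [1.04, 1.02, 1.00, 0.99, 0.97]    # assumed reference/measured ratios

def calibration_factor(temp_c):
    """Piecewise-linear interpolation of the correction factor, clamped at the ends."""
    if temp_c <= cal_temps[0]:
        return cal_factors[0]
    if temp_c >= cal_temps[-1]:
        return cal_factors[-1]
    for i in range(1, len(cal_temps)):
        if temp_c <= cal_temps[i]:
            t0, t1 = cal_temps[i - 1], cal_temps[i]
            f0, f1 = cal_factors[i - 1], cal_factors[i]
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)

def corrected_cp(measured_cp, temp_c):
    """Apply the local correction factor to a measured heat capacity."""
    return measured_cp * calibration_factor(temp_c)
```

A single-point scheme amounts to using one constant factor everywhere; the interpolated table is what lets the correction track drift across a wide temperature interval.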
1989-06-23
Contents fragments: Comparison between MACH and POLAR; Flow Chart for VSTS Algorithm. The most recent changes are: (a) development of the VSTS (velocity space topology search) algorithm for calculating particle densities; (b) extension ... with simple analytic models. The largest modification of the MACH code was the implementation of the VSTS procedure, which constituted a complete ...
The external kink mode in diverted tokamaks
NASA Astrophysics Data System (ADS)
Turnbull, A. D.; Hanson, J. M.; Turco, F.; Ferraro, N. M.; Lanctot, M. J.; Lao, L. L.; Strait, E. J.; Piovesan, P.; Martin, P.
2016-06-01
The resistive kink behaves much like the ideal kink, with predominantly kink or interchange parity and no real sign of a tearing component. However, the growth rates scale with a fractional power of the resistivity near the surface. The results have a direct bearing on the conventional edge cutoff procedures used in most ideal MHD codes, as well as implications for ITER and for future reactor options.
Plane wave scattering by bow-tie posts
NASA Astrophysics Data System (ADS)
Lech, Rafal; Mazur, Jerzy
2004-04-01
The theory of scattering in free space by a novel two-dimensional dielectric-metallic post structure is developed using a combination of a modified iterative scattering procedure and an orthogonal expansion method. The far scattered field patterns for open structures are derived. The rotation of the post affects its scattered field characteristic, which makes it possible to adjust the characteristics of post arrays.
Non-equilibrium price theories
NASA Astrophysics Data System (ADS)
Helbing, Dirk; Kern, Daniel
2000-11-01
We propose two theories for the formation of stock prices under the condition that the number of available stocks is fixed. Both theories consider the balance equations for cash and several kinds of stocks. They also take into account interest rates, dividends, and transaction costs. The proposed theories have the advantage that they do not require iterative procedures to determine the price, which would be inefficient for simulations with many agents.
TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization
2016-11-28
On the Recoverable Robust Traveling Salesman Problem: The traveling salesman problem (TSP) is a well-known combinatorial optimization problem ... procedure for the robust traveling salesman problem. While this iterative algorithm results in an optimal solution to the robust TSP, computation ...
Thermal stress analysis of reusable surface insulation for shuttle
NASA Technical Reports Server (NTRS)
Ojalvo, I. U.; Levy, A.; Austin, F.
1974-01-01
An iterative procedure for accurately determining tile stresses associated with static mechanical and thermally induced internal loads is presented. The necessary conditions for convergence of the method are derived. A user-oriented computer program based upon the present method of analysis was developed. The program is capable of analyzing multi-tiled panels and determining the associated stresses. Typical numerical results from this computer program are presented.
Canopy, Erin; Evans, Matt; Boehler, Margaret; Roberts, Nicole; Sanfey, Hilary; Mellinger, John
2015-10-01
Endoscopic retrograde cholangiopancreatography is a challenging procedure performed by surgeons and gastroenterologists. We employed cognitive task analysis to identify steps and decision points for this procedure. Standardized interviews were conducted with expert gastroenterologists (7) and surgeons (4) from 4 institutions. A procedural step and cognitive decision point protocol was created from audio-taped transcriptions and was refined by 5 additional surgeons. Conceptual elements, sequential actions, and decision points were iterated for 5 tasks: patient preparation, duodenal intubation, selective cannulation, imaging interpretation with related therapeutic intervention, and complication management. A total of 180 steps were identified. Gastroenterologists identified 34 steps not identified by surgeons, and surgeons identified 20 steps not identified by gastroenterologists. The findings suggest that for complex procedures performed by diverse practitioners, more experts may help delineate distinctive emphases differentiated by training background and type of practice. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1989-01-01
A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure which result in a high degree of concurrency throughout the solution process are: (1) mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) a two-level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computers.
Bai, Jinbing; Swanson, Kristen M; Santacroce, Sheila J
2018-01-01
Parent interactions with their child can influence the child's pain and distress during painful procedures. Reliable and valid interaction analysis systems (IASs) are valuable tools for capturing these interactions. The extent to which IASs are used in observational research of parent-child interactions in pediatric populations is unknown. The aim was to identify and evaluate studies that assess the psychometric properties of initial iterations/publications of observational coding systems of parent-child interactions during painful procedures. Computerized databases searched included PubMed, CINAHL, PsycINFO, Health and Psychosocial Instruments, and Scopus, covering from inception of each database to January 2017. Studies were included if they reported use or psychometrics of parent-child IASs. The first assessment was whether the parent-child IASs were theory-based; next, using the Society of Pediatric Psychology Assessment Task Force criteria, IASs were assigned to one of three categories: well-established, approaching well-established, or promising. A total of 795 studies were identified through computerized searches. Eighteen studies were ultimately determined to be eligible for inclusion in the review, and 17 parent-child IASs were identified from these 18 studies. Among the 17 coding systems, 14 were suitable for use in children age 3 years or more; two were theory-based; and 11 included verbal and nonverbal parent behaviors that promoted either child coping or child distress. Four IASs were assessed as well-established; seven approached well-established; and six were promising. Findings indicate a need for the development of theory-based parent-child IASs that consider both verbal and nonverbal parent behaviors during painful procedures.
Findings also suggest a need for further testing of those parent-child IASs deemed "approaching well-established" or "promising". © 2017 World Institute of Pain.
NASA Astrophysics Data System (ADS)
Schirrer, A.; Westermayer, C.; Hemedi, M.; Kozek, M.
2013-12-01
This paper shows control design results, performance, and limitations of robust lateral control law designs based on the DGK-iteration mixed-μ-synthesis procedure for a large, flexible blended wing body (BWB) passenger aircraft. The aircraft dynamics is preshaped by a low-complexity inner loop control law providing stabilization, basic response shaping, and flexible mode damping. The μ controllers are designed to further improve vibration damping of the main flexible modes by exploiting the structure of the arising significant parameter-dependent plant variations. This is achieved by utilizing parameterized Linear Fractional Representations (LFR) of the aircraft rigid and flexible dynamics. Designs with various levels of LFR complexity are carried out and discussed, showing the achieved performance improvement over the initial controller and their robustness and complexity properties.
Construction and assembly of the wire planes for the MicroBooNE Time Projection Chamber
Acciarri, R.; Adams, C.; Asaadi, J.; ...
2017-03-09
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
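The joint estimation idea, alternating between alignment-error estimates and an object estimate, can be illustrated in one dimension with integer circular shifts. The signal and shifts below are invented, and real projection alignment involves sub-pixel shifts, noise, and a tomographic forward model; this is only a schematic analogue.

```python
# Alternating refinement in miniature: jointly recover integer shifts and a 1-D
# "object" from shifted copies, alternating (a) shift estimation against the
# current object estimate and (b) object re-estimation from the aligned copies.
obj = [0.0, 0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0]   # invented object
true_shifts = [0, 1, 2]                           # invented alignment errors

def shift(sig, s):
    """Circularly shift a list to the right by s samples."""
    return [sig[(i - s) % len(sig)] for i in range(len(sig))]

projections = [shift(obj, s) for s in true_shifts]

est = projections[0][:]          # initial object guess: the first copy as-is
for _ in range(5):
    # (a) best circular shift of each copy against the current estimate
    shifts = []
    for p in projections:
        errs = [(sum((a - b) ** 2 for a, b in zip(shift(p, -s), est)), s)
                for s in range(len(obj))]
        shifts.append(min(errs)[1])
    # (b) re-estimate the object as the mean of the back-shifted copies
    aligned = [shift(p, -s) for p, s in zip(projections, shifts)]
    est = [sum(col) / len(aligned) for col in zip(*aligned)]
```

Each half-step can only reduce the misfit, which is why the alternation settles: here it recovers the shifts and the object exactly in the first pass.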
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aliaga, José I., E-mail: aliaga@uji.es; Alonso, Pedro; Badía, José M.
We introduce a new iterative Krylov subspace-based eigensolver for the simulation of macromolecular motions on desktop multithreaded platforms equipped with multicore processors and, possibly, a graphics accelerator (GPU). The method consists of two stages, with the original problem first reduced into a simpler band-structured form by means of a high-performance compute-intensive procedure. This is followed by a memory-intensive but low-cost Krylov iteration, which is off-loaded to be computed on the GPU by means of an efficient data-parallel kernel. The experimental results reveal the performance of the new eigensolver. Concretely, when applied to the simulation of macromolecules with a few thousand degrees of freedom and the number of eigenpairs to be computed is small to moderate, the new solver outperforms other methods implemented as part of high-performance numerical linear algebra packages for multithreaded architectures.
Contact stresses in pin-loaded orthotropic plates
NASA Technical Reports Server (NTRS)
Hyer, M. W.; Klang, E. C.
1984-01-01
The effects of pin elasticity, friction, and clearance on the stresses near the hole in a pin-loaded orthotropic plate are described. The problem is modeled as a contact elasticity problem using complex variable theory, the pin and the plate being two elastic bodies interacting through contact. This modeling is in contrast to previous work, which assumed either that the pin is rigid or that it exerts a known cosinusoidal radial traction on the hole boundary; neither approach explicitly involves a pin. A collocation procedure and iteration were used to obtain numerical results for a variety of plate and pin elastic properties and various levels of friction and clearance. Collocation was used to enforce the boundary conditions, and iteration was used to find the contact and no-slip regions on the boundary. Details of the numerical scheme are discussed.
Adaptive Discrete Hypergraph Matching.
Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao
2018-02-01
This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad-hoc post-binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. Under moderate conditions, this solver can become trapped in a cyclic sequence whose length is tied to the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out of this degenerate case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.
A method for the dynamic and thermal stress analysis of space shuttle surface insulation
NASA Technical Reports Server (NTRS)
Ojalvo, I. U.; Levy, A.; Austin, F.
1975-01-01
The thermal protection system of the space shuttle consists of thousands of separate insulation tiles bonded to the orbiter's surface through a soft strain-isolation layer. The individual tiles are relatively thick and possess nonuniform properties. Therefore, each is idealized by finite-element assemblages containing up to 2500 degrees of freedom. Since the tiles affixed to a given structural panel will, in general, interact with one another, application of the standard direct-stiffness method would require equation systems involving excessive numbers of unknowns. This paper presents a method which overcomes this problem through an efficient iterative procedure which requires treatment of only a single tile at any given time. Results of associated static, dynamic, and thermal stress analyses and sufficient conditions for convergence of the iterative solution method are given.
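The single-tile-at-a-time iteration is, in spirit, a block Gauss-Seidel sweep: each subproblem is solved with its neighbors held fixed, and the sweep is repeated until the coupling settles. The toy system below (two scalar "tiles", diagonally dominant so the sweep is guaranteed to converge) is a schematic analogue, not the paper's finite-element formulation.

```python
# Block Gauss-Seidel on a toy coupled system, one "tile" at a time:
#   4*x1 +   x2 = 9
#     x1 + 3*x2 = 7
# Exact solution: x1 = 20/11, x2 = 19/11. Diagonal dominance makes the sweep
# a contraction (error shrinks by a factor of 12 per sweep here).
x1 = x2 = 0.0
for sweep in range(50):
    x1 = (9.0 - x2) / 4.0   # solve "tile 1" with tile 2 frozen
    x2 = (7.0 - x1) / 3.0   # solve "tile 2" with the updated tile 1
```

Treating one tile per step keeps each solve small, at the cost of iterating; the sufficient condition for convergence plays the role the paper's convergence conditions play for the full tiled panel.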
Formulation for Simultaneous Aerodynamic Analysis and Design Optimization
NASA Technical Reports Server (NTRS)
Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.
1993-01-01
An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme: a good initialization scheme can improve convergence speed and can determine whether a global minimum is found and whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
Solid state characterization of dehydroepiandrosterone.
Chang, L C; Caira, M R; Guillory, J K
1995-10-01
Three polymorphs (forms I-III), a monohydrate (form S2), and three new solvates [4:1 hydrate (form S1), monohydrate (form S3), and methanol half-solvate (form S4)] were isolated and characterized by X-ray powder diffractometry (XRPD), IR spectroscopy, differential scanning calorimetry (DSC), hot stage microscopy, solution calorimetry, and their dissolution rates. A new polymorph, designated as form V, melting at 146.5-148 degrees C, was observed by hot stage microscopy. Our results indicate that only forms I and S4 exhibit reproducible DSC thermograms. Five of the isolated modifications undergo phase transformation on heating, and their DSC thermograms are not reproducible. Interpretation of DSC thermograms was facilitated by use of hot stage microscopy. The identification of each modification is based on XRPD patterns (except forms S3 and S4, for which the XRPD patterns are indistinguishable) and IR spectra. In the IR spectra, a significant difference was observed in the OH stretching region of all seven modifications. In a purity determination study, 5% of a contaminant modification in binary mixtures of several modifications could be detected by use of XRPD. To obtain a better understanding of the thermodynamic properties of these modifications, a series of increasing heating rates and different pan types were used in DSC. According to Burger's rule, forms I-III are monotropic polymorphs with decreasing stability in the order form I > form II > form III. The melting onsets and heats of fusion for forms I-III are 149.1 degrees C, 25.5 kJ/mol; 140.8 degrees C, 24.6 kJ/mol; and 137.8 degrees C, 24.0 kJ/mol, respectively. For form III the heat of fusion was calculated from heat of solution and DSC data. In the case of form S1 the melting point, 127.2 degrees C, was obtained by DSC using a hermetically sealed pan. 
The relative stabilities of the six modifications stored under high humidity conditions were predicted to be, on the basis of the heat of solution and thermal analysis data, form S2 > form S3 > form S1 > form I > form II > form III. However, the results of the dissolution rate determination were inconsistent with the heat of solution data. The stable form I shows a higher initial dissolution rate than the metastable form II and the unstable form III. All modifications were converted into the stable monohydrate, form S2, during the dissolution study, suggesting that the moisture level in solid formulations should be carefully controlled.
Deuterium results at the negative ion source test facility ELISE
NASA Astrophysics Data System (ADS)
Kraus, W.; Wünderlich, D.; Fantz, U.; Heinemann, B.; Bonomo, F.; Riedl, R.
2018-05-01
The ITER neutral beam system will be equipped with large radio frequency (RF) driven negative ion sources, with a cross section of 0.9 m × 1.9 m, which have to deliver extracted D- ion beams of 57 A at 1 MeV for 1 h. At the ELISE (Extraction from a Large Ion Source Experiment) test facility, a source of half this size has been operational since 2013. The goal of this experiment is to demonstrate high operational reliability and to achieve the extracted current densities and beam properties required for ITER. Technical improvements of the source design and the RF system were necessary to provide reliable operation in steady state with an RF power of up to 300 kW. While in short pulses the required D- current density has almost been reached, the performance in long pulses is limited, particularly in Deuterium, by inhomogeneous and unstable currents of co-extracted electrons. By applying refined caesium evaporation and distribution procedures and by reducing and symmetrizing the electron currents, considerable progress has been made, and up to 190 A/m2 of D-, corresponding to 66% of the value required for ITER, has been extracted for 45 min.
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
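The model-population-analysis flavor of weighted binary matrix sampling can be sketched as follows: sample binary sub-models from per-variable inclusion probabilities, score them, and re-estimate the probabilities from the best performers so the variable space shrinks toward informative variables. The scoring function, variable counts, and update rule below are invented for illustration; this is not the published VISSA algorithm.

```python
import random

random.seed(1)

n_vars = 10
relevant = {0, 1, 2}   # pretend these are the informative variables

def score(subset):
    # Toy fitness: reward informative variables, mildly penalize the rest.
    good = sum(1.0 for v in subset if v in relevant)
    bad = sum(1.0 for v in subset if v not in relevant)
    return good - 0.2 * bad

probs = [0.5] * n_vars
for _ in range(30):
    models = []
    for _ in range(100):
        # Weighted binary sampling: include each variable with its probability.
        subset = {v for v in range(n_vars) if random.random() < probs[v]}
        models.append((score(subset), subset))
    models.sort(key=lambda m: -m[0])
    best = models[:20]   # keep the top 20% of sub-models
    # Re-estimate inclusion probabilities from the surviving population.
    probs = [sum(1 for _, s in best if v in s) / len(best) for v in range(n_vars)]

selected = [v for v, p in enumerate(probs) if p > 0.5]
```

The two highlighted rules appear here in caricature: the sampled space concentrates (shrinks) each round, and it does so only toward sub-models that outperform the previous population.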
Algorithms for the optimization of RBE-weighted dose in particle therapy.
Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M
2013-01-21
We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, like the BFGS-algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented by convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
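A generic Fletcher-Reeves conjugate-gradient loop with a backtracking (Armijo) line search, run on an invented two-variable quadratic, illustrates the algorithm family discussed; it is not the TRiP98 implementation, and the objective is not the RBE-weighted dose.

```python
# Fletcher-Reeves nonlinear conjugate gradient on a toy quadratic.
def f(p):
    x, y = p
    return x * x + 10.0 * y * y

def grad(p):
    return [2.0 * p[0], 20.0 * p[1]]

x = [3.0, 1.0]
g = grad(x)
d = [-gi for gi in g]
for _ in range(100):
    if sum(gi * gi for gi in g) < 1e-24:          # gradient ~ 0: converged
        break
    gd = sum(gi * di for gi, di in zip(g, d))
    if gd >= 0.0:                                 # safeguard: restart downhill
        d = [-gi for gi in g]
        gd = -sum(gi * gi for gi in g)
    alpha, fx = 1.0, f(x)
    while alpha > 1e-12 and f([xi + alpha * di for xi, di in zip(x, d)]) > fx + 1e-4 * alpha * gd:
        alpha *= 0.5                              # backtrack to sufficient decrease
    x = [xi + alpha * di for xi, di in zip(x, d)]
    g_new = grad(x)
    # Fletcher-Reeves coefficient: ratio of successive squared gradient norms.
    beta = sum(gi * gi for gi in g_new) / sum(gi * gi for gi in g)
    d = [-gi + beta * di for gi, di in zip(g_new, d)]
    g = g_new
```

The conjugate direction reuses curvature information from previous steps, which is the source of the speedup over steepest descent reported in the abstract.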
Development of parallel algorithms for electrical power management in space applications
NASA Technical Reports Server (NTRS)
Berry, Frederick C.
1989-01-01
The application of parallel techniques to electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems, each representing a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements using the Newton-Raphson method. The independent local problems produce results for voltage and power, which are then passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine whether any correction is needed on the local problems. The coordinator problem is also solved by an iterative method, much like the local problems; the iterative method for the coordination problem is again the Newton-Raphson method. Each iteration at the coordination level therefore produces new values for the local problems, and the local problems must be solved again, along with the coordinator problem, until convergence conditions are met.
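The Newton-Raphson iteration at the heart of load flow analysis can be shown on a one-variable toy problem: solving b·sin(θ) = p for the angle across a single lossless line. The susceptance and power values are assumed for illustration; a real load flow solves a multi-bus Jacobian system at each step.

```python
import math

# Toy Newton-Raphson "load flow" in one variable: find the voltage angle theta
# on a lossless line of susceptance b delivering power p, i.e. solve
# b*sin(theta) - p = 0. The Jacobian here is the scalar b*cos(theta).
b_line, p = 10.0, 5.0   # per-unit susceptance and scheduled power (assumed)
theta = 0.0             # flat start, as in practice
for _ in range(20):
    mismatch = b_line * math.sin(theta) - p
    if abs(mismatch) < 1e-12:
        break
    theta -= mismatch / (b_line * math.cos(theta))   # Newton update
```

The mismatch shrinks quadratically near the solution (θ = π/6 here), which is why a handful of iterations suffices per local problem in the decomposition-coordination scheme.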
Non-linear eigensolver-based alternative to traditional SCF methods
NASA Astrophysics Data System (ADS)
Gavin, Brendan; Polizzi, Eric
2013-03-01
The self-consistent iterative procedure in Density Functional Theory calculations is revisited using a new, highly efficient and robust algorithm for solving the non-linear eigenvector problem (i.e., H(X)X = EX) of the Kohn-Sham equations. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm, and provides a fundamental and practical numerical solution for addressing the non-linearity of the Hamiltonian with the occupied eigenvectors. In contrast to SCF techniques, the traditional outer iterations are replaced by subspace iterations that are intrinsic to the FEAST algorithm, while the non-linearity is handled at the level of a projected reduced system which is orders of magnitude smaller than the original one. Using a series of numerical examples, it will be shown that our approach can outperform traditional SCF mixing techniques such as Pulay-DIIS by providing a high convergence rate and by converging to the correct solution regardless of the choice of the initial guess. We also discuss a practical implementation of the technique that can be achieved effectively using the FEAST solver package. This research is supported by NSF under Grant #ECCS-0846457 and Intel Corporation.
Iterative Track Fitting Using Cluster Classification in Multi Wire Proportional Chamber
NASA Astrophysics Data System (ADS)
Primor, David; Mikenberg, Giora; Etzion, Erez; Messer, Hagit
2007-10-01
This paper addresses the problem of track fitting of a charged particle in a multi wire proportional chamber (MWPC) using cathode readout strips. When a charged particle crosses a MWPC, a positive charge is induced on a cluster of adjacent strips. In the presence of high radiation background, the cluster charge measurements may be contaminated by background particles, leading to less accurate hit position estimation. The least squares method for track fitting assumes the same position error distribution for all hits and thus loses its optimal properties on contaminated data. For this reason, a new robust algorithm is proposed. The algorithm first uses the known spatial charge distribution caused by a single charged particle over the strips, and classifies the clusters into "clean" and "dirty" clusters. Then, using the classification results, it performs an iterative weighted least squares fitting procedure, updating its optimal weights at each iteration. The performance of the suggested algorithm is compared to other track fitting techniques using a simulation of tracks with radiation background. It is shown that the algorithm improves the track fitting performance significantly. A practical implementation of the algorithm is presented for muon track fitting in the cathode strip chamber (CSC) of the ATLAS experiment.
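The reweighting idea can be caricatured with a robust iteratively reweighted least-squares line fit; the detector geometry, noise levels, and the Cauchy-style weight function below are illustrative assumptions, not the paper's cluster classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated straight track y = a*z + b sampled at 8 detector planes, with two
# "dirty" (background-contaminated) hits shifted off the track.
z = np.linspace(0.0, 1.0, 8)
a_true, b_true = 0.30, 0.10
y = a_true * z + b_true + rng.normal(0.0, 0.002, z.size)
y[2] += 0.05                               # contaminated hits
y[5] -= 0.04

def fit(z, y, w):
    """Weighted least-squares straight-line fit; returns (slope, intercept)."""
    W = np.diag(w)
    A = np.column_stack([z, np.ones_like(z)])
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

w = np.ones_like(z)                        # start unweighted
for _ in range(10):                        # iterative reweighting
    a, b = fit(z, y, w)
    r = np.abs(y - (a * z + b))
    s = 1.4826 * np.median(r) + 1e-12      # robust scale estimate (MAD)
    w = 1.0 / (1.0 + (r / s) ** 2)         # down-weight large residuals
```

After a few iterations the contaminated hits carry near-zero weight and the fitted slope and intercept return close to the true track parameters, which is the qualitative behavior the paper exploits.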
NASA Technical Reports Server (NTRS)
Walker, H. F.
1976-01-01
Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
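The step-size condition can be illustrated on the simplest case, a normal mean with known variance, where the Fisher-scaled steepest-ascent update contracts exactly when the step size lies in (0, 2); the sample size, true mean, and step sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(1.5, 1.0, 500)              # sample from N(theta, 1), theta unknown
mle = x.mean()                             # closed-form ML estimate, for reference

def ascent(alpha, iters=100):
    """Fisher-scaled steepest ascent on the N(theta, 1) log-likelihood."""
    theta = 0.0
    for _ in range(iters):
        # score / Fisher information reduces to mean(x) - theta here, so the
        # update is theta <- (1 - alpha) * theta + alpha * mean(x)
        theta = theta + alpha * (x.mean() - theta)
    return theta

inside = ascent(1.5)                       # step size in (0, 2): converges
outside = ascent(2.5, iters=50)            # step size > 2: diverges
```

The iteration multiplies the error by (1 - alpha) each step, so it converges precisely for 0 < alpha < 2, mirroring the local convergence condition stated in the abstract.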
Squeeze film dampers with oil hole feed
NASA Technical Reports Server (NTRS)
Chen, P. Y. P.; Hahn, E. J.
1994-01-01
To improve the damping capability of squeeze film dampers, oil hole feed rather than circumferential groove feed is a practical proposition. However, circular orbit response can no longer be assumed, significantly complicating the design analysis. This paper details a feasible transient solution procedure for such dampers, with particular emphasis on the additional difficulties due to the introduction of oil holes. It is shown how a cosine power series solution may be utilized to evaluate the oil hole pressure contributions, enabling appropriate tabular data to be compiled. The solution procedure is shown to be applicable even in the presence of flow restrictors, albeit at the expense of introducing an iteration at each time step. Though not of primary interest, the procedure is also applicable to dynamically loaded journal bearings with oil hole feed.
A general framework for regularized, similarity-based image restoration.
Kheradmand, Amin; Milanfar, Peyman
2014-12-01
Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function consisting of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This results in desirable spectral properties for the normalized Laplacian: it is symmetric, positive semidefinite, and returns the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations; in each outer iteration, the similarity weights are recomputed using the previous estimate, and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to perform spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
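A minimal one-dimensional sketch of the outer/inner structure, assuming an unnormalized chain-graph Laplacian and a direct solve in place of the paper's balanced normalized Laplacian and conjugate-gradient inner iterations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy piecewise-constant signal; the graph is the 1-D chain of pixels.
x_true = np.concatenate([np.zeros(30), np.ones(30)])
y = x_true + rng.normal(0.0, 0.1, x_true.size)

def chain_laplacian(x, h=0.2):
    """Graph Laplacian with kernel similarity weights between neighbors."""
    n = x.size
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = np.exp(-((x[i] - x[i + 1]) / h) ** 2)
    return np.diag(W.sum(axis=1)) - W      # L = D - W

lam, x = 5.0, y.copy()
for _ in range(5):                         # outer iterations: recompute weights
    L = chain_laplacian(x)
    # inner step: minimize ||y - x||^2 + lam * x^T L x via the normal equations
    x = np.linalg.solve(np.eye(y.size) + lam * L, y)

rmse_in = np.sqrt(np.mean((y - x_true) ** 2))
rmse_out = np.sqrt(np.mean((x - x_true) ** 2))
```

Because the similarity weight nearly vanishes across the large jump, the smoothing is edge-preserving: noise is averaged away within each flat segment while the discontinuity survives, which is the qualitative behavior of the paper's similarity-based regularizer.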
Berkovich, Inbal; Mavila, Sudheendran; Iliashevsky, Olga; Kozuch, Sebastian
2016-01-01
High molecular weight polybutadienes and rhodium complexes were used to produce single chain organometallic nanoparticles. Irradiation of high cis-polybutadiene in the presence of a photosensitizer isomerised the double bonds to produce differing cis/trans ratios within the polymer. Notably, a higher cis percentage of carbon–carbon double bonds within the polymer structure led to faster binding of metal ions, as well as their faster removal by competing phosphine ligands. The experimental results were supported and rationalized by DFT computations. PMID:28936327
Static telescope aberration measurement using lucky imaging techniques
NASA Astrophysics Data System (ADS)
López-Marrero, Marcos; Rodríguez-Ramos, Luis Fernando; Marichal-Hernández, José Gil; Rodríguez-Ramos, José Manuel
2012-07-01
A procedure has been developed to compute static aberrations once the telescope PSF has been measured with the lucky imaging technique, using a nearby star close to the object of interest as the point source to probe the optical system. This PSF is iteratively turned into a phase map at the pupil using the Gerchberg-Saxton algorithm and then converted into the appropriate actuation information for a deformable mirror having a low actuator count but large stroke capability. The main advantage of this procedure is the capability of correcting static aberrations in the specific pointing direction, without the need for a wavefront sensor.
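The PSF-to-phase step can be sketched with a textbook Gerchberg-Saxton loop; the pupil geometry and the synthetic tilt aberration below are illustrative assumptions, not the paper's optical configuration:

```python
import numpy as np

N = 64
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = (xx**2 + yy**2 < (N // 4) ** 2).astype(float)       # circular aperture
true_phase = 2.0 * np.pi * xx / N                           # synthetic tilt aberration
psf_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))  # "measured" PSF modulus

def fourier_error(g):
    return np.linalg.norm(np.abs(np.fft.fft2(g)) - psf_amp) / np.linalg.norm(psf_amp)

g = pupil.astype(complex)            # initial guess: flat phase in the pupil
err0 = fourier_error(g)
for _ in range(200):                 # Gerchberg-Saxton iterations
    G = np.fft.fft2(g)
    G = psf_amp * np.exp(1j * np.angle(G))   # impose measured PSF modulus
    g = np.fft.ifft2(G)
    g = pupil * np.exp(1j * np.angle(g))     # impose known pupil modulus
err = fourier_error(g)
```

The alternating-projection error is non-increasing, so the Fourier-domain mismatch shrinks from its initial value; in the paper, the recovered pupil phase (here `np.angle(g)` inside the aperture) would be mapped to deformable-mirror actuator commands.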
The PX-EM algorithm for fast stable fitting of Henderson's mixed model
Foulley, Jean-Louis; Van Dyk, David A
2000-01-01
This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression case. PMID:14736399
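For orientation, the basic EM algorithm being compared against can be sketched on the simplest balanced one-way random-effects model (ML rather than REML, and without the parameter expansion of PX-EM); the data sizes and true variance components below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated one-way random-effects data: y_ij = mu + a_i + e_ij,
# with a_i ~ N(0, sa2) and e_ij ~ N(0, se2).
m_grp, n_per = 50, 10
mu_true, sa2_true, se2_true = 2.0, 1.0, 0.25
a = rng.normal(0.0, np.sqrt(sa2_true), m_grp)
y = mu_true + a[:, None] + rng.normal(0.0, np.sqrt(se2_true), (m_grp, n_per))

mu, sa2, se2 = y.mean(), 1.0, 1.0
for _ in range(500):                       # basic EM iterations
    # E-step: posterior mean and variance of each random effect a_i
    shrink = n_per * sa2 / (n_per * sa2 + se2)
    b = shrink * (y.mean(axis=1) - mu)
    v = sa2 * se2 / (n_per * sa2 + se2)
    # M-step: update variance components and the fixed effect
    sa2 = np.mean(b**2) + v
    se2 = np.mean((y - mu - b[:, None]) ** 2) + v
    mu = np.mean(y - b[:, None])
```

PX-EM augments this scheme with an expanded working parameter that rescales the random effects at each M-step, which is what yields the faster convergence reported in the abstract; the plain loop above is only the baseline.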
Code of Federal Regulations, 2012 CFR
2012-01-01
... financial ratios: (i) TIER of 1.25; (ii) Operating TIER of 1.1; (iii) DSC of 1.25; and Operating DSC of 1.1... Coverage Ratios Requirements. Section 5.5. Depreciation Rates. Section 5.6. Property Maintenance. Section 5.7. Financial Books. Section 5.8. Rights of Inspection. Section 5.9. Area Coverage. Section 5.10...
NASA Astrophysics Data System (ADS)
Bilyeu, Bryan
Kinetic equation parameters for the curing reaction of a commercial glass fiber reinforced high performance epoxy prepreg, composed of the tetrafunctional epoxy tetraglycidyl 4,4'-diaminodiphenyl methane (TGDDM), the tetrafunctional amine curing agent 4,4'-diaminodiphenylsulfone (DDS) and an ionic initiator/accelerator, are determined by various thermal analysis techniques and the results compared. The reaction is monitored through the heat generated, as determined by differential scanning calorimetry (DSC), and by high speed DSC when the reaction rate is high. The changes in physical properties indicating increasing conversion are followed by shifts in glass transition temperature determined by DSC, temperature-modulated DSC (TMDSC), step scan DSC and high speed DSC, thermomechanical analysis (TMA), dynamic mechanical analysis (DMA) and thermally stimulated depolarization (TSD). Changes in viscosity, also indicative of degree of conversion, are monitored by DMA. Thermal stability as a function of degree of cure is monitored by thermogravimetric analysis (TGA). The parameters of the general kinetic equations, including activation energy and rate constant, are explained and used to compare results of the various techniques. The utility of the kinetic descriptions is demonstrated in the construction of a time-temperature-transformation (TTT) diagram and a continuous heating transformation (CHT) diagram for rapid determination of processing parameters in the processing of prepregs. Shrinkage due to both resin consolidation and fiber rearrangement is measured as the linear expansion of the piston on a quartz dilatometry cell using TMA. The shrinkage of prepregs was determined to depend on the curing temperature, the applied pressure and the fiber orientation. Chemical modification of an epoxy was done by mixing a fluorinated aromatic amine (aniline) with a standard aliphatic amine as a curing agent for a commercial diglycidyl ether of bisphenol A (DGEBA) epoxy.
The resulting cured network was tested for wear resistance using tribological techniques. Of the six anilines, 3-fluoroaniline and 4-fluoroaniline were determined to have lower wear than the unmodified epoxy, while the others showed much higher wear rates.
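As a hedged illustration of the kind of kinetic description discussed above, an nth-order Arrhenius cure model can be integrated isothermally; the pre-exponential factor, activation energy, and reaction order below are invented for illustration and are not the fitted values for this TGDDM/DDS system:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)
# Hypothetical nth-order cure kinetics parameters (illustrative only).
A, Ea, n = 1.0e5, 7.0e4, 1.5   # 1/s, J/mol, reaction order

def cure(T, t_end=3600.0, dt=1.0):
    """Integrate d(alpha)/dt = A exp(-Ea/RT) (1 - alpha)^n at constant T (K)."""
    k = A * np.exp(-Ea / (R * T))
    alpha = 0.0
    for _ in range(int(t_end / dt)):       # explicit Euler time stepping
        alpha += dt * k * (1.0 - alpha) ** n
        alpha = min(alpha, 1.0)
    return alpha

a_150 = cure(150.0 + 273.15)               # degree of cure after 1 h at 150 C
a_180 = cure(180.0 + 273.15)               # degree of cure after 1 h at 180 C
```

Sweeping such an integration over temperature and time is, in spirit, how iso-conversion contours for a TTT or CHT diagram are assembled once the activation energy and rate constant have been fitted from the DSC data.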
NASA Astrophysics Data System (ADS)
Girault, Isabelle; d'Ham, Cedric; Ney, Muriel; Sanchez, Eric; Wajeman, Claire
2012-04-01
Many studies have stressed students' lack of understanding of experiments in laboratories. Some researchers suggest that having students design all or part of an entire experiment, as part of an inquiry-based approach, would overcome certain difficulties. This requires that a procedure be written for the experimental design. The aim of this paper is to describe the characteristics of a procedure in science laboratories, in an educational context. As a starting point, this paper proposes a model in the form of a hierarchical task diagram that gives the general structure of any procedure. This model allows both the analysis of existing procedures and the design of a new inquiry-based approach. The obtained characteristics are further organized into criteria that can help both teachers and students assess a procedure during and after its writing. These results are obtained through two different sets of data. First, the characteristics of procedures are established by analysing laboratory manuals. This allows the organization and type of information in procedures to be defined. The analysis reveals that students are seldom asked to write a full procedure, but sometimes have to specify tasks within a procedure. Secondly, iterative interviews are undertaken with teachers. This leads to a list of criteria for evaluating a procedure.
NASA Astrophysics Data System (ADS)
Moore, T. S.; Sanderman, J.; Baldock, J.; Plante, A. F.
2016-12-01
National-scale inventories typically include soil organic carbon (SOC) content, but not chemical composition or biogeochemical stability. Australia's Soil Carbon Research Programme (SCaRP) represents a national inventory of SOC content and composition in agricultural systems. The program used physical fractionation followed by 13C nuclear magnetic resonance (NMR) spectroscopy. While these techniques are highly effective, they are typically too expensive and time consuming for use in large-scale SOC monitoring. We seek to understand whether thermal analysis is a viable alternative. Coupled differential scanning calorimetry (DSC) and evolved gas analysis (CO2- and H2O-EGA) yields valuable data on SOC composition and stability via ramped combustion. The technique requires little training to use, and does not require fractionation or other sample pre-treatment. We analyzed 300 agricultural samples collected by SCaRP, divided into four fractions: whole soil, coarse particulates (POM), untreated mineral-associated (HUM), and hydrofluoric acid (HF)-treated HUM. All samples were analyzed by DSC-EGA, but only the POM and HF-HUM fractions were analyzed by NMR. Multivariate statistical analyses were used to explore natural clustering in SOC composition and stability based on the DSC-EGA data. A partial least-squares regression (PLSR) model was used to explore correlations among the NMR and DSC-EGA data. The correlations revealed regions of combustion attributable to specific functional groups, which may relate to SOC stability. We are increasingly challenged with developing an efficient technique to assess SOC composition and stability at large spatial and temporal scales. Correlations between NMR and DSC-EGA may demonstrate the viability of using thermal analysis in lieu of more demanding methods in future large-scale surveys, and may provide data that go beyond chemical composition to better approach quantification of biogeochemical stability.
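The PLSR step can be sketched with a minimal NIPALS implementation on synthetic data standing in for the NMR and DSC-EGA matrices; all dimensions, latent structure, and noise levels below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "spectra": 40 samples x 50 channels driven by 2 latent factors,
# with a response correlated to the first factor.
T_lat = rng.standard_normal((40, 2))
P_lat = rng.standard_normal((2, 50))
X = T_lat @ P_lat + 0.05 * rng.standard_normal((40, 50))
y = T_lat[:, 0] + 0.05 * rng.standard_normal(40)

def pls1(X, y, n_comp=2):
    """Minimal NIPALS PLS1: regression coefficients for centered data."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc; w /= np.linalg.norm(w)    # weight vector
        t = Xc @ w                               # scores
        p = Xc.T @ t / (t @ t)                   # X loadings
        q = (yc @ t) / (t @ t)                   # y loading
        Xc = Xc - np.outer(t, p)                 # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)       # coefficients mapping X -> y

b = pls1(X, y)
pred = (X - X.mean(0)) @ b + y.mean()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

PLSR is suited to this setting because the predictor channels (thermogram temperatures, spectral bins) are many and strongly collinear, so regression through a few latent components is far more stable than ordinary least squares on the raw channels.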
Hu, Guanying; Yuan, Xing; Zhang, Sanyin; Wang, Ruru; Yang, Miao; Wu, Chunjie; Wu, Zhigang; Ke, Xiao
2015-02-01
Danshu capsule (DSC) is a medicinal compound in traditional Chinese medicine (TCM). It is commonly used for the treatment of acute and chronic cholecystitis as well as cholelithiasis. To study its choleretic effect, healthy rats were randomly divided into DSC high (DSCH, 900 mg/kg), medium (DSCM, 450 mg/kg), and low (DSCL, 225 mg/kg) groups, a Xiaoyan Lidan tablet group (XYLDT, 750 mg/kg), and a saline group. The bile was collected for 1 h after a 20-minute stabilization as the base level, and at 1 h, 2 h, 3 h, and 4 h after drug administration, respectively. Bile volume, total cholesterol, and total bile acid were measured at each time point. The results revealed that DSC significantly stimulated bile secretion, decreased the total cholesterol level and increased the total bile acid level; it therefore had choleretic effects. To identify the active components contributing to these choleretic effects, five major constituents, menthol (39.33 mg/kg), menthone (18.02 mg/kg), isomenthone (8.18 mg/kg), pulegone (3.31 mg/kg), and limonene (4.39 mg/kg), were tested in the same rat model. The results showed that menthol and limonene promoted bile secretion to an extent comparable to DSC treatment (p > 0.05); menthol, menthone and limonene significantly decreased the total cholesterol level (p < 0.05 or p < 0.01) as well as increased the total bile acid level (p < 0.05 or p < 0.01); isomenthone, an isomer of menthone, exhibited slight choleretic effects; and pulegone had no obvious role in bile acid efflux. These findings indicate that the choleretic effects of DSC may be attributed mainly to three major constituents: menthol, menthone and limonene. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wahi-Anwar, M. Wasil; Emaminejad, Nastaran; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael F.
2018-02-01
Quantitative imaging in lung cancer CT seeks to characterize nodules through quantitative features, usually from a region of interest delineating the nodule. The segmentation, however, can vary depending on segmentation approach and image quality, which can affect the extracted feature values. In this study, we utilize a fully-automated nodule segmentation method - to avoid reader-influenced inconsistencies - to explore the effects of varied dose levels and reconstruction parameters on segmentation. Raw projection CT images from a low-dose screening patient cohort (N=59) were reconstructed at multiple dose levels (100%, 50%, 25%, 10%), two slice thicknesses (1.0 mm, 0.6 mm), and a medium kernel. Fully-automated nodule detection and segmentation was then applied, from which 12 nodules were selected. The Dice similarity coefficient (DSC) was used to assess the similarity of the segmentation ROIs of the same nodule across different reconstruction and dose conditions. Nodules at 1.0 mm slice thickness and dose levels of 25% and 50% resulted in DSC values greater than 0.85 when compared to 100% dose, with lower dose leading to a lower average and wider spread of DSC values. At 0.6 mm, the increased bias and wider spread of DSC values from lowering dose were more pronounced. The effects of dose reduction on DSC for CAD-segmented nodules were similar in magnitude to reducing the slice thickness from 1.0 mm to 0.6 mm. In conclusion, variation of dose and slice thickness can result in very different segmentations because of noise and image quality. However, there exists some stability in segmentation overlap: even at 1.0 mm, an image at 25% of the low-dose scan's dose still results in segmentations similar to those seen in a full-dose scan.
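The overlap metric itself is simple to state; a minimal implementation of the Dice similarity coefficient, applied here to two hypothetical binary masks rather than real nodule segmentations:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks: 2|A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two hypothetical segmentations of the same 2-D slice: equal-size squares,
# offset by one voxel in each direction.
ref = np.zeros((10, 10), dtype=bool); ref[2:7, 2:7] = True    # 25 voxels
test = np.zeros((10, 10), dtype=bool); test[3:8, 3:8] = True  # 25 voxels, shifted

d = dice(ref, test)   # overlap is 4x4 = 16 voxels, so DSC = 2*16/50 = 0.64
```

DSC ranges from 0 (disjoint) to 1 (identical), so thresholds such as the 0.85 reported above quantify how much a nodule's segmentation drifts across dose and reconstruction conditions.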
Grimmond, Terry; Reiner, Sandra
2012-06-01
Hospitals are striving to reduce their greenhouse gas (GHG) emissions. Targeting supply chain points and replacing disposable with reusable items are among the recommendations to achieve this. Annually, US hospitals use 35 million disposable (DSC) or reusable sharps containers (RSC), generating GHG in their manufacture, use, and disposal. Using a life cycle assessment, we assessed the global warming potential (GWP) of both systems at a large US hospital which replaced DSC with RSC. GHG emissions (CO(2), CH(4), N(2)O) were calculated in metric tons of CO(2) equivalents (MTCO(2)eq). Primary energy input data were used wherever possible, and region-specific conversions were used to calculate the GWP of each activity. Unit process GHGs were collated into manufacture, transport, washing, and treatment and disposal. The DSC were not recycled and had no recycled content. Chemotherapy DSC were used in both systems. Emission totals were workload-normalized per 100 occupied beds per year, and rate ratios were analyzed using Fisher's test with P ≤0.05 and a 95% confidence level. With RSC, the hospital reduced its annual GWP by 127 MTCO(2)eq (-83.5%) and diverted 30.9 tons of plastic and 5.0 tons of cardboard from landfill. Using RSC reduced the number of containers manufactured from 34,396 DSC annually to 1844 RSC in year one only. The study indicates sharps containment GWP in US hospitals totals 100,000 MTCO(2)eq; if RSC were used nationally, the figure could fall by 64,000 MTCO(2)eq, which, whilst only a fraction of total hospital GWP, is a positive, sustainable step.
Lin, Yu; Xing, Zhen; She, Dejun; Yang, Xiefeng; Zheng, Yingyan; Xiao, Zebin; Wang, Xingfu; Cao, Dairong
2017-06-01
Currently, isocitrate dehydrogenase (IDH) mutation and 1p/19q co-deletion are proven diagnostic biomarkers for both grade II and III oligodendrogliomas (ODs). Non-invasive diffusion-weighted imaging (DWI), susceptibility-weighted imaging (SWI), and dynamic susceptibility contrast perfusion-weighted imaging (DSC-PWI) are widely used to provide physiological information (cellularity, hemorrhage, calcifications, and angiogenesis) on neoplastic histology and tumor grade. However, it is unclear whether DWI, SWI, and DSC-PWI are able to stratify grades of IDH-mutant and 1p/19q co-deleted ODs. We retrospectively reviewed the conventional MRI (cMRI), DWI, SWI, and DSC-PWI obtained on 33 patients with IDH-mutated and 1p/19q co-deleted ODs. Features of cMRI, normalized ADC (nADC), intratumoral susceptibility signals (ITSSs), normalized maximum CBV (nCBV), and normalized maximum CBF (nCBF) were compared between low-grade ODs (LGOs) and high-grade ODs (HGOs). Receiver operating characteristic curve analysis and logistic regression were applied to determine diagnostic performance. HGOs tended to present with prominent edema and enhancement. nADC, ITSSs, nCBV, and nCBF were significantly different between groups (all P < 0.05). The combination of SWI and DSC-PWI for grading resulted in sensitivity and specificity of 100.00 and 93.33%, respectively. IDH-mutant and 1p/19q co-deleted ODs can thus be stratified by grade using cMRI and advanced magnetic resonance imaging techniques including DWI, SWI, and DSC-PWI. Combining ITSSs with nCBV appears to be a promising option for grading molecularly defined ODs in clinical practice.
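The ROC analysis used for grading can be illustrated on synthetic marker values (not the study's measurements): the area under the ROC curve equals the normalized Mann-Whitney statistic, and sensitivity/specificity follow from a candidate cutoff:

```python
import numpy as np

# Illustrative perfusion-like marker values for a low-grade and a high-grade
# group; the numbers are synthetic, chosen only to show the computation.
low = np.array([1.1, 1.4, 0.9, 1.6, 1.3, 1.2, 1.0])
high = np.array([2.8, 3.5, 2.2, 4.1, 1.5, 2.9, 3.3])

# AUC = P(marker_high > marker_low), the Mann-Whitney U normalized by n1*n2.
auc = np.mean([(h > l) + 0.5 * (h == l) for h in high for l in low])

thr = 1.55                                 # candidate cutoff
sens = np.mean(high > thr)                 # true-positive rate at this cutoff
spec = np.mean(low <= thr)                 # true-negative rate at this cutoff
```

Sweeping `thr` over all observed values traces out the full ROC curve; the reported 100.00%/93.33% operating point corresponds to one such cutoff on the combined SWI and DSC-PWI features.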
Boxerman, Jerrold L; Ellingson, Benjamin M; Jeyapalan, Suriya; Elinzano, Heinrich; Harris, Robert J; Rogg, Jeffrey M; Pope, Whitney B; Safran, Howard
2017-06-01
For patients with high-grade glioma on clinical trials, it is important to accurately assess the time of disease progression. However, differentiation between pseudoprogression (PsP) and progressive disease (PD) is unreliable with standard magnetic resonance imaging (MRI) techniques. Dynamic susceptibility contrast perfusion MRI (DSC-MRI) can measure relative cerebral blood volume (rCBV) and may help distinguish PsP from PD. A subset of patients with high-grade glioma on a phase II clinical trial with temozolomide, paclitaxel poliglumex, and concurrent radiation was assessed. Nine patients (3 grade III, 6 grade IV), with a total of 19 enhancing lesions demonstrating progressive enhancement (≥25% increase from nadir) on postchemoradiation conventional contrast-enhanced MRI, had serial DSC-MRI. Mean leakage-corrected rCBV within enhancing lesions was computed for all postchemoradiation time points. Of the 19 progressively enhancing lesions, 10 were classified as PsP and 9 as PD by biopsy/surgery or serial enhancement patterns during interval follow-up MRI. Mean rCBV at initial progressive enhancement did not differ significantly between PsP and PD (2.35 vs. 2.17; P=0.67). However, the change in rCBV at first subsequent follow-up (-0.84 vs. 0.84; P=0.001) and the overall linear trend in rCBV after initial progressive enhancement (negative vs. positive slope; P=0.04) differed significantly between PsP and PD. Longitudinal trends in rCBV may be more useful than absolute rCBV in distinguishing PsP from PD in chemoradiation-treated high-grade gliomas with DSC-MRI. Further studies of DSC-MRI in high-grade glioma as a potential technique for distinguishing PsP from PD are indicated.
Dye-sensitized solar cells and complexes between pyridines and iodines. A NMR, IR and DFT study.
Hansen, Poul Erik; Nguyen, Phuong Tuyet; Krake, Jacob; Spanget-Larsen, Jens; Lund, Torben
2012-12-01
Interactions between triiodide (I(3)(-)) and 4-tert-butylpyridine (4TBP) as postulated in dye-sensitized solar cells (DSC) are investigated by means of (13)C NMR and IR spectroscopy supported by DFT calculations. The charge transfer (CT) complex 4TBP·I(2) and potential salts such as (4TBP)(2)I(+), I(3)(-) were synthesized and characterized by IR and (13)C NMR spectroscopy. However, mixing (butyl)(4)N(+), I(3)(-) and 4TBP at concentrations comparable to those of the DSC solar cell did not lead to any reaction. Neither CT complexes nor cationic species like (4TBP)(2)I(+) were observed, judging from the (13)C NMR spectroscopic evidence. This questions the previously proposed formation of (4TBP)(2)I(+) in DSC cells. Copyright © 2012 Elsevier B.V. All rights reserved.
Thermal behaviour and microanalysis of subbituminous coal
NASA Astrophysics Data System (ADS)
Heriyanti; Prendika, W.; Ashyar, R.; Sutrisno
2018-04-01
Differential scanning calorimetry (DSC) and X-ray powder diffraction (XRD) are used to study the thermal behaviour of sub-bituminous coal. The DSC experiment was performed in an air atmosphere up to 125 °C at a heating rate of 25 °C min⁻¹. The DSC curves showed distinct transitional stages in the coal samples studied. Thermal heating temperature intervals, peak temperatures and dissociation energies of the coal samples were also determined. XRD analysis was used to evaluate the diffraction pattern and crystal structure of the compounds in the coal sample at various temperatures (25-350 °C). The XRD analysis at the various temperatures showed that the compounds in the coal sample are dominated by quartz (SiO2) and corundum (Al2O3). Increasing the thermal treatment temperature led to better crystal formation.
Speranza, V.; Sorrentino, A.; De Santis, F.; Pantani, R.
2014-01-01
The first stages of the crystallization of polycaprolactone (PCL) were studied using several techniques. The crystallization exotherms measured by differential scanning calorimetry (DSC) were analyzed and compared with results obtained by polarized optical microscopy (POM), rheology, and atomic force microscopy (AFM). The experimental results suggest a strong influence of the observation scale. In particular, AFM, even if limited in time scale, appears to be the most sensitive technique for detecting the first stages of crystallization. By contrast, at least in the case analysed in this work, rheology appears to be the least sensitive technique. DSC and POM provide closer results. This suggests that the definition of induction time in polymer crystallization is a vague concept that, in any case, requires specification of the technique used for its characterization. PMID:24523644
NASA Technical Reports Server (NTRS)
Musselwhite, D. S.; Boynton, W. V.; Ming, Douglas W.; Quadlander, G.; Kerry, K. E.; Bode, R. C.; Bailey, S. H.; Ward, M. G.; Pathare, A. V.; Lorenz, R. D.
2000-01-01
Differential Scanning Calorimetry (DSC) combined with evolved gas analysis (EGA) is a well-developed technique for the analysis of a wide variety of sample types, with broad application in material and soil sciences. However, the use of the technique for samples under conditions of pressure and temperature as found on other planets is an area of current development and cutting-edge research. The Thermal Evolved Gas Analyzer (TEGA), which was designed, built and tested at the University of Arizona's Lunar and Planetary Lab (LPL), utilizes DSC/EGA. TEGA, which was sent to Mars on the ill-fated Mars Polar Lander, was to be the first application of DSC/EGA on the surface of Mars, as well as the first direct measurement of the volatile-bearing mineralogy in Martian soil.
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2011-01-01
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach.
Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913
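A one-dimensional caricature of Demons registration with per-iteration intensity matching, assuming synthetic Gaussian "CT"/"CBCT" profiles and a single global linear intensity map in place of the paper's tissue-specific correction:

```python
import numpy as np

x = np.arange(200, dtype=float)
fixed = np.exp(-((x - 90.0) / 12.0) ** 2)                  # "CT" profile
moving = 2.0 * np.exp(-((x - 97.0) / 12.0) ** 2) + 0.3     # "CBCT": shifted, rescaled

def intensity_match(w, f):
    """Refit a linear moving->fixed intensity map from currently overlapping samples."""
    a, b = np.polyfit(w, f, 1)
    return a * w + b

u = np.zeros_like(x)                                       # displacement field
grad_f = np.gradient(fixed)
err0 = np.mean(np.abs(intensity_match(np.interp(x + u, x, moving), fixed) - fixed))
for _ in range(300):                                       # Demons iterations
    warped = np.interp(x + u, x, moving)                   # warp moving profile
    w = intensity_match(warped, fixed)                     # iterative intensity matching
    diff = w - fixed
    u -= diff * grad_f / (grad_f**2 + diff**2 + 1e-12)     # Demons force
    u = np.convolve(u, np.ones(9) / 9.0, mode="same")      # smooth (regularize) field
err = np.mean(np.abs(intensity_match(np.interp(x + u, x, moving), fixed) - fixed))
```

Because the intensity map is refit from the current overlap at every iteration, the registration is not misled by the CBCT-like offset and scaling, which is the essential point of the method; the paper's version estimates separate corrections per tissue class in 3D.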
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali
2011-04-15
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (''intensity''). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specificmore » intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. 
Within the six-case registration accuracy study, iterative intensity matching Demons reduced the mean target registration error (TRE) to (2.5 ± 2.8) mm, compared to (3.5 ± 3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
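The per-iteration intensity correction described above, estimating the CT-to-CBCT intensity relationship from the currently overlapping voxels, can be sketched as a simple least-squares fit. This is an illustrative reconstruction, not the authors' implementation; the linear model, the HU ranges, and the noise level are all assumptions for the sake of the example.

```python
import numpy as np

def fit_intensity_map(ct_vals, cbct_vals):
    """Least-squares linear map cbct ≈ a*ct + b, estimated from the
    currently overlapping voxel pairs (one tissue class at a time)."""
    A = np.column_stack([ct_vals, np.ones_like(ct_vals)])
    (a, b), *_ = np.linalg.lstsq(A, cbct_vals, rcond=None)
    return a, b

# Synthetic "overlap": CBCT values are a scaled, offset, noisy copy of CT
rng = np.random.default_rng(0)
ct = rng.uniform(-1000.0, 2000.0, size=5000)          # HU-like CT values
cbct = 0.8 * ct - 150.0 + rng.normal(0.0, 5.0, ct.size)
a, b = fit_intensity_map(ct, cbct)
ct_corrected = a * ct + b     # CT mapped into the CBCT intensity range
```

In the iterative scheme of the abstract, such a fit would be recomputed after every registration update, so that the intensity correction and the deformation field improve together.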
Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions
Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.
2010-01-01
Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global-minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and reduced the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations.
The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256
Synthesis of Hydrophobic, Crosslinkable Resins.
1985-12-01
product by methanol precipitation the majority of the first oligomer was lost. 4.14 DIFFERENTIAL SCANNING CALORIMETRY. The DSC trace of a typical...polymer from the DSC traces obtained to date. Preliminary studies using an automated torsional pendulum indicate that the Tg of the crosslinked polymer is...enabling water to be used in the purification steps. The diethyl phosphonates are readily prepared by heating triethyl phosphite with the chloromethyl
Stavrou, E; Tsiantos, C; Tsopouridou, R D; Kripotou, S; Kontos, A G; Raptis, C; Capoen, B; Bouazaoui, M; Turrell, S; Khatir, S
2010-05-19
Raman scattering and differential scanning calorimetry (DSC) measurements have been carried out on four mixed tellurium-zinc oxide (TeO(2))(1 - x)(ZnO)(x) (x = 0.1, 0.2, 0.3, 0.4) glasses under variable temperature, with particular attention given to the glass transition region. From the DSC measurements, the glass transition temperature T(g) has been determined for each glass, showing a monotonic decrease of T(g) with increasing ZnO content. The Raman study focuses on the low-frequency band of the glasses, the so-called boson peak (BP), whose frequency undergoes an abrupt decrease at a temperature T(d) very close to the respective T(g) value obtained by DSC. These results show that the BP is highly sensitive to dynamical effects across the glass transition and provides a determination of T(g) in tellurite glasses, and other network glasses, as reliable as that from DSC. The discontinuous temperature dependence of the BP frequency at the glass transition, together with the absence of such behaviour in the high-frequency Raman bands (due to local atomic vibrations), indicates that marked changes of the medium-range order (MRO) occur at T(g) and confirms the correlation between the BP and the MRO of glasses.
NASA Astrophysics Data System (ADS)
Galkin, Sergei A.; Bogatu, I. N.; Svidzinski, V. A.
2015-11-01
A novel project to develop the Disruption Prediction And Simulation Suite (DPASS), a set of comprehensive computational tools to predict, model, and analyze disruption events in tokamaks, has recently been started at FAR-TECH Inc. DPASS will eventually address the following aspects of the disruption problem: MHD, plasma edge dynamics, plasma-wall interaction, and the generation and losses of runaway electrons. DPASS uses the 3D Disruption Simulation Code (DSC-3D) as a core tool and will have a modular structure. DSC is a one-fluid, nonlinear, time-dependent 3D MHD code that simulates the dynamics of a tokamak plasma surrounded by a pure vacuum B-field in the real geometry of a conducting tokamak vessel. DSC utilizes an adaptive meshless technique with adaptation to the moving plasma boundary, with accurate magnetic flux conservation and resolution of the plasma surface current. DSC also has an option to neglect the plasma inertia in order to eliminate the fast magnetosonic time scale; this option can be turned on/off as needed. During Phase I of the project, two modules will be developed: a computational module for modeling massive gas injection and the main plasma response, and a module for nanoparticle plasma jet injection as an innovative disruption mitigation scheme. We will report on the progress of this development. Work is supported by the US DOE SBIR grant # DE-SC0013727.
NASA Astrophysics Data System (ADS)
Quarles, C. C.; Gochberg, D. F.; Gore, J. C.; Yankeelov, T. E.
2009-10-01
Dynamic susceptibility contrast (DSC) MRI methods rely on compartmentalization of the contrast agent such that a susceptibility gradient can be induced between the contrast-containing compartment and adjacent spaces, such as between intravascular and extravascular spaces. When there is a disruption of the blood-brain barrier, as is frequently the case with brain tumors, a contrast agent leaks out of the vasculature, resulting in additional T1, T2 and T*2 relaxation effects in the extravascular space, thereby affecting the signal intensity time course and reducing the reliability of the computed hemodynamic parameters. In this study, a theoretical model describing these dynamic intra- and extravascular T1, T2 and T*2 relaxation interactions is proposed. The applicability of using the proposed model to investigate the influence of relevant MRI pulse sequences (e.g. echo time, flip angle), and physical (e.g. susceptibility calibration factors, pre-contrast relaxation rates) and physiological parameters (e.g. permeability, blood flow, compartmental volume fractions) on DSC-MRI signal time curves is demonstrated. Such a model could yield important insights into the biophysical basis of contrast-agent-extravasation-induced effects on measured DSC-MRI signals and provide a means to investigate pulse sequence optimization and appropriate data analysis methods for the extraction of physiologically relevant imaging metrics.
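The baseline relation that leakage effects corrupt, and which the proposed model extends, converts a single-echo gradient-echo signal into a relaxation-rate change via ΔR2*(t) = -(1/TE)·ln(S(t)/S0). The sketch below applies it to a synthetic, leakage-free bolus; the echo time, calibration factor and bolus shape are arbitrary assumptions, not values from the paper.

```python
import numpy as np

TE = 0.030        # echo time in seconds (assumed)
r2star = 5.0      # susceptibility calibration factor in s^-1 (assumed)
S0 = 1000.0       # pre-contrast baseline signal (arbitrary units)

t = np.linspace(0.0, 60.0, 121)
# Synthetic gamma-variate-like bolus for the tissue contrast concentration
conc = np.where(t > 5.0, (t - 5.0) ** 3 * np.exp(-(t - 5.0) / 4.0), 0.0)
conc /= conc.max()

S = S0 * np.exp(-TE * r2star * conc)     # pure T2* model, no leakage
delta_R2s = -np.log(S / S0) / TE         # standard single-echo estimate
```

In this ideal case ΔR2*(t) recovers the concentration curve exactly; the point of the paper's model is that extravasation adds competing T1 and T2 effects that break this simple proportionality.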
Toward a Psychology of Social Change: A Typology of Social Change
de la Sablonnière, Roxane
2017-01-01
Millions of people worldwide are affected by dramatic social change (DSC). While sociological theory aims to understand its precipitants, the psychological consequences remain poorly understood. A large-scale literature review pointed to the desperate need for a typology of social change that might guide theory and research toward a better understanding of the psychology of social change. Over 5,000 abstracts from peer-reviewed articles were assessed from sociological and psychological publications. Based on stringent inclusion criteria, a final 325 articles were used to construct a novel, multi-level typology designed to conceptualize and categorize social change in terms of its psychological threat to psychological well-being. The typology of social change includes four social contexts: Stability, Inertia, Incremental Social Change and, finally, DSC. Four characteristics of DSC were further identified: the pace of social change, rupture to the social structure, rupture to the normative structure, and the level of threat to one's cultural identity. A theoretical model that links the characteristics of social change together and with the social contexts is also suggested. The typology of social change as well as our theoretical proposition may serve as a foundation for future investigations and increase our understanding of the psychologically adaptive mechanisms used in the wake of DSC. PMID:28400739
Castro-Rosas, Javier; Gómez-Aldapa, Carlos Alberto; Villagómez Ibarra, José Roberto; Santos-López, Eva María; Rangel-Vargas, Esmeralda
2017-10-16
Several reports have suggested that the viable but non-culturable (VBNC) state is a resistant form of bacterial cells that allows them to remain in a dormant form in the environment. Nevertheless, studies on the resistance of VBNC bacterial cells to ecological factors are limited, mainly because techniques that allow this type of evaluation are lacking. Differential scanning calorimetry (DSC) has been used to study the thermal resistance of culturable bacteria but has never been used to study VBNC cells. In this work, the heat resistance of Escherichia coli cells in the VBNC state was studied using the DSC technique. The VBNC state was induced in E. coli ATCC 25922 by suspending bacterial cells in artificial sea water, followed by storage at 3 ± 2°C for 110 days. Periodically, the behaviour of E. coli cells was monitored by plate counts, direct viable counts and DSC. The entire bacterial population entered the VBNC state after 110 days of storage. The results obtained with DSC suggest that the VBNC state does not confer thermal resistance to E. coli cells in the temperature range analysed here.
Application Guide for Heat Recovery Incinerators.
1986-02-01
of the absorption cycle to vaporize the refrigerant, typically aqueous ammonia. The refrigerant then follows the typical refrigeration cycle...this third level of iteration, the information gathered in level II should be updated if necessary and verified. Use the NCEL survey method (see...and quantity of the solid waste can be determined by applying procedures set forth in Appendix B. For level III, NCEL has developed a survey method
Tension Cutoff and Parameter Identification for the Viscoplastic Cap Model.
1983-04-01
computer program "VPDRVR" which employs a Crank-Nicolson time integration scheme and a Newton-Raphson iterative solution procedure. Numerical studies were...parameters was illustrated for triaxial stress and uniaxial strain loading for a well-studied sand material (McCormick Ranch Sand). Lastly, a finite element...viscoplastic tension-cutoff criterion and to establish parameter identification techniques with experimental data. Herein lies the impetus of this study
Learning to read aloud: A neural network approach using sparse distributed memory
NASA Technical Reports Server (NTRS)
Joglekar, Umesh Dwarkanath
1989-01-01
An attempt to solve the problem of text-to-phoneme mapping, which does not appear amenable to solution by standard algorithmic procedures, is described. Experiments based on a model of distributed processing are also described. This model, sparse distributed memory (SDM), can be used in an iterative supervised learning mode to solve the problem. Additional improvements aimed at obtaining better performance are suggested.
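A minimal autoassociative sketch of Kanerva-style sparse distributed memory, the model named above, might look as follows. The address length, number of hard locations, activation radius and read threshold are illustrative choices, not those of the cited work.

```python
import numpy as np

rng = np.random.default_rng(42)
N, M, RADIUS = 256, 2000, 118   # address bits, hard locations, Hamming radius

hard_addresses = rng.integers(0, 2, size=(M, N))  # fixed random addresses
counters = np.zeros((M, N), dtype=int)            # one counter per bit per location

def activated(address):
    """Boolean mask of hard locations within RADIUS of the given address."""
    dist = np.count_nonzero(hard_addresses != address, axis=1)
    return dist <= RADIUS

def write(address, data):
    # Increment counters for 1-bits, decrement for 0-bits,
    # at every activated location (distributed storage).
    counters[activated(address)] += 2 * data - 1

def read(address):
    # Sum counters over activated locations and threshold at zero.
    sums = counters[activated(address)].sum(axis=0)
    return (sums > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)          # autoassociative store
recalled = read(pattern)
```

Because many locations participate in each write and read, recall also tolerates a moderate number of flipped address bits, which is the property that makes SDM attractive for noisy mappings such as text-to-phoneme conversion.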
NASA Astrophysics Data System (ADS)
He, Jiangang; Franchini, Cesare
2017-11-01
In this paper we assess the predictive power of the self-consistent hybrid functional scPBE0 in calculating the band gap of oxide semiconductors. The computational procedure is based on the self-consistent evaluation of the mixing parameter α by means of an iterative calculation of the static dielectric constant, using the perturbation expansion after discretization method and making use of the relation α = 1/ε∞.
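The self-consistency loop described above reduces to a fixed-point iteration on the mixing parameter, α_{n+1} = 1/ε∞(α_n), where each evaluation of ε∞ would in practice be a full dielectric-constant calculation. A toy sketch, with an entirely hypothetical model for how ε∞ depends on α:

```python
def self_consistent_alpha(eps_inf, alpha0=0.25, tol=1e-10, max_iter=200):
    """Fixed-point iteration alpha_{n+1} = 1 / eps_inf(alpha_n)."""
    alpha = alpha0
    for _ in range(max_iter):
        alpha_new = 1.0 / eps_inf(alpha)
        if abs(alpha_new - alpha) < tol:
            return alpha_new
        alpha = alpha_new
    raise RuntimeError("self-consistency loop did not converge")

# Hypothetical model of how the static dielectric constant might
# vary with the fraction of exact exchange (for illustration only):
model_eps = lambda a: 4.0 + 2.0 * a
alpha = self_consistent_alpha(model_eps)
```

Since ε∞ depends only weakly on α, the map is a contraction and the loop converges in a handful of iterations, which is why this scheme is computationally practical.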
Unsteady Flow Simulation: A Numerical Challenge
2003-03-01
drive the numerical unsteady term to convergence. The time-marching procedure is based on the approximate implicit Newton method for systems of non...computed through analytical derivatives of S. The linear system stemming from equation (3) is solved at each integration step by the same iterative method...significant reduction of memory usage, thanks to the reduced dimensions of the linear-system matrix during the implicit marching of the solution. The
Finite element analysis of wrinkling membranes
NASA Technical Reports Server (NTRS)
Miller, R. K.; Hedgepeth, J. M.; Weingarten, V. I.; Das, P.; Kahyai, S.
1984-01-01
The development of a nonlinear numerical algorithm for the analysis of stresses and displacements in partly wrinkled flat membranes, and its implementation on the SAP VII finite-element code are described. A comparison of numerical results with exact solutions of two benchmark problems reveals excellent agreement, with good convergence of the required iterative procedure. An exact solution of a problem involving axisymmetric deformations of a partly wrinkled shallow curved membrane is also reported.
Complex wet-environments in electronic-structure calculations
NASA Astrophysics Data System (ADS)
Fisicaro, Giuseppe; Genovese, Luigi; Andreussi, Oliviero; Marzari, Nicola; Goedecker, Stefan
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, including the complex electrostatic screening coming from the solvent. In the present work we present a solver that handles both the Generalized Poisson and the Poisson-Boltzmann equation. A preconditioned conjugate gradient (PCG) method has been implemented for the Generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively in about ten iterations; the full Poisson-Boltzmann problem is solved with a self-consistent procedure. The algorithms take advantage of a preconditioning procedure based on the BigDFT Poisson solver for the standard Poisson equation. They exhibit very high accuracy and parallel efficiency, and allow different boundary conditions, including surfaces. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. We present test calculations for large proteins to demonstrate efficiency and performance. This work was done within the PASC and NCCR MARVEL projects. Computer resources were provided by the Swiss National Supercomputing Centre (CSCS) under Project ID s499. LG also acknowledges support from the EXTMOS EU project.
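A generic preconditioned conjugate gradient loop of the kind described can be sketched as follows, with a plain Jacobi (diagonal) preconditioner standing in for the BigDFT-based one, applied to a 1D discrete Poisson problem; the discretization and problem size are illustrative assumptions.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive-definite
    A; M_inv applies the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D discrete Poisson problem (3-point stencil, Dirichlet boundaries)
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
diag = np.diag(A)
x = pcg(A, b, lambda r: r / diag)
```

The quality of `M_inv` governs the iteration count; the abstract's claim of roughly ten iterations reflects a much stronger, physics-aware preconditioner than the Jacobi stand-in used here.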
Identification of different geologic units using fuzzy constrained resistivity tomography
NASA Astrophysics Data System (ADS)
Singh, Anand; Sharma, S. P.
2018-01-01
Different geophysical inversion strategies are utilized as components of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and geologic separation within the iterative minimization through inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning the highest probability value of each model cell to a cluster using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the extent of uranium mineralization in the Beldih open cast mine as a case study. We also compared geologic units identified by fuzzy constrained resistivity tomography with geologic units interpreted from borehole information.
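The fuzzy c-means step used to regularize the resistivity model can be sketched generically. The updates below are the standard FCM equations, applied here to scalar log-resistivities rather than the authors' 2D model cells; the resistivity values and the two-unit setup are hypothetical.

```python
import numpy as np

def fuzzy_c_means(x, centres, m=2.0, n_iter=100):
    """Standard fuzzy c-means updates on scalar samples x, starting from
    the given initial cluster centres; returns centres and memberships."""
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centres[:, None]) + 1e-12   # avoid /0
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)            # membership update
        Um = U ** m
        centres = (Um @ x) / Um.sum(axis=1)  # fuzzy-weighted centroids
    return centres, U

# Hypothetical resistivities (ohm-m) from two geologic units
rho = np.array([12.0, 10.0, 11.0, 9.0, 250.0, 240.0, 260.0, 255.0])
x = np.log10(rho)
centres, U = fuzzy_c_means(x, np.array([x.min(), x.max()]))
labels = U.argmax(axis=0)   # crisp unit assignment per model cell
```

In the constrained tomography described above, the membership matrix U plays the role of the per-cell cluster probabilities that steer the next inversion iteration toward geologically coherent units.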
Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves
Xia, J.; Miller, R.D.; Park, C.B.
1999-01-01
The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. The Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to earth properties. S-wave velocities are the dominant influence on a dispersion curve in the high-frequency range (>5 Hz), followed by layer thickness. Iterative solutions to the weighted equation by the Levenberg-Marquardt and singular-value decomposition techniques are derived to estimate near-surface shear-wave velocity; this solution proved very effective in the high-frequency range, and convergence is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Synthetic and real examples demonstrate the calculation efficiency and stability of the inverse procedure. The inverse results of the real example are verified by borehole S-wave velocity measurements.
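A minimal Levenberg-Marquardt loop of the kind described, damped normal equations with an adaptive damping factor, can be sketched on a toy problem. The exponential model below is only a stand-in for the dispersion-curve misfit; the data, Jacobian and damping schedule are illustrative assumptions.

```python
import numpy as np

def levenberg_marquardt(residual, sens, p0, lam=1e-2, n_iter=100):
    """Minimal LM loop: solve (J^T J + lam*I) dp = J^T r, where J is the
    model sensitivity matrix and r the data residual, adapting lam."""
    p = np.asarray(p0, float)
    cost = np.sum(residual(p) ** 2)
    for _ in range(n_iter):
        r, J = residual(p), sens(p)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        p_new = p + dp
        cost_new = np.sum(residual(p_new) ** 2)
        if cost_new < cost:          # accept step, relax damping
            p, cost, lam = p_new, cost_new, lam * 0.5
        else:                        # reject step, increase damping
            lam *= 2.0
    return p

# Toy inversion: fit y = a*exp(b*x), standing in for a dispersion misfit
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * x)                     # "observed" data: a=2, b=0.5
residual = lambda p: y - p[0] * np.exp(p[1] * x)
sens = lambda p: np.column_stack([np.exp(p[1] * x),
                                  p[0] * x * np.exp(p[1] * x)])
p = levenberg_marquardt(residual, sens, [1.0, 0.0])
```

The damping factor lam is what guarantees convergence in the abstract's sense: large lam yields small, gradient-like steps when the linearization is poor, while small lam recovers fast Gauss-Newton steps near the solution.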