NASA Astrophysics Data System (ADS)
Swami, H. L.; Danani, C.; Shaw, A. K.
2018-06-01
Activation analyses play a vital role in nuclear reactor design. Activation analyses, along with nuclear analyses, provide important information for nuclear safety and maintenance strategies. Activation analyses also help in the selection of materials for a nuclear reactor, by providing the radioactivity and dose rate levels after irradiation. This information is important to help define maintenance activity for different parts of the reactor, and to plan decommissioning and radioactive waste disposal strategies. The study of activation analyses of candidate structural materials for near-term fusion reactors or ITER is equally essential, due to the presence of a high-energy neutron environment which makes decisive demands on material selection. This study comprises two parts; in the first part the activation characteristics, in a fusion radiation environment, of several elements which are widely present in structural materials, are studied. It reveals that the presence of a few specific elements in a material can diminish its feasibility for use in the nuclear environment. The second part of the study concentrates on activation analyses of candidate structural materials for near-term fusion reactors and their comparison in fusion radiation conditions. The structural materials selected for this study, i.e. India-specific Reduced Activation Ferritic‑Martensitic steel (IN-RAFMS), P91-grade steel, stainless steel 316LN ITER-grade (SS-316LN-IG), stainless steel 316L and stainless steel 304, are candidates for use in ITER either in vessel components or test blanket systems. Tungsten is also included in this study because of its use for ITER plasma-facing components. The study is carried out using the reference parameters of the ITER fusion reactor. The activation characteristics of the materials are assessed considering the irradiation at an ITER equatorial port. 
The presence of elements such as Nb, Mo, Co and Ta in a structural material enhances both the activity level and the dose rate level, which has an impact on design considerations. IN-RAFMS was shown to be a more effective low-activation material than SS-316LN-IG.
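The quantity at the heart of such activation analyses, the activity of an activation product after irradiation and cooling, can be sketched for a single reaction channel. This is a minimal illustration only; a study like the one above would use an inventory code with full cross-section libraries, and the numbers below are hypothetical placeholders.

```python
import math

def activity_bq(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s, t_cool_s):
    """Saturation-and-decay activity for a single activation channel (Bq)."""
    lam = math.log(2) / half_life_s
    production = n_atoms * sigma_cm2 * flux      # reactions per second
    saturation = 1.0 - math.exp(-lam * t_irr_s)  # build-up during irradiation
    decay = math.exp(-lam * t_cool_s)            # decay after shutdown
    return production * saturation * decay

# Illustrative (hypothetical) inputs: thermal capture on 1 g of cobalt
n_co = 1.0 / 58.93 * 6.022e23                    # atoms of Co-59
a = activity_bq(n_co, 37e-24, 1e14, 5.27 * 3.156e7, 3.156e7, 0.0)
```

After one additional half-life of cooling, the same call with a larger `t_cool_s` returns exactly half the activity, which is a quick sanity check on the decay term.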
NASA Astrophysics Data System (ADS)
1993-08-01
The Committee's evaluation of vanadium alloys as a structural material for fusion reactors was constrained by limited data and time. The design of the International Thermonuclear Experimental Reactor is still in the concept stage, so meaningful design requirements were not available. The data on the effects of environment and irradiation on vanadium alloys were sparse, and interpolation of these data was made to select the V-5Cr-5Ti alloy. With an aggressive, fully funded program it is possible to qualify a vanadium alloy as the principal structural material for the ITER blanket in the available 5 to 8-year window. However, the database for V-5Cr-5Ti is limited and will require an extensive development and test program. Because of the chemical reactivity of vanadium, the alloy will be less tolerant of system failures, accidents, and off-normal events than most other candidate blanket structural materials and will require more careful handling during fabrication of hardware. Because of the cost of the material, the more stringent requirements on processes, and the minimal historical working experience, qualifying a vanadium alloy for ITER blanket structures will cost an order of magnitude more than qualifying other candidate materials. The use of vanadium is difficult and uncertain; therefore, other options should be explored more thoroughly before a final selection of vanadium is confirmed. The Committee views the risk as being too high to rely solely on vanadium alloys. In viewing the state and nature of the design of the ITER blanket as presented to the Committee, it is obvious that there is a need to move toward integrating fabrication, welding, and materials engineers into the ITER design team. If the vanadium alloy option is to be pursued, a large program needs to be started immediately. The commitment of funding and other resources needs to be firm and consistent with a realistic program plan.
Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization
2012-01-01
Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104
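The select-"synthesize"-refine loop described above can be sketched generically. In this sketch a toy one-descriptor linear model stands in for the binding pocket model, and a min-distance score stands in for the three-dimensional novelty measure; the pool format and all function names are hypothetical, not the paper's implementation.

```python
def fit_linear(trained):
    # Toy stand-in for model construction: 1-D least-squares line.
    xs = [x for x, _ in trained.values()]
    ys = [a for _, a in trained.values()]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    var = sum((x - mx) ** 2 for x in xs) or 1.0
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return slope, my - slope * mx

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

def novelty(x, trained):
    # Larger = farther from every molecule already "synthesized".
    return min(abs(x - xi) for xi, _ in trained.values())

def iterative_selection(pool, rounds=3, k=5):
    """pool: name -> (descriptor, activity); activity is revealed on selection."""
    trained = dict(list(pool.items())[:k])               # small starting set
    remaining = {n: v for n, v in pool.items() if n not in trained}
    for _ in range(rounds):
        model = fit_linear(trained)
        # k confident high-activity picks ...
        for n in sorted(remaining, key=lambda n: predict(model, remaining[n][0]),
                        reverse=True)[:k]:
            trained[n] = remaining.pop(n)
        # ... plus k explicitly novel picks, then refit on the new data
        for n in sorted(remaining, key=lambda n: novelty(remaining[n][0], trained),
                        reverse=True)[:k]:
            trained[n] = remaining.pop(n)
    return trained
```

The split between confident picks and novelty picks mirrors the paper's five-plus-five selection per round; dropping the novelty picks reduces the loop to pure greedy exploitation.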
NASA Astrophysics Data System (ADS)
Tsuru, Daigo; Tanigawa, Hisashi; Hirose, Takanori; Mohri, Kensuke; Seki, Yohji; Enoeda, Mikio; Ezato, Koichiro; Suzuki, Satoshi; Nishi, Hiroshi; Akiba, Masato
2009-06-01
As the primary candidate ITER Test Blanket Module (TBM) to be tested under the leadership of Japan, a water cooled solid breeder (WCSB) TBM is being developed. This paper shows the recent achievements towards the milestones of ITER TBMs prior to installation, which consist of design integration in ITER, module qualification and safety assessment. With respect to design integration, targeting the detailed design final report in 2012, the structural designs of the WCSB TBM and the interfacing components (common frame and backside shielding) that are placed in a test port of ITER, and the layout of the cooling system, are presented. As for module qualification, a real-scale first wall mock-up, fabricated by hot isostatic pressing from the reduced activation ferritic/martensitic steel F82H as the structural material, and flow and irradiation tests of the mock-up are presented. As for the safety milestones, the contents of the preliminary safety report in 2008, consisting of source term identification, failure mode and effect analysis (FMEA), identification of postulated initiating events (PIEs) and safety analyses, are presented.
NASA Astrophysics Data System (ADS)
Akiba, Masato; Matsui, Hideki; Takatsu, Hideyuki; Konishi, Satoshi
Technical issues regarding the fusion power plant that are required to be developed in the period of ITER construction and operation, both with ITER and with other facilities that complement ITER, are described in this section. Three major fields are considered to be important in fusion technology. Section 4.1 summarizes the blanket study and ITER Test Blanket Module (TBM) development, which focuses its effort on the first generation power blanket to be installed in DEMO. ITER will be equipped with 6 TBMs, which are developed under each party's fusion program. In Japan, the solid breeder using water as a coolant is the primary candidate, and the He-cooled pebble bed is the alternative. Other liquid options such as LiPb, Li or molten salt are developed by other parties' initiatives. The Test Blanket Working Group (TBWG) is coordinating these efforts. Japanese universities are investigating advanced concepts and fundamental crosscutting technologies. Section 4.2 introduces material development and, particularly, the international irradiation facility, IFMIF. Reduced activation ferritic/martensitic steels are identified as promising candidates for the structural material of the first generation fusion blanket, while vanadium alloys and SiC/SiC composites are pursued as advanced options. The IFMIF is currently planning the next phase of joint activity, EVEDA (Engineering Validation and Engineering Design Activity), which encompasses construction. Material studies, together with the ITER TBM, will provide essential technical information for development of the fusion power plant. Other technical issues to be addressed regarding the first generation fusion power plant are summarized in section 4.3. Development of components for ITER has made remarkable progress on the major essential technologies also necessary for future fusion plants; however, many still need further improvement toward the power plant.
Such areas include the divertor, plasma heating/current drive, magnets, tritium, and remote handling. Many other technical issues for the power plant remain, and they require integrated efforts.
Melt damage simulation of W-macrobrush and divertor gaps after multiple transient events in ITER
NASA Astrophysics Data System (ADS)
Bazylev, B. N.; Janeschitz, G.; Landman, I. S.; Loarte, A.; Pestchanyi, S. E.
2007-06-01
Tungsten in the form of macrobrush structure is foreseen as one of two candidate materials for the ITER divertor and dome. In ITER, even for moderate and weak ELMs when a thin shielding layer does not protect the armour surface from the dumped plasma, the main mechanisms of metallic target damage remain surface melting and melt motion erosion, which determines the lifetime of the plasma facing components. The melt erosion of W-macrobrush targets with different geometry of brush surface under the heat loads caused by weak ELMs is numerically investigated using the modified code MEMOS. The optimal angle of brush surface inclination that provides a minimum of surface roughness is estimated for given inclination angles of impacting plasma stream and given parameters of the macrobrush target. For multiple disruptions the damage of the dome gaps and the gaps between divertor cassettes caused by the radiation impact is estimated.
NASA Astrophysics Data System (ADS)
Smyth, R. T.; Ballance, C. P.; Ramsbottom, C. A.; Johnson, C. A.; Ennis, D. A.; Loch, S. D.
2018-05-01
Neutral tungsten is the primary candidate as a wall material in the divertor region of the International Thermonuclear Experimental Reactor (ITER). The efficient operation of ITER depends heavily on precise atomic physics calculations for the determination of reliable erosion diagnostics, helping to characterize the influx of tungsten impurities into the core plasma. The following paper presents detailed calculations of the atomic structure of neutral tungsten using the multiconfigurational Dirac-Fock method, drawing comparisons with experimental measurements where available, and includes a critical assessment of existing atomic structure data. We investigate the electron-impact excitation of neutral tungsten using the Dirac R-matrix method, and by employing collisional-radiative models, we benchmark our results with recent Compact Toroidal Hybrid measurements. The resulting comparisons highlight alternative diagnostic lines to the widely used 400.88-nm line.
Technical Issues for the Fabrication of a CN-HCCB-TBM Based on RAFM Steel CLF-1
NASA Astrophysics Data System (ADS)
Wang, Pinghuai; Chen, Jiming; Fu, Haiying; Liu, Shi; Li, Xiongwei; Xu, Zengyu
2013-02-01
Reduced activation ferritic/martensitic (RAFM) steel is recognized as the primary candidate structural material for ITER's test blanket module (TBM). To provide a material and property database for the design and fabrication of the Chinese helium cooled ceramic breeding TBM (CN HCCB TBM), a type of RAFM steel named CLF-1 was developed and characterized at the Southwestern Institute of Physics (SWIP), China. In this paper, the R&D status of CLF-1 steel and the technical issues in using CLF-1 steel to manufacture the CN HCCB TBM are reviewed, including steel manufacture and different welding technologies. Several kinds of property data have been obtained for application to the design of the ITER TBM.
Irradiation tests of ITER candidate Hall sensors using two types of neutron spectra.
Ďuran, I; Bolshakova, I; Viererbl, L; Sentkerestiová, J; Holyaka, R; Lahodová, Z; Bém, P
2010-10-01
We report on irradiation tests of InSb-based Hall sensors at two irradiation facilities with two distinct types of neutron spectra. One was a fission reactor neutron spectrum with a significant presence of thermal neutrons, while the other was a purely fast neutron field. A total neutron fluence of the order of 10^16 cm^-2 was accumulated in both cases, leading to a significant drop of Hall sensor sensitivity in the case of the fission reactor spectrum, while stable performance was observed in the purely fast neutron field. This finding suggests that the performance of this particular type of Hall sensor is governed dominantly by transmutation. Additionally, it further stresses the need to test ITER candidate Hall sensors under a neutron flux with an ITER-relevant spectrum.
Accelerated iterative beam angle selection in IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan
2016-03-15
Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time-consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest selecting candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient-based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude.
With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
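The early-stopping surrogate idea can be sketched in a few lines: score each candidate beam by running only a handful of projected-gradient steps on the FMO problem rather than solving it to convergence. The quadratic least-squares objective and the matrix shapes below are simplifications for illustration; clinical FMO objectives include additional dose-volume terms.

```python
import numpy as np

def fmo_surrogate(D_list, target, iters=5):
    """Surrogate FMO objective: a few projected-gradient steps on
    min_w ||D w - target||^2 with w >= 0 (early stopping)."""
    D = np.hstack(D_list)                       # dose-influence columns of the ensemble
    w = np.zeros(D.shape[1])
    step = 0.5 / (np.linalg.norm(D, 2) ** 2 + 1e-12)  # safe step from spectral norm
    for _ in range(iters):
        grad = 2.0 * D.T @ (D @ w - target)
        w = np.maximum(w - step * grad, 0.0)    # project onto the nonnegative orthant
    r = D @ w - target
    return float(r @ r)

def greedy_bas(candidates, target, n_beams, iters=5):
    """Iteratively add the candidate beam whose surrogate objective is lowest."""
    chosen = []
    for _ in range(n_beams):
        best = min((b for b in candidates if b not in chosen),
                   key=lambda b: fmo_surrogate(
                       [candidates[c] for c in chosen] + [candidates[b]],
                       target, iters))
        chosen.append(best)
    return chosen
```

Replacing `fmo_surrogate` with a fully converged solve recovers the naïve scheme; the point of the paper is that the cheap inner loop ranks candidates almost as well.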
Behaviour of the ASDEX pressure gauge at high neutral gas pressure and applications for ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarabosio, A.; Haas, G.
2008-03-12
The ASDEX pressure gauge (APG) is, at present, the main candidate for in-vessel neutral pressure measurement in ITER. The APG output, however, is found to saturate at around 15 Pa, below the ITER requirement of 20 Pa. We show here that, with small modifications of the gauge geometry and potential settings, satisfactory behaviour can be achieved up to 30 Pa at 6 T.
NASA Astrophysics Data System (ADS)
Ozbasaran, Hakan
Trusses have an important place amongst engineering structures due to many advantages such as high structural efficiency, fast assembly and easy maintenance. Iterative truss design procedures that require analysis of a large number of candidate structural systems, such as size, shape and topology optimization with stochastic methods, mostly lead the engineer to establish a link between the development platform and external structural analysis software. As the number of structural analyses grows, this (probably slow-response) link may climb to the top of the list of performance issues. This paper introduces a software package for static, global member buckling and frequency analysis of 2D and 3D trusses to overcome this problem for Mathematica users.
Knutson, Stacy T; Westwood, Brian M; Leuthaeuser, Janelle B; Turner, Brandon E; Nguyendac, Don; Shea, Gabrielle; Kumar, Kiran; Hayden, Julia D; Harper, Angela F; Brown, Shoshana D; Morris, John H; Ferrin, Thomas E; Babbitt, Patricia C; Fetrow, Jacquelyn S
2017-04-01
Protein function identification remains a significant problem. Solving this problem at the molecular functional level would allow mechanistic determinant identification: amino acids that distinguish details between functional families within a superfamily. Active site profiling was developed to identify mechanistic determinants. DASP and DASP2 were developed as tools to search sequence databases using active site profiling. Here, TuLIP (Two-Level Iterative clustering Process) is introduced as an iterative, divisive clustering process that utilizes active site profiling to separate structurally characterized superfamily members into functionally relevant clusters. Underlying TuLIP is the observation that functionally relevant families (curated by Structure-Function Linkage Database, SFLD) self-identify in DASP2 searches; clusters containing multiple functional families do not. Each TuLIP iteration produces candidate clusters, each evaluated to determine if it self-identifies using DASP2. If so, it is deemed a functionally relevant group. Divisive clustering continues until each structure is either a functionally relevant group member or a singlet. TuLIP is validated on enolase and glutathione transferase structures, superfamilies well-curated by SFLD. Correlation is strong; small numbers of structures prevent statistically significant analysis. TuLIP-identified enolase clusters are used in DASP2 GenBank searches to identify sequences sharing functional site features. Analysis shows a true positive rate of 96%, false negative rate of 4%, and maximum false positive rate of 4%. F-measure and performance analysis on the enolase search results and comparison to GEMMA and SCI-PHY demonstrate that TuLIP avoids the over-division problem of these methods. Mechanistic determinants for enolase families are evaluated and shown to correlate well with literature results. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
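The divide-until-self-identifying control flow of such a process can be sketched generically. The `split` and `self_identifies` callbacks below are placeholders standing in for the profile-based clustering step and the DASP2 search check, respectively; they are not the paper's implementation.

```python
def divisive_cluster(items, split, self_identifies):
    """Iterative divisive clustering in the TuLIP style (sketch).

    split(cluster) -> list of sub-clusters;
    self_identifies(cluster) -> bool, True for a functionally relevant group.
    """
    groups, work = [], [list(items)]
    while work:
        cluster = work.pop()
        if len(cluster) == 1:
            groups.append(cluster)           # singlet: cannot divide further
        elif self_identifies(cluster):
            groups.append(cluster)           # functionally relevant group
        else:
            work.extend(split(cluster))      # divide and re-evaluate the parts
    return groups
```

Every input item ends up either in a self-identifying group or as a singlet, matching the stopping condition described in the abstract.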
An iterative method for near-field Fresnel region polychromatic phase contrast imaging
NASA Astrophysics Data System (ADS)
Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.
2017-07-01
We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.
Prospects for Advanced Tokamak Operation of ITER
NASA Astrophysics Data System (ADS)
Neilson, George H.
1996-11-01
Previous studies have identified steady-state (or "advanced") modes for ITER, based on reverse-shear profiles and significant bootstrap current. A typical example has 12 MA of plasma current, 1,500 MW of fusion power, and 100 MW of heating and current-drive power. The implementation of these and other steady-state operating scenarios in the ITER device is examined in order to identify key design modifications that can enhance the prospects for successfully achieving advanced tokamak operating modes in ITER compatible with a single null divertor design. In particular, we examine plasma configurations that can be achieved by the ITER poloidal field system with either a monolithic central solenoid (as in the ITER Interim Design), or an alternate "hybrid" central solenoid design which provides for greater flexibility in the plasma shape. The increased control capability and expanded operating space provided by the hybrid central solenoid allows operation at high triangularity (beneficial for improving divertor performance through control of edge-localized modes and for increasing beta limits), and will make it much easier for ITER operators to establish an optimum startup trajectory leading to a high-performance, steady-state scenario. Vertical position control is examined because plasmas made accessible by the hybrid central solenoid can be more elongated and/or less well coupled to the conducting structure. Control of vertical displacements using the external PF coils remains feasible over much of the expanded operating space. Further work is required to define the full spectrum of axisymmetric plasma disturbances requiring active control. In addition to active axisymmetric control, advanced tokamak modes in ITER may require active control of kink modes on the resistive time scale of the conducting structure.
This might be accomplished in ITER through the use of active control coils external to the vacuum vessel which are actuated by magnetic sensors near the first wall. The enhanced shaping and positioning flexibility provides a range of options for reducing the ripple-induced losses of fast alpha particles, a major limitation on ITER steady-state modes. An alternate approach that we are pursuing in parallel is the inclusion of ferromagnetic inserts to reduce the toroidal field ripple within the plasma chamber. The inclusion of modest design changes such as the hybrid central solenoid, active control coils for kink modes, and ferromagnetic inserts for TF ripple reduction can greatly increase the flexibility to accommodate advanced tokamak operation in ITER. Increased flexibility is important because the optimum operating scenario for ITER cannot be predicted with certainty. While low-inductance, reverse shear modes appear attractive for steady-state operation, high-inductance, high-beta modes are also viable candidates, and it is important that ITER have the flexibility to explore both these, and other, operating regimes.
Shashi, Vandana; Schoch, Kelly; Spillmann, Rebecca; Cope, Heidi; Tan, Queenie K-G; Walley, Nicole; Pena, Loren; McConkie-Rosell, Allyn; Jiang, Yong-Hui; Stong, Nicholas; Need, Anna C; Goldstein, David B
2018-06-15
Sixty to seventy-five percent of individuals with rare and undiagnosed phenotypes remain undiagnosed after exome sequencing (ES). With standard ES reanalysis resolving 10-15% of the ES negatives, further approaches are necessary to maximize diagnoses in these individuals. In 38 ES-negative patients, an individualized genomic-phenotypic approach was employed, utilizing (1) phenotyping; (2) reanalyses of FASTQ files, with innovative bioinformatics; (3) targeted molecular testing; (4) genome sequencing (GS); and (5) conferring of clinical diagnoses when pathognomonic clinical findings occurred. Certain and highly likely diagnoses were made in 18/38 (47%) individuals, including the identification of two new developmental disorders. The majority of diagnoses (>70%) were due to our bioinformatics, phenotyping, and targeted testing identifying variants that were undetected or not prioritized on prior ES. GS diagnosed 3/18 individuals with structural variants not amenable to ES. Additionally, tentative diagnoses were made in 3 (8%), and in 5 individuals (13%) candidate genes were identified. Overall, diagnoses or potential leads were identified in 26/38 (68%). Our comprehensive approach to ES negatives maximizes the ES and clinical data for both diagnoses and candidate gene identification, without GS in the majority. This iterative approach is cost-effective and is pertinent to the current conundrum of ES negatives.
Developing a taxonomy for mission architecture definition
NASA Technical Reports Server (NTRS)
Neubek, Deborah J.
1990-01-01
The Lunar and Mars Exploration Program Office (LMEPO) was tasked to define candidate architectures for the Space Exploration Initiative to submit to NASA senior management and an externally constituted Outreach Synthesis Group. A systematic, structured process for developing, characterizing, and describing the alternate mission architectures, and applying this process to future studies was developed. The work was done in two phases: (1) national needs were identified and categorized into objectives achievable by the Space Exploration Initiative; and (2) a program development process was created which both hierarchically and iteratively describes the program planning process.
Du, Qi-Shi; Huang, Ri-Bo; Wei, Yu-Tuo; Pang, Zong-Wen; Du, Li-Qin; Chou, Kuo-Chen
2009-01-30
In cooperation with fragment-based design, a new drug design method, the so-called "fragment-based quantitative structure-activity relationship" (FB-QSAR), is proposed. The essence of the new method is that the molecular framework in a family of drug candidates is divided into several fragments according to the substitutes being investigated. The bioactivities of molecules are correlated with the physicochemical properties of the molecular fragments through two sets of coefficients in the linear free energy equations: one coefficient set for the physicochemical properties and the other for the weight factors of the molecular fragments. Meanwhile, an iterative double least square (IDLS) technique is developed to solve the two sets of coefficients in a training data set alternately and iteratively. The IDLS technique is a feedback procedure with machine learning ability. The standard two-dimensional quantitative structure-activity relationship (2D-QSAR) is a special case of the FB-QSAR in which the whole molecule is treated as one entity. The FB-QSAR approach can remarkably enhance the predictive power and provide more structural insights into rational drug design. As an example, FB-QSAR is applied to build a predictive model of neuraminidase inhibitors for drug development against the H5N1 influenza virus. (c) 2008 Wiley Periodicals, Inc.
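The alternating structure of an IDLS-style solver can be sketched for the bilinear model y_i ≈ Σ_f w_f (x_{i,f} · β), alternately solving for the fragment weights w and the shared property coefficients β by ordinary least squares. The array shapes and NumPy-based fit are illustrative assumptions, not the authors' code.

```python
import numpy as np

def idls_fit(X, y, n_iter=100):
    """Iterative double least squares (sketch).

    X: (n_mol, n_frag, n_prop) fragment property descriptors
    y: (n_mol,) bioactivities
    Returns fragment weights w (n_frag,) and property coefficients beta (n_prop,).
    """
    n_mol, n_frag, n_prop = X.shape
    beta = np.ones(n_prop)
    w = np.ones(n_frag)
    for _ in range(n_iter):
        A = X @ beta                           # (n_mol, n_frag) fragment scores
        w, *_ = np.linalg.lstsq(A, y, rcond=None)      # solve for fragment weights
        B = np.einsum('f,ifp->ip', w, X)       # (n_mol, n_prop) weighted properties
        beta, *_ = np.linalg.lstsq(B, y, rcond=None)   # solve for property coefficients
    return w, beta
```

Each half-step is a linear least-squares problem, so the training residual is non-increasing across the alternation, which is the feedback behaviour the abstract alludes to. Note that w and β are only determined up to a shared scale factor.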
A Preliminary Examination of the In-Training Evaluation Report
ERIC Educational Resources Information Center
Skakun, Ernest N.; And Others
1975-01-01
The In-Training Evaluation Report (ITER), in use by the Royal College of Physicians and Surgeons of Canada for examining the competencies of candidates eligible for the certifying examination, was tested for validity and reliability. This analysis suggests revisions but declares the ITER a useful instrument to aid in candidate assessment. (JT)
Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting
NASA Astrophysics Data System (ADS)
Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.
2012-02-01
We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures and removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance on patient data containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.
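The iterate-and-prune fitting loop can be caricatured in 1-D with a polynomial in place of a B-spline surface (a hypothetical sketch; the paper fits B-splines through 3-D candidate voxels, and the threshold rule here is our own):

```python
import numpy as np

def iterative_fit(x, y, degree=3, n_iter=5, k=2.5):
    """Iteratively fit a curve, discarding points far from the current fit.

    Mimics the candidate-pruning idea: each pass refits only through points
    consistent with the previous fit (residual below k * robust scale).
    """
    keep = np.ones_like(x, dtype=bool)
    for _ in range(n_iter):
        coef = np.polyfit(x[keep], y[keep], degree)
        resid = np.abs(np.polyval(coef, x) - y)
        scale = 1.4826 * np.median(resid[keep]) + 1e-12   # MAD -> sigma
        keep = resid < k * scale
    return coef, keep
```

Because the fit is refitted only through surviving candidates, isolated noise objects are dropped while disconnected pieces of the true curve are retained.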
Hydroelectric voltage generation based on water-filled single-walled carbon nanotubes.
Yuan, Quanzi; Zhao, Ya-Pu
2009-05-13
A DFT/MD mutual iterative method was employed to give insight into the mechanism of voltage generation in water-filled single-walled carbon nanotubes (SWCNTs). Our calculations showed that a constant voltage difference of several millivolts is generated between the two ends of a carbon nanotube, due to interactions between the water dipole chains and charge carriers in the tube. Our work validates the water-filled SWCNT as a promising candidate for a synthetic nanoscale power cell, as well as a practical nanopower-harvesting device at the atomic level.
Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Moorthy, H. T.
1997-01-01
This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from optimal maximum-likelihood decoding, with a significant reduction in decoding complexity compared with Viterbi decoding based on the full trellis diagram of the codes.
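The candidate-generation step, flipping subsets of the least-reliable received positions and algebraically decoding each test pattern, resembles Chase-type decoding and can be sketched for a (7,4) Hamming code. Brute-force nearest-codeword search stands in for the algebraic decoder, and the paper's optimality test and trellis search are omitted:

```python
import itertools
import numpy as np

# (7,4) Hamming code: generator matrix and brute-force codebook.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
CODEBOOK = np.array([(np.array(m) @ G) % 2
                     for m in itertools.product([0, 1], repeat=4)])

def hamming_decode(hard):
    """Minimal algebraic-decoder stand-in: nearest codeword in Hamming distance."""
    d = (CODEBOOK != hard).sum(axis=1)
    return CODEBOOK[np.argmin(d)]

def chase2(r):
    """Chase-2-style soft decoding: flip subsets of the least-reliable bits,
    decode each test pattern, and keep the candidate with the best soft metric."""
    hard = (r < 0).astype(int)            # BPSK mapping: bit 0 -> +1, bit 1 -> -1
    weak = np.argsort(np.abs(r))[:3]      # 3 least-reliable positions
    best, best_metric = None, np.inf
    for flips in itertools.product([0, 1], repeat=3):
        test = hard.copy()
        test[weak] = (test[weak] + np.array(flips)) % 2
        cw = hamming_decode(test)
        # ML metric on AWGN: sum of reliabilities where cw disagrees with hard.
        metric = np.sum(np.abs(r) * (cw != hard))
        if metric < best_metric:
            best, best_metric = cw, metric
    return best
```

With two low-confidence sign errors, the hard decision alone is uncorrectable for this single-error-correcting code, but one of the test patterns restores the transmitted codeword.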
Feedback-Driven Dynamic Invariant Discovery
NASA Technical Reports Server (NTRS)
Zhang, Lingming; Yang, Guowei; Rungta, Neha S.; Person, Suzette; Khurshid, Sarfraz
2014-01-01
Program invariants can help software developers identify program properties that must be preserved as the software evolves; however, formulating correct invariants can be challenging. In this work, we introduce iDiscovery, a technique which leverages symbolic execution to improve the quality of dynamically discovered invariants computed by Daikon. Candidate invariants generated by Daikon are synthesized into assertions and instrumented onto the program. The instrumented code is executed symbolically to generate new test cases that are fed back to Daikon to help further refine the set of candidate invariants. This feedback loop is executed until a fixed point is reached. To mitigate the cost of symbolic execution, we present optimizations to prune the symbolic state space and to reduce the complexity of the generated path conditions. We also leverage recent advances in constraint solution reuse techniques to avoid computing results for the same constraints across iterations. Experimental results show that iDiscovery converges to a set of higher quality invariants compared to the initial set of candidate invariants in a small number of iterations.
Computer-assisted expert case definition in electronic health records.
Walker, Alexander M; Zhou, Xiaofeng; Ananthakrishnan, Ashwin N; Weiss, Lisa S; Shen, Rongjun; Sobel, Rachel E; Bate, Andrew; Reynolds, Robert F
2016-02-01
To describe how computer-assisted presentation of case data can lead experts to infer machine-implementable rules for case definition in electronic health records. As an illustration the technique has been applied to obtain a definition of acute liver dysfunction (ALD) in persons with inflammatory bowel disease (IBD). The technique consists of repeatedly sampling new batches of case candidates from an enriched pool of persons meeting presumed minimal inclusion criteria, classifying the candidates by a machine-implementable candidate rule and by a human expert, and then updating the rule so that it captures new distinctions introduced by the expert. Iteration continues until an update results in an acceptably small number of changes to form a final case definition. The technique was applied to structured data and terms derived by natural language processing from text records in 29,336 adults with IBD. Over three rounds the technique led to rules with increasing predictive value, as the experts identified exceptions, and increasing sensitivity, as the experts identified missing inclusion criteria. In the final rule inclusion and exclusion terms were often keyed to an ALD onset date. When compared against clinical review in an independent test round, the derived final case definition had a sensitivity of 92% and a positive predictive value of 79%. An iterative technique of machine-supported expert review can yield a case definition that accommodates available data, incorporates pre-existing medical knowledge, is transparent and is open to continuous improvement. The expert updates to rules may be informative in themselves. In this limited setting, the final case definition for ALD performed better than previous, published attempts using expert definitions.
Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.
Dastmalchi, Pouya; Veronis, Georgios
2013-12-30
We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges fast to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time when compared to any direct optimization method of the fine FDFD model.
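A one-variable caricature of the space-mapping loop: build a surrogate from the cheap coarse model, corrected to match the expensive fine model's value and slope at the current iterate, optimize the surrogate, and repeat. This uses a first-order output correction for simplicity, which is an assumption on our part; the paper's actual algorithm maps a transmission-line coarse model to a full-wave FDFD fine model:

```python
def argmin_scalar(g, lo=-10.0, hi=10.0, iters=200):
    """Ternary search for the minimum of a unimodal scalar function."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def space_mapping(fine, coarse, x0, n_iter=5, h=1e-5):
    """First-order output space mapping: correct the coarse model so it matches
    the fine model's value and slope at the current iterate, then optimize the
    corrected surrogate instead of the fine model."""
    x = x0
    for _ in range(n_iter):
        df = (fine(x + h) - fine(x - h)) / (2 * h)      # few fine evaluations
        dc = (coarse(x + h) - coarse(x - h)) / (2 * h)
        shift, slope = fine(x) - coarse(x), df - dc
        surrogate = lambda z, x=x, shift=shift, slope=slope: \
            coarse(z) + shift + slope * (z - x)
        x = argmin_scalar(surrogate)                    # cheap surrogate search
    return x

fine = lambda x: (x - 2.0) ** 2      # stand-in for the expensive FDFD model
coarse = lambda x: (x - 1.7) ** 2    # stand-in for the transmission-line model
```

Only a handful of fine-model evaluations are spent per iteration; all of the optimization effort goes into the corrected coarse model, which is the source of the method's efficiency.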
ACT Payload Shroud Structural Concept Analysis and Optimization
NASA Technical Reports Server (NTRS)
Zalewski, Bart B.; Bednarcyk, Brett A.
2010-01-01
Aerospace structural applications demand a weight-efficient design to perform in a cost-effective manner. This is particularly true for launch vehicle structures, where weight is the dominant design driver. The design process typically requires many iterations to ensure that a satisfactory minimum weight has been obtained. Although metallic structures can be weight efficient, composite structures can provide additional weight savings due to their lower density and additional design flexibility. This work presents structural analysis and weight optimization of a composite payload shroud for NASA's Ares V heavy lift vehicle. Two concepts, which were previously determined to be efficient for such a structure, are evaluated: a hat-stiffened/corrugated panel and a fiber-reinforced foam sandwich panel. A composite structural optimization code, HyperSizer, is used to optimize the panel geometry, composite material ply orientations, and sandwich core material. HyperSizer enables an efficient evaluation of thousands of potential designs against multiple strength- and stability-based failure criteria across multiple load cases. The HyperSizer sizing process uses a global finite element model to obtain element forces, which are statistically processed to arrive at panel-level design-to loads. These loads are then used to analyze each candidate panel design. A near-optimum design is selected as the one with the lowest weight that also provides all positive margins of safety. The stiffness of each newly sized panel or beam component is taken into account in the subsequent finite element analysis. Iteration of analysis/optimization is performed to ensure a converged design. Sizing results for the hat-stiffened panel concept and the fiber-reinforced foam sandwich concept are presented.
Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang
2017-05-30
In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.
Principles of Temporal Processing Across the Cortical Hierarchy.
Himberger, Kevin D; Chien, Hsiang-Yun; Honey, Christopher J
2018-05-02
The world is richly structured on multiple spatiotemporal scales. In order to represent spatial structure, many machine-learning models repeat a set of basic operations at each layer of a hierarchical architecture. These iterated spatial operations - including pooling, normalization and pattern completion - enable these systems to recognize and predict spatial structure, while robust to changes in the spatial scale, contrast and noisiness of the input signal. Because our brains also process temporal information that is rich and occurs across multiple time scales, might the brain employ an analogous set of operations for temporal information processing? Here we define a candidate set of temporal operations, and we review evidence that they are implemented in the mammalian cerebral cortex in a hierarchical manner. We conclude that multiple consecutive stages of cortical processing can be understood to perform temporal pooling, temporal normalization and temporal pattern completion.
A fast and robust iterative algorithm for prediction of RNA pseudoknotted secondary structures
2014-01-01
Background Improving accuracy and efficiency of computational methods that predict pseudoknotted RNA secondary structures is an ongoing challenge. Existing methods based on free energy minimization tend to be very slow and are limited in the types of pseudoknots that they can predict. Incorporating known structural information can improve prediction accuracy; however, there are not many methods for prediction of pseudoknotted structures that can incorporate structural information as input. There is even less understanding of the relative robustness of these methods with respect to partial information. Results We present a new method, Iterative HFold, for pseudoknotted RNA secondary structure prediction. Iterative HFold takes as input a pseudoknot-free structure, and produces a possibly pseudoknotted structure whose energy is at least as low as that of any (density-2) pseudoknotted structure containing the input structure. Iterative HFold leverages strengths of earlier methods, namely the fast running time of HFold, a method that is based on the hierarchical folding hypothesis, and the energy parameters of HotKnots V2.0. Our experimental evaluation on a large data set shows that Iterative HFold is robust with respect to partial information, with average accuracy on pseudoknotted structures steadily increasing from roughly 54% to 79% as the user provides up to 40% of the input structure. Iterative HFold is much faster than HotKnots V2.0, while having comparable accuracy. Iterative HFold also has significantly better accuracy than IPknot on our HK-PK and IP-pk168 data sets. Conclusions Iterative HFold is a robust method for prediction of pseudoknotted RNA secondary structures, whose accuracy with more than 5% information about true pseudoknot-free structures is better than that of IPknot, and with about 35% information about true pseudoknot-free structures compares well with that of HotKnots V2.0 while being significantly faster. 
Iterative HFold and all data used in this work are freely available at http://www.cs.ubc.ca/~hjabbari/software.php. PMID:24884954
Cryogenic Properties of Inorganic Insulation Materials for ITER Magnets: A Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, N.J.
1994-12-01
Results of a literature search on the cryogenic properties of candidate inorganic insulators for the ITER TF magnets are reported. The materials investigated include: Al2O3, AlN, MgO, porcelain, SiO2, MgAl2O4, ZrO2, and mica. A graphical presentation is given of mechanical, elastic, electrical, and thermal properties between 4 and 300 K. A companion report reviews the low temperature irradiation resistance of these materials.
Steady state numerical solutions for determining the location of MEMS on projectile
NASA Astrophysics Data System (ADS)
Abiprayu, K.; Abdigusna, M. F. F.; Gunawan, P. H.
2018-03-01
This paper compares numerical solutions of the steady-state and unsteady-state heat distribution models on a projectile. The best location for installing MEMS on the projectile, based on surface temperature, is investigated. Numerical iteration methods, Jacobi and Gauss-Seidel, are used to solve the steady-state heat distribution model on the projectile. The Jacobi and Gauss-Seidel results are identical, but their iteration costs differ: Jacobi's method requires 350 iterations, whereas Gauss-Seidel requires 188, converging faster than Jacobi's method. The comparison of the steady-state simulation with the unsteady-state model from a reference is satisfactory. Moreover, the best candidate location for installing MEMS on the projectile is observed at point T(10, 0), which has the lowest temperature among the points considered. The temperatures at T(10, 0) using Jacobi and Gauss-Seidel for scenarios 1 and 2 are 307 and 309 Kelvin, respectively.
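The two iterations differ only in whether a sweep uses the previous sweep's values (Jacobi) or the freshest available values (Gauss-Seidel), which is why Gauss-Seidel typically converges in roughly half as many sweeps. A minimal sketch on a square plate follows; the grid size and boundary temperatures are invented, not the projectile geometry of the paper:

```python
import numpy as np

def solve_laplace(n=20, tol=1e-6, method="jacobi", max_iter=100000):
    """Steady-state heat (Laplace) equation on an n x n grid with fixed
    boundary temperatures, solved by Jacobi or Gauss-Seidel iteration."""
    T = np.zeros((n, n))
    T[0, :], T[-1, :], T[:, 0], T[:, -1] = 400.0, 300.0, 350.0, 350.0
    for it in range(1, max_iter + 1):
        if method == "jacobi":
            # Jacobi: every update uses only the previous sweep's values.
            Tn = T.copy()
            Tn[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                     T[1:-1, :-2] + T[1:-1, 2:])
            diff = np.max(np.abs(Tn - T))
            T = Tn
        else:
            # Gauss-Seidel: update in place, reusing freshly computed values.
            diff = 0.0
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    new = 0.25 * (T[i-1, j] + T[i+1, j] +
                                  T[i, j-1] + T[i, j+1])
                    diff = max(diff, abs(new - T[i, j]))
                    T[i, j] = new
        if diff < tol:
            return T, it
    return T, max_iter
```

Running both to the same tolerance reproduces the qualitative finding of the paper: identical temperature fields, with Gauss-Seidel needing noticeably fewer iterations.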
Radioactivity measurements of ITER materials using the TFTR D-T neutron field
NASA Astrophysics Data System (ADS)
Kumar, A.; Abdou, M. A.; Barnes, C. W.; Kugel, H. W.
1994-06-01
The availability of high D-T fusion neutron yields at TFTR has provided a useful opportunity to directly measure D-T neutron-induced radioactivity in a realistic tokamak fusion reactor environment for materials of vital interest to ITER. These measurements are valuable for characterizing radioactivity in various ITER candidate materials, for validating complex neutron transport calculations, and for meeting fusion reactor licensing requirements. The radioactivity measurements at TFTR involve potential ITER materials including stainless steel 316, vanadium, titanium, chromium, silicon, iron, cobalt, nickel, molybdenum, aluminum, copper, zinc, zirconium, niobium, and tungsten. Small samples of these materials were irradiated close to the plasma and just outside the vacuum vessel wall of TFTR, locations of different neutron energy spectra. Saturation activities for both threshold and capture reactions were measured. Data from dosimetric reactions have been used to obtain preliminary neutron energy spectra. Spectra from the first wall were compared to calculations from ITER and to measurements from accelerator-based tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
L. C. Cadwallader; C. P. C. Wong; M. Abdou
2014-10-01
A leading power reactor breeding blanket candidate for a fusion demonstration power plant (DEMO) being pursued by the US Fusion Community is the Dual Coolant Lead Lithium (DCLL) concept. The safety hazards associated with the DCLL concept as a reactor blanket have been examined in several US design studies. These studies identify the largest radiological hazards as those associated with dust generation by plasma erosion of blanket module first walls, oxidation of blanket structures at high temperature in air or steam, inventories of tritium bred in or permeating through the ferritic steel structures of the blanket module and blanket support systems, and the 210Po and 203Hg produced in the PbLi breeder/coolant. What these studies lack is the scrutiny associated with a licensing review of the DCLL concept. An insight into this process was gained during the US participation in the International Thermonuclear Experimental Reactor (ITER) Test Blanket Module (TBM) Program. In this paper we discuss the lessons learned during this activity and make safety proposals for the design of a Fusion Nuclear Science Facility (FNSF) or a DEMO that employs a lead lithium breeding blanket.
ITER CS Intermodule Support Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myatt, R.; Freudenberg, Kevin D
2011-01-01
With five independently driven, bi-polarity power supplies, the modules of the ITER central solenoid (CS) can be energized in aligned or opposing field directions. This sets up the possibility for repelling modules, which indeed occurs, particularly between CS2L and CS3L around the End of Burn (EOB) time point. Light interface compression between these two modules at EOB and wide variations in these coil currents throughout the pulse produce a tendency for relative motion or slip. Ideally, the slip is purely radial as the modules breathe, without any accumulated translational motion. In reality, however, asymmetries such as nonuniformity in intermodule friction, lateral loads from a plasma Vertical Disruption Event (VDE), magnetic forces from manufacturing and assembly tolerances, and earthquakes can all contribute to a combination of radial and lateral module motion. This paper presents 2D and 3D nonlinear ANSYS models which simulate these various asymmetries and determine the lateral forces which must be carried by the intermodule structure. Summing all of these asymmetric force contributions leads to a design-basis lateral load which is used in the design of various support concepts: the CS-CDR centering rings and a variation, the 2001 FDR baseline radial keys, and interlocking castle structures. Radial key-type intermodule structure interface slip and stresses are tracked through multiple 15 MA scenario current pulses to demonstrate stable motion following the first few cycles. Drawbacks and benefits of each candidate intermodule structure are discussed, leading to the simplest and most robust configuration which meets the design requirements: match-drilled radial holes and pin-shaped keys.
Carbon fiber composites application in ITER plasma facing components
NASA Astrophysics Data System (ADS)
Barabash, V.; Akiba, M.; Bonal, J. P.; Federici, G.; Matera, R.; Nakamura, K.; Pacher, H. D.; Rödig, M.; Vieider, G.; Wu, C. H.
1998-10-01
Carbon Fiber Composites (CFCs) are one of the candidate armour materials for the plasma facing components of the International Thermonuclear Experimental Reactor (ITER). For the present reference design, CFC has been selected as armour for the divertor target near the plasma strike point, mainly because of its unique resistance to high normal and off-normal heat loads: it does not melt under disruptions and may have a longer erosion lifetime than other possible armour materials. Issues related to CFC application in ITER are described in this paper. They include erosion lifetime, tritium codeposition with eroded material and possible methods for the removal of the codeposited layers, neutron irradiation effects, development of joining technologies with heat sink materials, and thermomechanical performance. The status of the development of new advanced CFCs for ITER application is also described. Finally, the remaining R&D needs are critically discussed.
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-01-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners, with less sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: Direct Image Reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting, incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias vs. variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
NASA Astrophysics Data System (ADS)
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-05-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with less sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting, incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.
Iterative h-minima-based marker-controlled watershed for cell nucleus segmentation.
Koyuncu, Can Fahrettin; Akhan, Ece; Ersahin, Tulin; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem
2016-04-01
Automated microscopy imaging systems facilitate high-throughput screening in molecular cell biology research. The first step of these systems is cell nucleus segmentation, which has a great impact on the success of the overall system. The marker-controlled watershed is a technique commonly used in previous studies for nucleus segmentation. These studies define their markers by finding regional minima on the intensity/gradient and/or distance transform maps. They typically use the h-minima transform beforehand to suppress noise on these maps. The selection of the h value is critical: unnecessarily small values do not sufficiently suppress the noise, resulting in false and oversegmented markers, and unnecessarily large ones suppress too many pixels, causing missing and undersegmented markers. Because cell nuclei show different characteristics within an image, the same h value may not define correct markers for all the nuclei. To address this issue, in this work, we propose a new watershed algorithm that iteratively identifies its markers, considering a set of different h values. In each iteration, the proposed algorithm defines a set of candidates using a particular h value and selects the markers from those candidates provided that they fulfill the size requirement. Working with widefield fluorescence microscopy images, our experiments reveal that the use of multiple h values in our iterative algorithm leads to better segmentation results compared to its counterparts.
Comparing direct and iterative equation solvers in a large structural analysis software system
NASA Technical Reports Server (NTRS)
Poole, E. L.
1991-01-01
Two direct Choleski equation solvers and two iterative preconditioned conjugate gradient (PCG) equation solvers used in a large structural analysis software system are described. The two direct solvers are implementations of the Choleski method for variable-band matrix storage and sparse matrix storage. The two iterative PCG solvers include the Jacobi conjugate gradient method and an incomplete Choleski conjugate gradient method. The performance of the direct and iterative solvers is compared by solving several representative structural analysis problems. Some key factors affecting the performance of the iterative solvers relative to the direct solvers are identified.
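A minimal Jacobi-preconditioned conjugate gradient solver, the simpler of the two PCG variants mentioned, might look like the following. This is a generic textbook sketch, unrelated to the specific software system described:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradients for SPD A with a diagonal (Jacobi)
    preconditioner M = diag(A); M_inv_diag holds 1 / diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x                     # residual
    z = M_inv_diag * r                # preconditioned residual
    p = z.copy()                      # search direction
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # conjugate direction update
        rz = rz_new
    return x, max_iter
```

The incomplete-Choleski variant mentioned in the abstract replaces the diagonal scaling with a triangular solve against an approximate Choleski factor; the surrounding iteration is unchanged.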
Fast iterative censoring CFAR algorithm for ship detection from SAR images
NASA Astrophysics Data System (ADS)
Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng
2017-11-01
Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; and then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target blocks adaptively and efficiently, where parallel detection is available, and statistical parameters of G0 distribution fitting local sea clutter well can be quickly estimated based on an integral image operator. Experimental results of TerraSAR-X images demonstrate the effectiveness of the proposed technique.
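The censoring idea, excluding already-detected cells from the clutter statistics so strong targets do not inflate the threshold, can be sketched with a global Gaussian clutter model. The paper uses local sliding windows and a G0 clutter distribution; this simplification is ours:

```python
import numpy as np

def iterative_censoring_cfar(img, k=4.0, n_iter=5):
    """Iteratively estimate clutter statistics while censoring detected
    targets, so strong returns do not raise the detection threshold."""
    mask = np.zeros(img.shape, dtype=bool)   # detected-target mask
    for _ in range(n_iter):
        clutter = img[~mask]                 # censor detections from stats
        mu, sigma = clutter.mean(), clutter.std()
        new_mask = img > mu + k * sigma      # CFAR-style threshold test
        if np.array_equal(new_mask, mask):   # converged: detections stable
            break
        mask = new_mask
    return mask
```

After the first pass removes the strongest returns, the re-estimated clutter spread shrinks and the threshold settles, which is the mechanism the paper exploits to keep target returns from masking nearby ships.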
Conservative and bounded volume-of-fluid advection on unstructured grids
NASA Astrophysics Data System (ADS)
Ivey, Christopher B.; Moin, Parviz
2017-12-01
This paper presents a novel Eulerian-Lagrangian piecewise-linear interface calculation (PLIC) volume-of-fluid (VOF) advection method, which is three-dimensional, unsplit, and discretely conservative and bounded. The approach is developed with reference to a collocated node-based finite-volume two-phase flow solver that utilizes the median-dual mesh constructed from non-convex polyhedra. The proposed advection algorithm satisfies conservation and boundedness of the liquid volume fraction irrespective of the underlying flux polyhedron geometry, which differs from contemporary unsplit VOF schemes that prescribe topologically complicated flux polyhedron geometries in efforts to satisfy conservation. Instead of prescribing complicated flux-polyhedron geometries, which are prone to topological failures, our VOF advection scheme, the non-intersecting flux polyhedron advection (NIFPA) method, builds the flux polyhedron iteratively such that its intersection with neighboring flux polyhedra, and any other unavailable volume, is empty and its total volume matches the calculated flux volume. During each iteration, a candidate nominal flux polyhedron is extruded using an iteration-dependent scalar. The candidate is subsequently intersected with the volume guaranteed available to it at the time of the flux calculation to generate the candidate flux polyhedron. The difference between the volume of the candidate flux polyhedron and the actual flux volume is used to calculate the extrusion during the next iteration. The choice of nominal flux polyhedron impacts the cost and accuracy of the scheme; however, it does not impact the method's underlying conservation and boundedness. As such, various robust nominal flux polyhedra are proposed and tested using canonical periodic kinematic test cases: Zalesak's disk and two- and three-dimensional deformation. 
The tests are conducted on the median duals of a quadrilateral and triangular primal mesh, in two-dimensions, and on the median duals of a hexahedral, wedge and tetrahedral primal mesh, in three-dimensions. Comparisons are made with the adaptation of a conventional unsplit VOF advection scheme to our collocated node-based flow solver. Depending on the choice in the nominal flux polyhedron, the NIFPA scheme presented accuracies ranging from zeroth to second order and calculation times that differed by orders of magnitude. For the nominal flux polyhedra which demonstrate second-order accuracy on all tests and meshes, the NIFPA method's cost was comparable to the traditional topologically complex second-order accurate VOF advection scheme.
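The extrusion iteration can be caricatured in one dimension: treat the clipped flux-polyhedron volume as a scalar function of the extrusion parameter and iterate until it matches the calculated flux volume. The secant update used here is our assumption; the paper does not specify the root-finding rule:

```python
def match_flux_volume(extrude_volume, available, target,
                      s0=0.0, s1=1.0, n_iter=50, tol=1e-12):
    """Secant iteration on the extrusion scalar s: grow a nominal flux
    polyhedron until its clipped volume matches the target flux volume.

    extrude_volume(s): volume of the nominal polyhedron at extrusion s.
    available:         volume not claimed by neighboring flux polyhedra;
                       clipping against it enforces non-intersection.
    """
    def clipped(s):
        return min(extrude_volume(s), available)   # 1-D stand-in for clipping
    f0, f1 = clipped(s0) - target, clipped(s1) - target
    for _ in range(n_iter):
        if abs(f1) < tol or f1 == f0:
            break
        s_next = s1 - f1 * (s1 - s0) / (f1 - f0)   # secant step
        s0, f0 = s1, f1
        s1 = s_next
        f1 = clipped(s1) - target
    return s1
```

Because the volume mismatch, not the polyhedron topology, drives the update, the same loop works for any nominal flux polyhedron shape, mirroring the paper's separation of accuracy (shape choice) from conservation (volume matching).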
Cuevas, Erik; Díaz, Margarita
2015-01-01
In this paper, a new method for robustly estimating multiple-view relations from point correspondences is presented. The approach combines the popular random sample consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
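The quality-guided sampling idea can be sketched for 2-D line fitting: keep a small memory of the best models found so far and draw new minimal samples preferentially from their inlier sets. This is a simplified stand-in for the harmony-search improvisation rules, and all parameters are invented:

```python
import random

def guided_ransac_line(pts, n_iter=200, thresh=0.1, memory_size=5, seed=0):
    """RANSAC-style line fit (y = a*x + b) where, echoing the HS-guided
    variant, new minimal samples are drawn preferentially from the inlier
    sets of the best models remembered so far."""
    rng = random.Random(seed)
    memory = []                              # (score, (a, b)), best first
    pool = list(range(len(pts)))
    for _ in range(n_iter):
        if memory and rng.random() < 0.7:
            # Exploit: resample from the inliers of a remembered model.
            _, (a, b) = memory[rng.randrange(len(memory))]
            cand = [i for i, (x, y) in enumerate(pts)
                    if abs(a * x + b - y) < thresh]
            idx = rng.sample(cand, 2) if len(cand) >= 2 else rng.sample(pool, 2)
        else:
            idx = rng.sample(pool, 2)        # Explore: uniform minimal sample.
        (x1, y1), (x2, y2) = pts[idx[0]], pts[idx[1]]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        score = sum(abs(a * x + b - y) < thresh for x, y in pts)  # inlier count
        memory.append((score, (a, b)))
        memory = sorted(memory, key=lambda t: -t[0])[:memory_size]
    return memory[0][1]
```

Biasing the minimal samples toward previously successful inlier sets is what lets the guided variant reach a consensus model in far fewer iterations than uniform sampling, while the occasional uniform draw preserves RANSAC's robustness to a bad early model.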
Gardner, Aimee K; Dunkin, Brian J
2018-05-01
As current screening methods for selecting surgical trainees are receiving increasing scrutiny, development of a more efficient and effective selection system is needed. We describe the process of creating an evidence-based selection system and examine its impact on screening efficiency, faculty perceptions, and improving representation of underrepresented minorities. The program partnered with an expert in organizational science to identify fellowship position requirements and associated competencies. Situational judgment tests, personality profiles, structured interviews, and technical skills assessments were used to measure these competencies. The situational judgment test and personality profiles were administered online and used to identify candidates to invite for on-site structured interviews and skills testing. A final rank list was created based on all data points and their respective importance. All faculty completed follow-up surveys regarding their perceptions of the process. Candidate demographic and experience data were pulled from the application website. Fifty-five of 72 applicants met eligibility requirements and were invited to take the online assessment, with 50 (91%) completing it. Average time to complete was 42 ± 12 minutes. Eighteen applicants (35%) were invited for on-site structured interviews and skills testing, a greater than 50% reduction in the number of invites compared to prior years. Time estimates reveal that the process will result in a time savings of 68% for future iterations, compared to traditional methodologies. Fellowship faculty (N = 5) agreed on the value and efficiency of the process. Underrepresented minority candidates increased from an initial 70% to 92% being invited for an interview and ranked using the new screening tools. Applying selection science to the process of choosing surgical trainees is feasible, efficient, and well-received by faculty for making selection decisions.
NASA Astrophysics Data System (ADS)
Burky, A.; Irving, J. C. E.; Simons, F.
2017-12-01
The Bermuda Rise is an enigmatic intraplate bathymetric feature which is considered a candidate hotspot in some catalogs, but remains a poor candidate due to the lack of an associated seamount chain and the absence of any present-day volcanism. Tomographic models of the seismic P and S wave velocity structure in the upper mantle and transition zone beneath Bermuda and the surrounding seafloor consistently resolve low velocity structures, but the magnitude, lateral dimensions, and position of these low velocity structures vary considerably between models. Due to these discrepancies, it remains difficult to attribute the observed velocity anomalies to thermal or chemical heterogeneity in this region. In addition to tomographic modeling, previous studies investigated the mantle transition zone structure beneath Bermuda by calculating receiver functions for GSN station BBSR, and suggested thinning of the transition zone as well as depressed discontinuity topography. In this study, we expand upon those studies by including the wealth of newly available data, and by incorporating a suite of three-dimensional velocity models. We calculate radial receiver functions in multiple frequency bands for the highest quality seismograms selected from over 5,000 waveforms recorded at station BBSR between October 2008 and August 2017 using the iterative deconvolution technique. We use various one- and three-dimensional velocity models to depth-convert our receiver functions to find the depths of the mantle transition zone discontinuities responsible for the signals in our receiver functions. The observed discontinuity topography is interpreted in the context of candidate mineralogical phase transitions and mantle temperature. To gain a more comprehensive understanding of our observations, we also calculate synthetic seismograms using AxiSEM, compute radial receiver functions for these synthetic data, and compare the results to the real receiver functions. 
Lastly, we discuss our results in the context of the geologic and geodynamic history of the Bermuda Rise.
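The iterative deconvolution step used to compute receiver functions can be sketched in a simplified time-domain form (spike-train recovery by repeated cross-correlation and subtraction, in the spirit of Ligorría and Ammon's method). Real seismogram handling, Gaussian low-pass filtering and convergence criteria are omitted; the signals below are short toy sequences.

```python
def conv(a, b):
    # Full discrete convolution of two sequences.
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def iterdecon(d, s, n_spikes=5):
    """Simplified iterative time-domain deconvolution: build a spike train g
    such that conv(g, s) approximates the data d, one spike per iteration."""
    g = [0.0] * (len(d) - len(s) + 1)
    e0 = sum(x * x for x in s)          # zero-lag autocorrelation of source
    resid = list(d)
    for _ in range(n_spikes):
        # Cross-correlate the residual with the source at each lag and
        # place a spike where the (absolute) correlation peaks.
        best_lag, best_val = 0, 0.0
        for lag in range(len(g)):
            v = sum(resid[lag + j] * s[j] for j in range(len(s)))
            if abs(v) > abs(best_val):
                best_lag, best_val = lag, v
        amp = best_val / e0
        g[best_lag] += amp
        # Subtract this spike's contribution from the residual.
        for j in range(len(s)):
            resid[best_lag + j] -= amp * s[j]
    return g
```

For noise-free data the known spike train is recovered exactly after two iterations; the remaining iterations add nothing.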
A One-Piece Lunar Regolith-Bag Garage Prototype
NASA Technical Reports Server (NTRS)
Smithers, Gweneth A.; Nehls, Mary K.; Hovater, Mary A.; Evans, Steven W.; Miller, J. Scott; Broughton, Roy M., Jr.; Beale, David; Killinc-Balci, Fatma
2006-01-01
Shelter structures on the moon, even in early phases of exploration, should incorporate lunar materials as much as possible. We designed and constructed a prototype for a one-piece regolith-bag unpressurized garage concept, and, in parallel, we conducted a materials testing program to investigate six candidate fabrics to learn how they might perform in the lunar environment. In our concept, a lightweight fabric form is launched from Earth to be landed on the lunar surface and robotically filled with raw lunar regolith. In the materials testing program, regolith-bag fabric candidates included: VectranTM, NextelTM, Gore PTFE FabricTM, ZylonTM, TwaronTM, and NomexTM. Tensile (including post radiation exposure), fold, abrasion, and hypervelocity impact testing were performed under ambient conditions, and, within our current means, we also performed these tests under cold and elevated temperatures. In some cases, lunar simulant (JSC-1) was used in conjunction with testing. Our ambition is to continuously refine our testing to reach lunar environmental conditions to the extent possible. A series of preliminary structures were constructed during design of the final prototype. Design is based on the principles of the classic masonry arch. The prototype was constructed of KevlarTM and filled with vermiculite (fairly close to the weight of lunar regolith on the moon). The structure is free-standing, but has not yet been load tested. Our plan for the future would be to construct higher fidelity mockups with each iteration, and to conduct appropriate tests of the structure.
A One-Piece Lunar Regolith-Bag Garage Prototype
NASA Technical Reports Server (NTRS)
Smithers, Gweneth A.; Nehls, Mary K.; Hovater, Mary A.; Evans, Steven W.; Miller, J. Scott; Broughton, Roy M.; Beale, David; Killing-Balci, Fatma
2007-01-01
Shelter structures on the moon, even in early phases of exploration, should incorporate lunar materials as much as possible. We designed and constructed a prototype for a one-piece regolith-bag unpressurized garage concept, and, in parallel, we conducted a materials testing program to investigate six candidate fabrics to learn how they might perform in the lunar environment. In our concept, a lightweight fabric form is launched from Earth to be landed on the lunar surface and robotically filled with raw lunar regolith. In the materials testing program, regolith-bag fabric candidates included: Vectran(TM), Nextel(TM), Gore PTFE Fabric(TM), Zylon(TM), Twaron(TM), and Nomex(TM). Tensile (including post radiation exposure), fold, abrasion, and hypervelocity impact testing were performed under ambient conditions, and, within our current means, we also performed these tests under cold and elevated temperatures. In some cases, lunar simulant (JSC-1) was used in conjunction with testing. Our ambition is to continuously refine our testing to reach lunar environmental conditions to the extent possible. A series of preliminary structures were constructed during design of the final prototype. Design is based on the principles of the classic masonry arch. The prototype was constructed of Kevlar(TM) and filled with vermiculite (fairly close to the weight of lunar regolith on the moon). The structure is free-standing, but has not yet been load tested. Our plan for the future would be to construct higher fidelity mockups with each iteration, and to conduct appropriate tests of the structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.R. Hudson; D.A. Monticello; A.H. Reiman
For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands responsible for breaking the smooth topology of the flux surfaces are guaranteed to exist. Thus, the suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Pfirsch-Schluter currents, diamagnetic currents, and resonant coil fields contribute to the formation of magnetic islands, and the challenge is to design the plasma and coils such that these effects cancel. Magnetic islands in free-boundary high-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver [Reiman and Greenside, Comp. Phys. Comm. 43 (1986) 157] which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. The changes are constrained to preserve certain measures of engineering acceptability and to preserve the stability of ideal kink modes. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible, the plasma is stable to ideal kink modes, and the coils satisfy engineering constraints. The method is applied to a candidate plasma and coil design for the National Compact Stellarator Experiment [Reiman, et al., Phys. Plasmas 8 (May 2001) 2083].
NASA Astrophysics Data System (ADS)
Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.; Ku, L.-P.; Lazarus, E.; Brooks, A.; Zarnstorff, M. C.; Boozer, A. H.; Fu, G.-Y.; Neilson, G. H.
2003-10-01
For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands responsible for breaking the smooth topology of the flux surfaces are guaranteed to exist. Thus, the suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Pfirsch-Schlüter currents, diamagnetic currents and resonant coil fields contribute to the formation of magnetic islands, and the challenge is to design the plasma and coils such that these effects cancel. Magnetic islands in free-boundary high-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver (Reiman and Greenside 1986 Comput. Phys. Commun. 43 157) which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. The changes are constrained to preserve certain measures of engineering acceptability and to preserve the stability of ideal kink modes. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible, the plasma is stable to ideal kink modes, and the coils satisfy engineering constraints. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment (Reiman et al 2001 Phys. Plasma 8 2083).
Change Detection in High-Resolution Remote Sensing Images Using Levene-Test and Fuzzy Evaluation
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Liu, H. J.
2018-04-01
High-resolution remote sensing images possess complex spatial structure and rich texture information; accordingly, this paper presents a new change detection method based on the Levene-Test and fuzzy evaluation. Map-spots are first obtained by segmenting two overlapping, pre-processed images, and features such as spectrum and texture are extracted. Then, the change information of the map-spots screened by the Levene-Test is counted to obtain the candidate changed regions; hue information (the H component) is extracted through the IHS transform, and change vector analysis is conducted in combination with the texture information. Eventually, the threshold is confirmed by an iteration method, the membership degrees of the candidate changed regions are calculated, and the final changed regions are determined. Experimental results on multi-temporal ZY-3 high-resolution images of an area in Jiangsu Province show that, by extracting map-spots of larger difference as the candidate changed regions, the Levene-Test decreases the computing load, improves the precision of change detection, and shows better fault tolerance for unchanged regions with relatively large differences. The combination of hue-texture features and the fuzzy evaluation method can effectively decrease omissions and deficiencies, and improve the precision of change detection.
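The Levene-Test screening step can be sketched as follows, assuming each map-spot contributes a sample of pixel values from each acquisition date. The statistic is computed in pure Python (mean-centred variant); the paper's actual feature set, segmentation and critical value are not reproduced, so the threshold below is an illustrative assumption.

```python
def levene_w(a, b):
    """Levene's test statistic (two groups, mean-centred variant) for
    equality of variances: a large W means the spread of the two samples
    differs, flagging the map-spot as potentially changed."""
    def absdev(s):
        m = sum(s) / len(s)
        return [abs(x - m) for x in s]
    za, zb = absdev(a), absdev(b)
    na, nb = len(za), len(zb)
    ma, mb = sum(za) / na, sum(zb) / nb
    mz = (sum(za) + sum(zb)) / (na + nb)
    between = na * (ma - mz) ** 2 + nb * (mb - mz) ** 2
    within = (sum((x - ma) ** 2 for x in za)
              + sum((x - mb) ** 2 for x in zb))
    if within == 0:
        return 0.0      # both samples have identical spread (e.g. constant)
    return (na + nb - 2) * between / within

def is_candidate(spot_t1, spot_t2, w_crit=4.0):
    # A map-spot becomes a candidate changed region when W exceeds a
    # critical value (w_crit is an illustrative threshold, not the paper's).
    return levene_w(spot_t1, spot_t2) > w_crit
```

A spot whose two-date samples have very different spread scores high, while identical samples score zero.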
Postirradiation thermocyclic loading of ferritic-martensitic structural materials
NASA Astrophysics Data System (ADS)
Belyaeva, L.; Orychtchenko, A.; Petersen, C.; Rybin, V.
Thermonuclear fusion reactors of the Tokamak-type will be unique power engineering plants to operate in thermocyclic mode only. Ferritic-martensitic stainless steels are prime candidate structural materials for test blankets of the ITER fusion reactor. Beyond the radiation damage, thermomechanical cyclic loading is considered as the most detrimental lifetime limiting phenomenon for the above structure. With a Russian and a German facility for thermal fatigue testing of neutron irradiated materials a cooperation has been undertaken. Ampule devices to irradiate specimens for postirradiation thermal fatigue tests have been developed by the Russian partner. The irradiation of these ampule devices loaded with specimens of ferritic-martensitic steels, like the European MANET-II, the Russian 05K12N2M and the Japanese Low Activation Material F82H-mod, in a WWR-M-type reactor just started. A description of the irradiation facility, the qualification of the ampule device and the modification of the German thermal fatigue facility will be presented.
Optimization of a Lunar Pallet Lander Reinforcement Structure Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Burt, Adam O.; Hull, Patrick V.
2014-01-01
This paper presents a design automation process using optimization via a genetic algorithm to design the conceptual structure of a Lunar Pallet Lander. The goal is to determine a design that will have the primary natural frequencies at or above a target value as well as minimize the total mass. Several iterations of the process are presented. First, a concept optimization is performed to determine what class of structure would produce suitable candidate designs. From this a stiffened sheet metal approach was selected leading to optimization of beam placement through generating a two-dimensional mesh and varying the physical location of reinforcing beams. Finally, the design space is reformulated as a binary problem using 1-dimensional beam elements to truncate the design space to allow faster convergence and additional mechanical failure criteria to be included in the optimization responses. Results are presented for each design space configuration. The final flight design was derived from these results.
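The binary beam-placement formulation can be sketched with a small genetic algorithm. The structural model below is a toy surrogate (each active beam adds fixed stiffness and mass, and the primary frequency scales as the square root of stiffness over mass); the real optimization evaluated finite-element natural frequencies, so all numbers here are invented for illustration.

```python
import random

def fitness(genome, target_freq=5.0):
    """Toy surrogate: each active beam adds stiffness 4.0 and mass 1.0 to a
    base structure. Designs whose primary frequency falls below the target
    are heavily penalized; otherwise total mass is minimized."""
    k = 10.0 + 4.0 * sum(genome)
    m = 10.0 + 1.0 * sum(genome)
    freq = (k / m) ** 0.5 * 4.0
    penalty = 1000.0 * max(0.0, target_freq - freq)
    return m + penalty

def ga(n_genes=12, pop_size=30, gens=60, seed=1):
    """Elitist genetic algorithm over binary genomes: each gene says whether
    a reinforcing beam is placed at that candidate location."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_genes)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # bit-flip mutation
                child[rng.randrange(n_genes)] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

In this surrogate, three beams are the minimum that meets the 5.0 frequency target, so a good run converges to a light, feasible design.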
AMICO: optimized detection of galaxy clusters in photometric surveys
NASA Astrophysics Data System (ADS)
Bellagamba, Fabio; Roncarelli, Mauro; Maturi, Matteo; Moscardini, Lauro
2018-02-01
We present Adaptive Matched Identifier of Clustered Objects (AMICO), a new algorithm for the detection of galaxy clusters in photometric surveys. AMICO is based on the Optimal Filtering technique, which maximizes the signal-to-noise ratio (S/N) of the clusters. In this work, we focus on the new iterative approach to the extraction of cluster candidates from the map produced by the filter. In particular, we provide a definition of membership probability for the galaxies close to any cluster candidate, which allows us to remove its imprint from the map and thus enables the detection of smaller structures. As demonstrated in our tests, this method allows the deblending of close-by and aligned structures in more than 50 per cent of the cases for objects at a radial distance equal to 0.5 × R200 or a redshift distance equal to 2 × σz, where σz is the typical uncertainty of the photometric redshifts. Running AMICO on mocks derived from N-body simulations and semi-analytical modelling of the galaxy evolution, we obtain a consistent mass-amplitude relation through the redshift range 0.3 < z < 1, with a logarithmic slope of ∼0.55 and a logarithmic scatter of ∼0.14. The fraction of false detections decreases steeply with S/N and is negligible at S/N > 5.
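The iterative extraction-and-cleaning loop can be sketched in one dimension. The real AMICO operates on a 3-D amplitude map with redshift-dependent filters and removes a membership-probability-weighted imprint; in this simplification the imprint is just the filter kernel scaled by the peak amplitude, and the S/N threshold and noise level are illustrative.

```python
def extract_candidates(amp, kernel, snr_min=5.0, noise=1.0):
    """Iterative peak extraction from a filtered amplitude map: take the
    highest peak, record it as a candidate, subtract its kernel-shaped
    imprint, and repeat until no peak exceeds the S/N threshold."""
    amp = list(amp)
    half = len(kernel) // 2
    detections = []
    while True:
        i = max(range(len(amp)), key=lambda j: amp[j])
        if amp[i] / noise < snr_min:
            break                        # nothing significant remains
        peak = amp[i]
        detections.append((i, peak))
        # Remove the candidate's imprint so fainter, blended neighbours
        # can emerge in later iterations.
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(amp):
                amp[j] -= peak * w
    return detections
```

With two overlapping bumps, the loop first finds the stronger peak, and only after removing its imprint does the fainter blended neighbour appear.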
Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram
2014-01-10
Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs' structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-Nearest Neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used by us in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. © 2013.
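The kNN component of the screening can be sketched as follows. The descriptor vectors and labels below are invented for illustration; the actual models used curated molecular descriptors and experimental conditions, and combined kNN with the Iterative Stochastic Elimination approach, which is not reproduced here.

```python
def knn_predict(train, query, k=3):
    """k-nearest-neighbours vote over labelled descriptor vectors:
    train is a list of (features, label) pairs; the query is classified
    by majority vote of its k closest training molecules (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)
```

A virtual screen then amounts to running every database compound's descriptor vector through the trained classifier and keeping those predicted as high remote-loading efficiency.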
Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram
2014-01-01
Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs’ structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147–157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-nearest neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. PMID:24184343
Design of the DEMO Fusion Reactor Following ITER.
Garabedian, Paul R; McFadden, Geoffrey B
2009-01-01
Runs of the NSTAB nonlinear stability code show there are many three-dimensional (3D) solutions of the advanced tokamak problem subject to axially symmetric boundary conditions. These numerical simulations based on mathematical equations in conservation form predict that the ITER international tokamak project will encounter persistent disruptions and edge localized mode (ELM) crashes. Test particle runs of the TRAN transport code suggest that for quasineutrality to prevail in tokamaks a certain minimum level of 3D asymmetry of the magnetic spectrum is required which is comparable to that found in quasiaxially symmetric (QAS) stellarators. The computational theory suggests that a QAS stellarator with two field periods and proportions like those of ITER is a good candidate for a fusion reactor. For a demonstration reactor (DEMO) we seek an experiment that combines the best features of ITER, with a system of QAS coils providing external rotational transform, which is a measure of the poloidal field. We have discovered a configuration with unusually good quasisymmetry that is ideal for this task.
Design of the DEMO Fusion Reactor Following ITER
Garabedian, Paul R.; McFadden, Geoffrey B.
2009-01-01
Runs of the NSTAB nonlinear stability code show there are many three-dimensional (3D) solutions of the advanced tokamak problem subject to axially symmetric boundary conditions. These numerical simulations based on mathematical equations in conservation form predict that the ITER international tokamak project will encounter persistent disruptions and edge localized mode (ELM) crashes. Test particle runs of the TRAN transport code suggest that for quasineutrality to prevail in tokamaks a certain minimum level of 3D asymmetry of the magnetic spectrum is required which is comparable to that found in quasiaxially symmetric (QAS) stellarators. The computational theory suggests that a QAS stellarator with two field periods and proportions like those of ITER is a good candidate for a fusion reactor. For a demonstration reactor (DEMO) we seek an experiment that combines the best features of ITER, with a system of QAS coils providing external rotational transform, which is a measure of the poloidal field. We have discovered a configuration with unusually good quasisymmetry that is ideal for this task. PMID:27504224
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.
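The three listed features map naturally onto a Chase-style decoder, which the sketch below implements for the tiny (7,4) Hamming code rather than a long code with a low-weight sub-trellis search (the trellis machinery is replaced by syndrome hard decoding, an assumption of this sketch). Candidates are generated one at a time by flipping subsets of the least reliable positions, and a standard sufficient optimality condition stops the search: any competing codeword must disagree with the hard decision in at least d_min − |n(c)| positions outside n(c), the disagreement set of the current best candidate.

```python
import itertools

# (7,4) Hamming code, d_min = 3: systematic generator and parity-check.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
DMIN = 3

def hard_decode(y):
    # Syndrome decoding: flip the single position whose H-column matches.
    s = tuple(sum(h * b for h, b in zip(row, y)) % 2 for row in H)
    if any(s):
        y = y[:]
        y[list(zip(*H)).index(s)] ^= 1
    return y

def chase_decode(r, t=3):
    """Generate candidate codewords one at a time (test patterns on the t
    least reliable positions, re-decoded by the hard decoder) and stop as
    soon as the running best passes a sufficient optimality test."""
    hard = [1 if x < 0 else 0 for x in r]     # BPSK: negative sample -> 1
    rel = [abs(x) for x in r]                 # per-position reliability
    weak = sorted(range(7), key=lambda i: rel[i])[:t]
    best, best_d, best_n = None, float("inf"), []
    for w in range(t + 1):
        for flips in itertools.combinations(weak, w):
            y = hard[:]
            for i in flips:
                y[i] ^= 1
            c = hard_decode(y)
            n = [i for i in range(7) if c[i] != hard[i]]
            d = sum(rel[i] for i in n)        # discrepancy of candidate
            if d < best_d:
                best, best_d, best_n = c, d, n
            # Sufficient condition: any other codeword disagrees with the
            # hard decision on >= DMIN - |n| positions outside n, so its
            # discrepancy is at least the sum of the smallest such
            # reliabilities; if best_d is no larger, best is ML.
            outside = sorted(rel[i] for i in range(7) if i not in best_n)
            if best_d <= sum(outside[:max(0, DMIN - len(best_n))]):
                return best
    return best
```

For a received word with one weak sign error, the very first candidate is certified as most likely and the search terminates immediately.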
SU-E-T-446: Group-Sparsity Based Angle Generation Method for Beam Angle Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, H
2015-06-15
Purpose: This work is to develop an effective algorithm for beam angle optimization (BAO), with the emphasis on enabling further improvement from existing treatment-dependent templates based on clinical knowledge and experience. Methods: The proposed BAO algorithm utilizes a priori beam angle templates as the initial guess and iteratively generates angular updates for this initial set (the angle generation method), with improved dose conformality as quantitatively measured by the objective function. During each iteration, we select “the test angle” in the initial set and use group-sparsity based fluence map optimization to identify “the candidate angle” for updating “the test angle”: all angles in the initial set except “the test angle”, namely “the fixed set”, are set free, i.e., with no group-sparsity penalty, while the rest of the angles, including “the test angle”, form “the working set”. “The candidate angle” is then selected as the angle in “the working set” with locally maximal group sparsity and the smallest objective function value, and it replaces “the test angle” if “the fixed set” with “the candidate angle” yields a smaller objective function value when solving the standard fluence map optimization (with no group-sparsity regularization). The other angles in the initial set are in turn selected as “the test angle” for angular updates, and this chain of updates is iterated until a full loop identifies no further angular update. Results: Tests using the MGH public prostate dataset demonstrated the effectiveness of the proposed BAO algorithm; for example, the optimized angular set was better than the MGH template. Conclusion: A new BAO algorithm is proposed based on the angle generation method via group sparsity, with improved dose conformality from the given template.
Hao Gao was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
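The group-sparsity machinery behind the candidate-angle selection can be sketched with a proximal-gradient solver: fluence elements of one candidate angle form a group, and the group-lasso penalty drives entire angles to zero, revealing which angles carry useful fluence. The dose matrix, weights and groups below are toy values, and the full chain of test/candidate-angle updates from the abstract is not reproduced.

```python
def prox_group(x, groups, step_lam):
    """Block soft-thresholding: proximal operator of lam * sum_g ||x_g||_2,
    which can zero out a whole beam-angle group (all of its fluence)."""
    out = list(x)
    for g in groups:
        norm = sum(x[i] ** 2 for i in g) ** 0.5
        scale = max(0.0, 1.0 - step_lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * x[i]
    return out

def group_sparse_fmo(A, d, groups, lam=0.5, step=0.05, iters=500):
    """Proximal-gradient sketch of group-sparse fluence map optimization:
    minimize 0.5*||A x - d||^2 + lam * sum_g ||x_g||_2 over x >= 0, where
    each group collects the fluence elements (bixels) of one angle."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(a * xi for a, xi in zip(row, x)) - di
             for row, di in zip(A, d)]
        grad = [sum(A[j][i] * r[j] for j in range(len(A)))
                for i in range(n)]
        x = prox_group([xi - step * g for xi, g in zip(x, grad)],
                       groups, step * lam)
        x = [max(0.0, xi) for xi in x]   # fluence must be non-negative
    return x
```

In the test, the first angle's columns explain the prescribed dose while the second angle's columns are nearly useless, so the penalty suppresses the second group entirely.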
Cuevas, Erik; Díaz, Margarita
2015-01-01
In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random, as is the case in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness. PMID:26339228
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Hixon, Duane; Sankar, L. N.
1993-01-01
During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist such as the transonic small disturbance analyses (TSD), transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculation; from preliminary calculations, this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architecture of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
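The GMRES-accelerated Newton idea can be sketched in matrix-free form: each Newton step solves J dx = −F(x) with GMRES, approximating Jacobian-vector products by finite differences. This is a minimal, non-restarted, unpreconditioned GMRES on a tiny algebraic system, not the authors' flow solver; the finite-difference step and tolerances are illustrative.

```python
def gmres(matvec, b, tol=1e-10, max_iter=None):
    """Minimal full-memory GMRES (Arnoldi + Givens rotations); no restarts,
    no preconditioning -- a sketch, not production code."""
    n = len(b)
    m = max_iter or n
    beta = sum(v * v for v in b) ** 0.5
    if beta == 0.0:
        return [0.0] * n
    Q = [[v / beta for v in b]]
    Hm = [[0.0] * m for _ in range(m + 1)]
    cs, sn = [0.0] * m, [0.0] * m
    g = [beta] + [0.0] * m
    k = m
    for j in range(m):
        w = matvec(Q[j])
        for i in range(j + 1):                 # Arnoldi orthogonalization
            Hm[i][j] = sum(a * c for a, c in zip(w, Q[i]))
            w = [a - Hm[i][j] * c for a, c in zip(w, Q[i])]
        Hm[j + 1][j] = sum(v * v for v in w) ** 0.5
        if Hm[j + 1][j] > 1e-14:
            Q.append([v / Hm[j + 1][j] for v in w])
        for i in range(j):                     # apply previous rotations
            t = cs[i] * Hm[i][j] + sn[i] * Hm[i + 1][j]
            Hm[i + 1][j] = -sn[i] * Hm[i][j] + cs[i] * Hm[i + 1][j]
            Hm[i][j] = t
        d = (Hm[j][j] ** 2 + Hm[j + 1][j] ** 2) ** 0.5
        cs[j], sn[j] = Hm[j][j] / d, Hm[j + 1][j] / d
        Hm[j][j], Hm[j + 1][j] = d, 0.0
        g[j + 1], g[j] = -sn[j] * g[j], cs[j] * g[j]
        if abs(g[j + 1]) < tol:                # residual norm estimate
            k = j + 1
            break
    y = [0.0] * k                              # back substitution
    for i in range(k - 1, -1, -1):
        y[i] = (g[i] - sum(Hm[i][l] * y[l]
                           for l in range(i + 1, k))) / Hm[i][i]
    return [sum(Q[i][c] * y[i] for i in range(k)) for c in range(n)]

def newton_gmres(F, x, tol=1e-8, max_newton=30, eps=1e-7):
    """Newton iteration whose linear solves use GMRES with matrix-free
    finite-difference Jacobian-vector products (Jacobian never formed)."""
    for _ in range(max_newton):
        fx = F(x)
        if max(abs(v) for v in fx) < tol:
            break
        def jv(v, x=x, fx=fx):
            xp = [a + eps * c for a, c in zip(x, v)]
            return [(a - b) / eps for a, b in zip(F(xp), fx)]
        dx = gmres(jv, [-v for v in fx])
        x = [a + c for a, c in zip(x, dx)]
    return x
```

On the small nonlinear system F(x) = [x0 + x1 − 3, x0·x1 − 2] with start (0.5, 2.5), the iteration converges to the root (1, 2).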
2016-01-01
Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538
Khattak, Naureen Aslam; Mir, Asif
2014-01-01
Mental retardation (MR)/intellectual disability (ID) is a neuro-developmental disorder characterized by a low intellectual quotient (IQ) and deficits in adaptive behavior related to everyday life tasks, such as delayed language acquisition, social skills or self-help skills, with onset before age 18. To date, a few genes (PRSS12, CRBN, CC2D1A, GRIK2, TUSC3, TRAPPC9, TECR, ST3GAL3, MED23, MAN1B1, NSUN1) for autosomal-recessive forms of non-syndromic MR (NS-ARMR) have been identified and established in various families with ID. The recently reported candidate gene TRAPPC9 was selected for computational analysis to explore its potentially important role in pathology, as it is the only gene for ID reported in more than five different familial cases worldwide. YASARA (12.4.1) was utilized to generate three-dimensional structures of the candidate gene TRAPPC9. Hybrid structure prediction was employed. The crystal structure of a conserved metalloprotein from Bacillus cereus (3D19-C) was selected as the most suitable template using position-specific iterated BLAST (PSI-BLAST). Template (3D19-C) parameters were based on an E-value, Z-score, resolution and quality score of 0.32, -1.152, 2.30 Å and 0.684, respectively. Model reliability showed 93.1% of residues placed in the most favored region, with a quality factor of 96.684 and an overall G-factor of 0.20 (dihedrals 0.06 and covalent 0.39, respectively). Protein-protein docking analysis demonstrated that TRAPPC9 shows strong interactions of the amino acid residues S(253), S(251), Y(256), G(243) and D(131) with R(105), Q(425), W(226), N(255) and S(233) of its functional partner IKBKB. Protein-protein interacting residues could facilitate the exploration of structural and functional outcomes of the wild-type and mutated TRAPPC9 protein. Actively involved residues can be used to elucidate the binding properties of the protein, and to develop drug therapy for NS-ARMR patients.
Electromagnetic scattering of large structures in layered earths using integral equations
NASA Astrophysics Data System (ADS)
Xiong, Zonghou; Tripp, Alan C.
1995-07-01
An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed, based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory. However, this requires a large disk for large structures. If the body is discretized into equal-size cells it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to order O(N^2), instead of O(N^3) as with direct solvers.
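The symmetry-reduction idea, regenerating matrix entries on the fly rather than storing them, can be sketched in one dimension: for equal-size cells a translation-invariant Green's function gives A[i][j] = kernel[|i − j|], so a Gauss-Seidel sweep (a point-wise stand-in for the paper's block system iteration) never needs the full N × N matrix. The kernel values below are invented for a diagonally dominant test case.

```python
def gs_symmetry(kernel, rhs, sweeps=200):
    """Gauss-Seidel sweeps over a translation-invariant system: matrix
    entries A[i][j] = kernel[|i - j|] are regenerated from the 1-D kernel
    on the fly instead of storing (or paging from disk) the full matrix."""
    n = len(rhs)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            # Row i is rebuilt from the kernel; only O(N) storage is used.
            s = sum(kernel[abs(i - j)] * x[j] for j in range(n) if j != i)
            x[i] = (rhs[i] - s) / kernel[0]
    return x
```

With a diagonally dominant kernel the sweep converges to the exact solution, here checked against a right-hand side constructed from a known x.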
Saver, Jeffrey L.; Warach, Steven; Janis, Scott; Odenkirchen, Joanne; Becker, Kyra; Benavente, Oscar; Broderick, Joseph; Dromerick, Alexander W.; Duncan, Pamela; Elkind, Mitchell S. V.; Johnston, Karen; Kidwell, Chelsea S.; Meschia, James F.; Schwamm, Lee
2012-01-01
Background and Purpose The National Institute of Neurological Disorders and Stroke initiated development of stroke-specific Common Data Elements (CDEs) as part of a project to develop data standards for funded clinical research in all fields of neuroscience. Standardizing data elements in translational, clinical and population research in cerebrovascular disease could decrease study start-up time, facilitate data sharing, and promote well-informed clinical practice guidelines. Methods A Working Group of diverse experts in cerebrovascular clinical trials, epidemiology, and biostatistics met regularly to develop a set of Stroke CDEs, selecting among, refining, and adding to existing, field-tested data elements from national registries and funded trials and studies. Candidate elements were revised based on comments from leading national and international neurovascular research organizations and the public. Results The first iteration of the NINDS stroke-specific CDEs comprises 980 data elements spanning nine content areas: 1) Biospecimens and Biomarkers; 2) Hospital Course and Acute Therapies; 3) Imaging; 4) Laboratory Tests and Vital Signs; 5) Long Term Therapies; 6) Medical History and Prior Health Status; 7) Outcomes and Endpoints; 8) Stroke Presentation; 9) Stroke Types and Subtypes. A CDE website provides uniform names and structures for each element, a data dictionary, and template case report forms (CRFs) using the CDEs. Conclusion Stroke-specific CDEs are now available as standardized, scientifically-vetted variable structures to facilitate data collection and data sharing in cerebrovascular patient-oriented research. The CDEs are an evolving resource that will be iteratively improved based on investigator use, new technologies, and emerging concepts and research findings. PMID:22308239
NASA Astrophysics Data System (ADS)
Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.
2003-06-01
For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands are guaranteed to exist. Magnetic islands break the smooth topology of nested flux surfaces, and chaotic field lines result when magnetic islands overlap. An analogous case occurs with 1½-dimensional Hamiltonian systems, where resonant perturbations cause singularities in the transformation to action-angle coordinates and destroy integrability. The suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Techniques for `healing' vacuum fields and fixed-boundary plasma equilibria have been developed, but what is ultimately required is a procedure for designing stellarators such that the self-consistent plasma equilibrium currents and the coil currents combine to produce an integrable magnetic field, and such a procedure is presented here for the first time. Magnetic islands in free-boundary full-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver [A. H. Reiman and H. S. Greenside, Comput. Phys. Commun. 43, 157 (1986)], which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment [G. H. Neilson et al., Phys. Plasmas 7, 1911 (2000)].
Engineering and manufacturing of ITER first mirror mock-ups.
Joanny, M; Travère, J M; Salasca, S; Corre, Y; Marot, L; Thellier, C; Gallay, G; Cammarata, C; Passier, B; Fermé, J J
2010-10-01
Most of the ITER optical diagnostics aimed at viewing and monitoring plasma-facing components will use in-vessel metallic mirrors. These mirrors will be exposed to a severe plasma environment, which imposes important tradeoffs on their design and manufacturing. As a consequence, investigations are being carried out on diagnostic mirrors toward the development of optimal and reliable solutions. The goals are to assess the manufacturing feasibility of the mirror coatings, to evaluate the manufacturing capability and associated performance for mirror cooling and polishing, and finally to determine the costs and delivery times of the first prototypes, with diameters of 200 and 500 mm. Three kinds of ITER candidate mock-ups are being designed and manufactured: rhodium films on a stainless steel substrate, molybdenum on a TZM substrate, and silver films on a stainless steel substrate. The status of the project is presented in this paper.
DIII-D accomplishments and plans in support of fusion next steps
Buttery, R. J; Eidietis, N.; Holcomb, C.; ...
2013-06-01
DIII-D is using its flexibility and diagnostics to address the critical science required to enable next-step fusion devices. We have adapted operating scenarios for ITER to low torque and are now optimizing them for transport. Three ELM mitigation scenarios have been developed to near-ITER parameters. New control techniques are managing the most challenging plasma instabilities. Disruption mitigation tools show promising dissipation strategies for runaway electrons and heat load. An off-axis neutral beam upgrade has enabled sustainment of high-βN-capable steady-state regimes. Divertor research is identifying the challenges, physics and candidate solutions for handling the hot plasma exhaust, with notable progress in heat flux reduction using the snowflake configuration. Our work is helping to optimize design choices and prepare the scientific tools for operation in ITER, and to resolve key elements of the plasma configuration and divertor solution for an FNSF.
A novel Iterative algorithm to text segmentation for web born-digital images
NASA Astrophysics Data System (ADS)
Xu, Zhigang; Zhu, Yuesheng; Sun, Ziqiang; Liu, Zhen
2015-07-01
Since web born-digital images have low resolution and dense text atoms, text region over-merging and missed detection are still two open issues to be addressed. In this paper a novel iterative algorithm is proposed to locate and segment text regions. In each iteration, candidate text regions are generated by detecting Maximally Stable Extremal Regions (MSERs) with diminishing thresholds and categorized into different groups based on a new similarity graph, and the text region groups are identified by applying several features and rules. With our proposed overlap-checking method, the final well-segmented text regions are selected from these groups across all iterations. Experiments have been carried out on the web born-digital image datasets used for the robust reading competitions in ICDAR 2011 and 2013, and the results demonstrate that our proposed scheme can significantly reduce both the number of over-merged regions and the loss rate of target atoms; its overall performance exceeds that of the best methods reported in the two competitions in terms of recall rate and f-score, at the cost of slightly higher computational complexity.
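A loose sketch of the iterate-detect-select loop, substituting plain connected-component detection for MSER and a simplified overlap check (all names and thresholds here are illustrative, not the paper's):

```python
import numpy as np
from scipy import ndimage

def iterative_text_regions(gray, thresholds):
    """Collect candidate regions over a sequence of thresholds;
    an overlap check keeps the first region covering each area,
    loosely following an iterate-detect-select scheme (the MSER
    detection and similarity-graph grouping are omitted).
    """
    selected = []                               # accepted region masks
    covered = np.zeros(gray.shape, dtype=bool)  # pixels already explained
    for t in thresholds:
        binary = gray < t                       # dark text on light background
        labels, n = ndimage.label(binary)
        for k in range(1, n + 1):
            mask = labels == k
            # overlap check: skip regions mostly covered by earlier picks
            if (mask & covered).sum() / mask.sum() < 0.5:
                selected.append(mask)
                covered |= mask
    return selected
```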
Wei, Jianming; Zhang, Youan; Sun, Meimei; Geng, Baoliang
2017-09-01
This paper presents an adaptive iterative learning control scheme for a class of nonlinear systems with unknown time-varying delays and unknown control direction, preceded by unknown nonlinear backlash-like hysteresis. A boundary layer function is introduced to construct an auxiliary error variable, which relaxes the identical initial condition assumption of iterative learning control. For the controller design, an integral Lyapunov function candidate is used, which avoids a possible singularity problem by introducing the hyperbolic tangent function. After compensating for uncertainties with time-varying delays by combining an appropriate Lyapunov-Krasovskii functional with Young's inequality, an adaptive iterative learning control scheme is designed through a neural approximation technique and the Nussbaum function method. On the basis of the hyperbolic tangent function's characteristics, the system output is proved to converge to a small neighborhood of the desired trajectory by constructing a Lyapunov-like composite energy function (CEF) in two cases, while keeping all the closed-loop signals bounded. Finally, a simulation example is presented to verify the effectiveness of the proposed approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
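The basic learning update underlying such schemes, stripped of the adaptive, delay and hysteresis machinery, is the P-type law u_{k+1}(t) = u_k(t) + gamma * e_k(t). A minimal sketch with a hypothetical static plant (not the paper's system):

```python
import numpy as np

def ilc_ptype(plant, y_ref, gamma, n_trials=30):
    """Minimal P-type iterative learning control sketch.

    Each trial replays the whole input trajectory, then corrects it
    with the tracking error: u_{k+1}(t) = u_k(t) + gamma * e_k(t).
    Only the bare learning update is shown; the paper's adaptive
    scheme (delays, hysteresis, Nussbaum gains) is omitted.
    """
    u = np.zeros_like(y_ref)
    for _ in range(n_trials):
        y = plant(u)          # run one trial
        e = y_ref - y         # tracking error over the trial
        u = u + gamma * e     # learn from the error
    return u, e

# hypothetical static plant: y(t) = 2*u(t) + 0.1*u(t)**3
plant = lambda u: 2.0 * u + 0.1 * u**3
```

Convergence requires the learning gain to contract the error from trial to trial; here |1 - gamma * plant_gain| < 1 holds for gamma = 0.3.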
Zheng, Jingjing; Frisch, Michael J
2017-12-12
An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
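Iterative Hessian updating of this general kind is commonly done with secant updates such as BFGS; a minimal sketch of one such update (a standard textbook formula, not the paper's specific scheme):

```python
import numpy as np

def bfgs_update(H, s, y):
    """One BFGS update of an approximate Hessian H.

    s = x_new - x_old (geometry step), y = g_new - g_old (gradient
    change). Repeated application refines the Hessian from gradient
    information alone; by construction the updated matrix satisfies
    the secant condition H_new @ s = y.
    """
    Hs = H @ s
    return (H
            + np.outer(y, y) / (y @ s)
            - np.outer(Hs, Hs) / (s @ Hs))
```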
Convergence of an iterative procedure for large-scale static analysis of structural components
NASA Technical Reports Server (NTRS)
Austin, F.; Ojalvo, I. U.
1976-01-01
The paper proves convergence of an iterative procedure for calculating the deflections of built-up component structures that can be represented as consisting of a dominant, relatively stiff primary structure and a less stiff secondary structure, which may be composed of one or more substructures that are not connected to one another but are all connected to the primary structure. The iteration begins by estimating the deformation of the primary structure in the absence of the secondary structure, on the assumption that all mechanical loads are applied directly to the primary structure. The j-th iterate of the primary-structure deflections at the interface is imposed on the secondary structure, and the boundary loads required to produce these deflections are computed. The cycle is completed by applying the interface reaction to the primary structure and computing its updated deflections. It is shown that the mathematical condition for convergence of this procedure is that the maximum eigenvalue of the equation relating primary-structure deflection to imposed secondary-structure deflection be less than unity, which corresponds to the physical requirement that the secondary structure be more flexible at the interface boundary.
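The convergence condition can be illustrated on a two-spring caricature (hypothetical scalar stiffnesses and load, not the paper's structural model): the iteration converges exactly when the secondary stiffness is smaller than the primary one at the interface.

```python
def primary_secondary_iteration(kp, ks, f, n_iter=200, tol=1e-12):
    """Fixed-point iteration for a two-spring caricature of the
    primary/secondary scheme: impose the primary deflection on the
    secondary spring, compute its reaction, re-load the primary.

    Converges iff ks/kp < 1, i.e. the secondary structure is more
    flexible at the interface, matching the eigenvalue condition.
    The exact answer is f / (kp + ks).
    """
    u = f / kp                       # primary alone carries all load
    for _ in range(n_iter):
        reaction = ks * u            # secondary pushed through u
        u_new = (f - reaction) / kp  # primary re-solved with reaction
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u
```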
Depression as a systemic syndrome: mapping the feedback loops of major depressive disorder.
Wittenborn, A K; Rahmandad, H; Rick, J; Hosseinichimeh, N
2016-02-01
Depression is a complex public health problem with considerable variation in treatment response. The systemic complexity of depression, or the feedback processes among diverse drivers of the disorder, contribute to the persistence of depression. This paper extends prior attempts to understand the complex causal feedback mechanisms that underlie depression by presenting the first broad boundary causal loop diagram of depression dynamics. We applied qualitative system dynamics methods to map the broad feedback mechanisms of depression. We used a structured approach to identify candidate causal mechanisms of depression in the literature. We assessed the strength of empirical support for each mechanism and prioritized those with support from validation studies. Through an iterative process, we synthesized the empirical literature and created a conceptual model of major depressive disorder. The literature review and synthesis resulted in the development of the first causal loop diagram of reinforcing feedback processes of depression. It proposes candidate drivers of illness, or inertial factors, and their temporal functioning, as well as the interactions among drivers of depression. The final causal loop diagram defines 13 key reinforcing feedback loops that involve nine candidate drivers of depression. Future research is needed to expand upon this initial model of depression dynamics. Quantitative extensions may result in a better understanding of the systemic syndrome of depression and contribute to personalized methods of evaluation, prevention and intervention.
Depression as a systemic syndrome: mapping the feedback loops of major depressive disorder
Wittenborn, A. K.; Rahmandad, H.; Rick, J.; Hosseinichimeh, N.
2016-01-01
Background Depression is a complex public health problem with considerable variation in treatment response. The systemic complexity of depression, or the feedback processes among diverse drivers of the disorder, contribute to the persistence of depression. This paper extends prior attempts to understand the complex causal feedback mechanisms that underlie depression by presenting the first broad boundary causal loop diagram of depression dynamics. Method We applied qualitative system dynamics methods to map the broad feedback mechanisms of depression. We used a structured approach to identify candidate causal mechanisms of depression in the literature. We assessed the strength of empirical support for each mechanism and prioritized those with support from validation studies. Through an iterative process, we synthesized the empirical literature and created a conceptual model of major depressive disorder. Results The literature review and synthesis resulted in the development of the first causal loop diagram of reinforcing feedback processes of depression. It proposes candidate drivers of illness, or inertial factors, and their temporal functioning, as well as the interactions among drivers of depression. The final causal loop diagram defines 13 key reinforcing feedback loops that involve nine candidate drivers of depression. Conclusions Future research is needed to expand upon this initial model of depression dynamics. Quantitative extensions may result in a better understanding of the systemic syndrome of depression and contribute to personalized methods of evaluation, prevention and intervention. PMID:26621339
NASA Astrophysics Data System (ADS)
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
CORSICA modelling of ITER hybrid operation scenarios
NASA Astrophysics Data System (ADS)
Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.
2016-12-01
The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magneto-hydro-dynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and the achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.
Hybrid propulsion technology program: Phase 1. Volume 3: Thiokol Corporation Space Operations
NASA Technical Reports Server (NTRS)
Schuler, A. L.; Wiley, D. R.
1989-01-01
Three candidate hybrid propulsion (HP) concepts were identified, optimized, evaluated, and refined through an iterative process that continually forced improvement to the systems with respect to safety, reliability, cost, and performance criteria. A full scale booster meeting Advanced Solid Rocket Motor (ASRM) thrust-time constraints and a booster application for 1/4 ASRM thrust were evaluated. Trade studies and analyses were performed for each of the motor elements related to SRM technology. Based on trade study results, the optimum HP concept for both full and quarter sized systems was defined. The three candidate hybrid concepts evaluated are illustrated.
Flexible all-carbon photovoltaics with improved thermal stability
NASA Astrophysics Data System (ADS)
Tang, Chun; Ishihara, Hidetaka; Sodhi, Jaskiranjeet; Chen, Yen-Chang; Siordia, Andrew; Martini, Ashlie; Tung, Vincent C.
2015-04-01
The structurally robust nature of nanocarbon allotropes, e.g., semiconducting single-walled carbon nanotubes (SWCNTs) and C60s, makes them tantalizing candidates for thermally stable and mechanically flexible photovoltaic applications. However, C60s rapidly dissociate away from the basal plane of SWCNTs under thermal stimuli as a result of the weak intermolecular forces that "lock up" the binary assemblies. Here, we explore the use of graphene nanoribbons (GNRs) as geometrically tailored protecting layers to suppress the unwanted dissociation of C60s. The underlying mechanisms are explained using a combination of molecular dynamics simulations and transition state theory, revealing the temperature-dependent dissociation of C60s from the SWCNT basal plane. Our strategy provides fundamental guidelines for integrating all-carbon based nano-p/n junctions with optimized structural and thermal stability. External quantum efficiency and output current-voltage characteristics are used to experimentally quantify the effectiveness of GNR membranes under high temperature annealing. Further, the resulting C60:SWCNT:GNR ternary composites display excellent mechanical stability, even after iterative bending tests.
IADE: a system for intelligent automatic design of bioisosteric analogs
NASA Astrophysics Data System (ADS)
Ertl, Peter; Lewis, Richard
2012-11-01
IADE, a software system supporting molecular modellers through the automatic design of non-classical bioisosteric analogs, scaffold hopping and fragment growing, is presented. The program combines sophisticated cheminformatics functionalities for constructing novel analogs and filtering them based on their drug-likeness and synthetic accessibility using automatic structure-based design capabilities: the best candidates are selected according to their similarity to the template ligand and to their interactions with the protein binding site. IADE works in an iterative manner, improving the fitness of designed molecules in every generation until structures with optimal properties are identified. The program frees molecular modellers from routine, repetitive tasks, allowing them to focus on analysis and evaluation of the automatically designed analogs, considerably enhancing their work efficiency as well as the area of chemical space that can be covered. The performance of IADE is illustrated through a case study of the design of a nonclassical bioisosteric analog of a farnesyltransferase inhibitor—an analog that has won a recent "Design a Molecule" competition.
IADE: a system for intelligent automatic design of bioisosteric analogs.
Ertl, Peter; Lewis, Richard
2012-11-01
IADE, a software system supporting molecular modellers through the automatic design of non-classical bioisosteric analogs, scaffold hopping and fragment growing, is presented. The program combines sophisticated cheminformatics functionalities for constructing novel analogs and filtering them based on their drug-likeness and synthetic accessibility using automatic structure-based design capabilities: the best candidates are selected according to their similarity to the template ligand and to their interactions with the protein binding site. IADE works in an iterative manner, improving the fitness of designed molecules in every generation until structures with optimal properties are identified. The program frees molecular modellers from routine, repetitive tasks, allowing them to focus on analysis and evaluation of the automatically designed analogs, considerably enhancing their work efficiency as well as the area of chemical space that can be covered. The performance of IADE is illustrated through a case study of the design of a nonclassical bioisosteric analog of a farnesyltransferase inhibitor--an analog that has won a recent "Design a Molecule" competition.
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
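The iteratively reweighted least squares idea, shown here in the simpler regression setting with Huber-type weights rather than the abstract's mean-and-covariance-structure setting, can be sketched as:

```python
import numpy as np

def irls(X, y, c=1.345, n_iter=50):
    """Iteratively reweighted least squares with Huber-type weights.

    Each pass solves a weighted least-squares problem, then
    down-weights cases with large residuals, mirroring the
    case-weighting idea (applied in the paper to mean and
    covariance structures rather than regression).
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        a = np.abs(r) / s
        w = np.where(a <= c, 1.0, c / a)            # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta
```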
Saver, Jeffrey L; Warach, Steven; Janis, Scott; Odenkirchen, Joanne; Becker, Kyra; Benavente, Oscar; Broderick, Joseph; Dromerick, Alexander W; Duncan, Pamela; Elkind, Mitchell S V; Johnston, Karen; Kidwell, Chelsea S; Meschia, James F; Schwamm, Lee
2012-04-01
The National Institute of Neurological Disorders and Stroke initiated development of stroke-specific Common Data Elements (CDEs) as part of a project to develop data standards for funded clinical research in all fields of neuroscience. Standardizing data elements in translational, clinical, and population research in cerebrovascular disease could decrease study start-up time, facilitate data sharing, and promote well-informed clinical practice guidelines. A working group of diverse experts in cerebrovascular clinical trials, epidemiology, and biostatistics met regularly to develop a set of stroke CDEs, selecting among, refining, and adding to existing, field-tested data elements from national registries and funded trials and studies. Candidate elements were revised on the basis of comments from leading national and international neurovascular research organizations and the public. The first iteration of the National Institute of Neurological Disorders and Stroke (NINDS) stroke-specific CDEs comprises 980 data elements spanning 9 content areas: (1) biospecimens and biomarkers; (2) hospital course and acute therapies; (3) imaging; (4) laboratory tests and vital signs; (5) long-term therapies; (6) medical history and prior health status; (7) outcomes and end points; (8) stroke presentation; and (9) stroke types and subtypes. A CDE website provides uniform names and structures for each element, a data dictionary, and template case report forms, using the CDEs. Stroke-specific CDEs are now available as standardized, scientifically vetted, variable structures to facilitate data collection and data sharing in cerebrovascular patient-oriented research. The CDEs are an evolving resource that will be iteratively improved based on investigator use, new technologies, and emerging concepts and research findings.
Iterative cross section sequence graph for handwritten character segmentation.
Dawoud, Amer
2007-08-01
The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.
Improved interpretation of satellite altimeter data using genetic algorithms
NASA Technical Reports Server (NTRS)
Messa, Kenneth; Lybanon, Matthew
1992-01-01
Genetic algorithms (GAs) are optimization techniques based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations), the population as a whole improves, in simulation of Darwin's 'survival of the fittest'. GAs have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to improved interpretation of satellite data involves representing the ocean surface model as a string of parameters or coefficients from the model. The GA searches, in parallel, a population of such representations (organisms) to obtain the individual best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.
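The generate-evaluate-select cycle described above can be sketched as a minimal real-coded GA (generic; the ocean-surface fitness function is replaced by a caller-supplied one, and the operators are illustrative):

```python
import random

def genetic_search(fitness, n_params, pop_size=60, n_gens=80,
                   mut_rate=0.1, seed=1):
    """Minimal real-coded genetic algorithm sketch.

    Maintains a population of parameter vectors ('organisms') and
    applies tournament selection, one-point crossover and Gaussian
    mutation over successive generations; returns the fittest
    individual of the final population.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(n_gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_params) if n_params > 1 else 0
            child = p1[:cut] + p2[cut:]            # one-point crossover
            child = [g + rng.gauss(0, 0.1) if rng.random() < mut_rate
                     else g for g in child]        # Gaussian mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)
```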
Fragment-based drug discovery using rational design.
Jhoti, H
2007-01-01
Fragment-based drug discovery (FBDD) is established as an alternative approach to high-throughput screening for generating novel small molecule drug candidates. In FBDD, relatively small libraries of low molecular weight compounds (or fragments) are screened using sensitive biophysical techniques to detect their binding to the target protein. A lower absolute binding affinity is expected from fragments, compared to the much higher molecular weight hits detected by high-throughput screening, due to their reduced size and complexity. Through iterative cycles of medicinal chemistry, ideally guided by three-dimensional structural data, it is often then relatively straightforward to optimize these weakly binding fragment hits into potent and selective lead compounds. As with most other lead discovery methods, there are two key components of FBDD: the detection technology and the compound library. In this review I outline the two main approaches used for detecting the binding of low-affinity fragments, as well as some of the key principles used to generate a fragment library. In addition, I describe an example of how FBDD has led to the generation of a drug candidate that is now being tested in clinical trials for the treatment of cancer.
NASA Astrophysics Data System (ADS)
Liu, Jian; Ren, Zhongzhou; Xu, Chang
2018-07-01
Combining the modified Skyrme-like model and the local density approximation model, the slope parameter L of the symmetry energy is extracted from the properties of finite nuclei with an improved iterative method. The calculations of the iterative method are performed within the framework of spherical symmetry. Choosing 200 neutron-rich nuclei on 25 isotopic chains as candidates, the slope parameter is constrained to 50 MeV < L < 62 MeV. The validity of this method is examined against the properties of finite nuclei. Results show that reasonable descriptions of the properties of finite nuclei and nuclear matter can be obtained together.
NASA Astrophysics Data System (ADS)
Storti, Mario A.; Nigro, Norberto M.; Paz, Rodrigo R.; Dalcín, Lisandro D.
2009-03-01
In this paper some results on the convergence of the Gauss-Seidel iteration when solving fluid/structure interaction problems with strong coupling via fixed point iteration are presented. The flow-induced vibration of a flat plate aligned with the flow direction at supersonic Mach number is studied. The precision of different predictor schemes and the influence of the partitioned strong coupling on stability is discussed.
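The fixed-point (Gauss-Seidel) coupling iteration can be illustrated on a scalar toy coupling (hypothetical coefficients, not the paper's aeroelastic model): the two field solves alternate until the interface state stops changing.

```python
def gauss_seidel_fsi(k_s, a_f, f_ext, n_iter=100, tol=1e-12):
    """Fixed-point (block Gauss-Seidel) iteration for a toy
    fluid/structure coupling: the 'fluid' load depends linearly on
    the interface displacement, p = a_f * u, and the 'structure'
    responds with u = (f_ext - p) / k_s. Strong coupling iterates
    the two solves to convergence; here the iteration converges
    iff |a_f / k_s| < 1, with fixed point u = f_ext / (k_s + a_f).
    """
    u, p = 0.0, 0.0
    for _ in range(n_iter):
        u_new = (f_ext - p) / k_s   # structure solve with latest p
        p = a_f * u_new             # fluid solve with latest u
        if abs(u_new - u) < tol:
            return u_new, p
        u = u_new
    return u, p
```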
Selective host molecules obtained by dynamic adaptive chemistry.
Matache, Mihaela; Bogdan, Elena; Hădade, Niculina D
2014-02-17
Until about 20 years ago, there were two mainstream lines of thought on how to endow molecules with function. One was to rationally design the positioning of chemical functionalities within candidate molecules, followed by an iterative synthesis-optimization process. The second was the use of a "brute force" approach of combinatorial chemistry coupled with advanced screening for function. Although both methods provided important results, "rational design" often resulted in time-consuming efforts of modeling and synthesis only to find that the candidate molecule was not performing the designed job. "Combinatorial chemistry" suffered from a fundamental limitation related to the focusing of the libraries employed, often using lead compounds that limit its scope. Dynamic constitutional chemistry has developed as a combination of the two approaches above. Through the rational use of reversible chemical bonds together with a large plethora of precursor libraries, one is now able to build functional structures, ranging from quite simple molecules up to large polymeric structures. Thus, by introducing the dynamic component within molecular recognition processes, a new perspective on deciphering the world of molecular events has arisen, together with a new field of chemistry. Since its birth, dynamic constitutional chemistry has continuously gained attention, in particular due to its ability to easily create outstanding molecular structures from scratch, as well as to add adaptive features. The fundamental concepts defining dynamic constitutional chemistry have been continuously extended, currently placing it at the intersection between supramolecular chemistry and the newly defined adaptive chemistry, a pivotal step towards evolutive chemistry. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Meng, Qier; Kitasaka, Takayuki; Oda, Masahiro; Mori, Kensaku
2017-03-01
Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining an integrated 3-D airway tree structure from a CT volume is quite a challenging task. This paper presents a novel airway segmentation method based on intensity structure analysis and bronchial shape structure analysis in volumes of interest (VOIs). The method segments the bronchial regions by applying a cavity enhancement filter (CEF) to trace the bronchial tree structure from the trachea. It uses the CEF in each VOI to segment each branch and to predict the positions of the VOIs that envelope the bronchial regions at the next level. At the same time, leakage detection is performed by analysing the pixel information and the shape information of the airway candidate regions extracted in each VOI. The bronchial regions are finally obtained by unifying the extracted airway regions. The experimental results showed that the proposed method can extract most of the bronchial region in each VOI and yielded good airway segmentation results.
Using Minimum-Surface Bodies for Iteration Space Partitioning
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)
2001-01-01
A number of known techniques for improving cache performance in scientific computations involve the reordering of the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators having the iteration space as a domain. We study coverings of iteration spaces represented by structured and unstructured grids. For structured grids we introduce a covering based on successive minima tiles of the interference lattice of the grid. We show that the covering has a good surface-to-volume ratio and present a computer experiment showing the actual reduction in cache misses achieved by using these tiles. For unstructured grids no cache-efficient covering can be guaranteed. We present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid has a significantly larger number of cache misses than a similar operator on a structured grid.
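The covering idea can be illustrated with a minimal tiling sketch. The grid size, tile size, and the 4-neighbour stencil below are invented for illustration (real tile shapes would come from the interference lattice described above); the point is that a tiled traversal visits exactly the same iteration space as the row-major one, changing only the order of visits, and hence, on real hardware, the cache behaviour.

```python
# Sketch: covering a 2-D iteration space with square tiles.  A local
# operator (a 4-neighbour stencil sum) is applied once in row-major order
# and once tile-by-tile; both traversals cover every interior point
# exactly once, so the accumulated results agree.

N, TILE = 64, 8

def stencil(a, i, j):
    # a local operator: sum of the four neighbours of point (i, j)
    return a[i-1][j] + a[i+1][j] + a[i][j-1] + a[i][j+1]

a = [[(i * N + j) % 7 for j in range(N)] for i in range(N)]

# row-major traversal of the interior of the iteration space
flat = sum(stencil(a, i, j) for i in range(1, N - 1) for j in range(1, N - 1))

# tiled traversal: outer loops walk the covering tiles, inner loops walk
# the points inside each tile
tiled = 0
for ti in range(1, N - 1, TILE):
    for tj in range(1, N - 1, TILE):
        for i in range(ti, min(ti + TILE, N - 1)):
            for j in range(tj, min(tj + TILE, N - 1)):
                tiled += stencil(a, i, j)
```

With TILE chosen so a tile's working set fits in cache, the tiled order reuses each loaded row several times before eviction, which is the surface-to-volume argument in miniature.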
Electromagnetic Analysis of ITER Diagnostic Equatorial Port Plugs During Plasma Disruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. Zhai, R. Feder, A. Brooks, M. Ulrickson, C.S. Pitcher and G.D. Loesser
2012-08-27
ITER diagnostic port plugs perform many functions, including structural support of diagnostic systems under high electromagnetic loads while allowing for diagnostic access to the plasma. The design of the diagnostic equatorial port plugs (EPP) is largely driven by electromagnetic loads and the associated responses of the EPP structure during plasma disruptions and VDEs. This paper summarizes results of transient electromagnetic analysis using Opera 3d in support of the design activities for the ITER diagnostic EPP. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure, as well as the impact on the system design integration due to electrical contact among various EPP structural components, are discussed.
Lee, Seung Yup; Skolnick, Jeffrey
2007-07-01
To improve the accuracy of TASSER models, especially in the limit where threading-provided template alignments are of poor quality, we have developed the TASSER(iter) algorithm, which uses the templates and contact restraints from TASSER-generated models for iterative structure refinement. We apply TASSER(iter) to a large benchmark set of 2,773 nonhomologous single domain proteins that are ≤200 residues in length and that cover the PDB at the level of 35% pairwise sequence identity. Overall, TASSER(iter) models have a smaller global average RMSD of 5.48 Å compared to the 5.81 Å RMSD of the original TASSER models. Classifying the targets by the level of prediction difficulty (where Easy targets have a good template with a corresponding good threading alignment, Medium targets have a good template but a poor alignment, and Hard targets have an incorrectly identified template), TASSER(iter) (TASSER) models have an average RMSD of 4.15 Å (4.35 Å) for the Easy set and 9.05 Å (9.52 Å) for the Hard set. The largest reduction of average RMSD is for the Medium set, where the TASSER(iter) models have an average global RMSD of 5.67 Å compared to 6.72 Å for the TASSER models. Seventy percent of the Medium set TASSER(iter) models have a smaller RMSD than the TASSER models, while 63% of the Easy and 60% of the Hard TASSER models are improved by TASSER(iter). For the foldable cases, where the targets have an RMSD to the native structure <6.5 Å, TASSER(iter) shows obvious improvement over TASSER models: for the Medium set, it improves the success rate from 57.0 to 67.2%, followed by the Hard targets, where the success rate improves from 32.0 to 34.8%, with the smallest improvement in the Easy targets, from 82.6 to 84.0%. These results suggest that TASSER(iter) can provide more reliable predictions for targets of Medium difficulty, a range that had resisted improvement in the quality of protein structure predictions. 2007 Wiley-Liss, Inc.
Prostate Brachytherapy Seed Reconstruction with Gaussian Blurring and Optimal Coverage Cost
Lee, Junghoon; Liu, Xiaofeng; Jain, Ameet K.; Song, Danny Y.; Burdette, E. Clif; Prince, Jerry L.; Fichtinger, Gabor
2009-01-01
Intraoperative dosimetry in prostate brachytherapy requires localization of the implanted radioactive seeds. A tomosynthesis-based seed reconstruction method is proposed. A three-dimensional volume is reconstructed from Gaussian-blurred projection images and candidate seed locations are computed from the reconstructed volume. A false positive seed removal process, formulated as an optimal coverage problem, iteratively removes “ghost” seeds that are created by tomosynthesis reconstruction. In an effort to minimize pose errors that are common in conventional C-arms, initial pose parameter estimates are iteratively corrected by using the detected candidate seeds as fiducials, which automatically “focuses” the collected images and improves successive reconstructed volumes. Simulation results imply that the implanted seed locations can be estimated with a detection rate of ≥ 97.9% and ≥ 99.3% from three and four images, respectively, when the C-arm is calibrated and the pose of the C-arm is known. The algorithm was also validated on phantom data sets successfully localizing the implanted seeds from four or five images. In a Phase-1 clinical trial, we were able to localize the implanted seeds from five intraoperative fluoroscopy images with 98.8% (STD=1.6) overall detection rate. PMID:19605321
Large Deviations and Quasipotential for Finite State Mean Field Interacting Particle Systems
2014-05-01
The conclusion then follows by applying Lemma 4.4.2. ... 4.4.1 Iterative solver: The widest neighborhood structure. We employ Gauss-Seidel ...nearest neighborhood structure described in Section 4.4.2. We use the Gauss-Seidel iterative method for our numerical experiments. The Gauss-Seidel ...x ∈ Bh, M x ∈ Sh\Bh, where M ∈ (V,∞) is a very large number, so that the iteration (4.5.1) converges quickly. For simplicity, we restrict our
Yoshida, Morikatsu; Utsunomiya, Daisuke; Kidoh, Masafumi; Yuki, Hideaki; Oda, Seitaro; Shiraishi, Shinya; Yamamoto, Hidekazu; Inomata, Yukihiro; Yamashita, Yasuyuki
2017-06-01
We evaluated whether donor computed tomography (CT) with a combined technique of lower tube voltage and iterative reconstruction (IR) can provide sufficient preoperative information for liver transplantation. We retrospectively reviewed CT of 113 liver donor candidates. Dynamic contrast-enhanced CT of the liver was performed using the following protocols: protocol A (n = 70), 120-kVp with filtered back projection (FBP); protocol B (n = 43), 100-kVp with IR. To equalize the background covariates, one-to-one propensity-matched analysis was used. We visually compared the scores of the hepatic artery (A-score), portal vein (P-score), and hepatic vein (V-score) under the 2 protocols and quantitatively correlated the graft volume obtained by CT volumetry (graft-CTv) under the 2 protocols with the actual graft weight. In total, 39 protocol-A and 39 protocol-B candidates showed comparable preoperative clinical characteristics with propensity matching. For protocols A and B, the A-score was 3.87 ± 0.73 and 4.51 ± 0.56 (P < .01), the P-score was 4.92 ± 0.27 and 5.0 ± 0.0 (P = .07), and the V-score was 4.23 ± 0.78 and 4.82 ± 0.39 (P < .01), respectively. Correlations between the actual graft weight and the graft-CTv of protocols A and B were 0.97 and 0.96, respectively. Liver-donor CT imaging under the 100-kVp plus IR protocol provides better visualization of vascular structures than that under the 120-kVp plus FBP protocol, with comparable accuracy for graft-CTv, while lowering radiation exposure by more than 40% and reducing the contrast-medium dose by 20%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y.; Loesser, G.; Smith, M.
ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from a harsh plasma environment and provide structural support while allowing for diagnostic access to the plasma. The design of the DFWs and DSMs is driven by (1) plasma radiation and nuclear heating during normal operation and (2) electromagnetic loads during plasma events and the associated component structural responses. A multi-physics engineering analysis protocol for the design has been established at Princeton Plasma Physics Laboratory, and it was used for the design of the ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on the resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis, and three major issues driving the mechanical design of the ITER DFWs, are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.
Electromagnetic Analysis For The Design Of ITER Diagnostic Port Plugs During Plasma Disruptions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y
2014-03-03
ITER diagnostic port plugs perform many functions, including structural support of diagnostic systems under high electromagnetic loads while allowing for diagnostic access to the plasma. The design of the diagnostic equatorial port plugs (EPP) is largely driven by electromagnetic loads and the associated response of the EPP structure during plasma disruptions and VDEs. This paper summarizes results of transient electromagnetic analysis using Opera 3d in support of the design activities for the ITER diagnostic EPP. A complete distribution of disruption loads on the Diagnostic First Walls (DFWs), Diagnostic Shield Modules (DSMs) and the EPP structure, as well as the impact on the system design integration due to electrical contact among various EPP structural components, are discussed.
Iterative methods for mixed finite element equations
NASA Technical Reports Server (NTRS)
Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.
1985-01-01
Iterative strategies for the solution of indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating the preconditioner; and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance, with application to linear and nonlinear model problems.
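The constant metric variant can be sketched as a preconditioned fixed-point (Richardson) iteration with a preconditioner M that is never updated. The matrices below are small synthetic stand-ins, not a Hu-Washizu discretization; they only show the structure of an iteration where a fixed, easily invertible operator plays the role the displacement method plays in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
M = np.diag(np.arange(2.0, 2.0 + n))         # fixed preconditioner (stand-in)
A = M + 0.05 * rng.standard_normal((n, n))   # perturbed system matrix (stand-in)
b = rng.standard_normal(n)

x = np.zeros(n)
M_inv = np.linalg.inv(M)                     # constant metric: computed once
for _ in range(200):
    # preconditioned residual correction; M is never updated
    x = x + M_inv @ (b - A @ x)

res = np.linalg.norm(b - A @ x)
```

The iteration converges when the spectral radius of I - M⁻¹A is below one, i.e. when M captures A well enough; a variable metric scheme would instead update the approximate inverse as the iteration proceeds.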
An Analysis of Ripple and Error Fields Induced by a Blanket in the CFETR
NASA Astrophysics Data System (ADS)
Yu, Guanying; Liu, Xufeng; Liu, Songlin
2016-10-01
The Chinese Fusion Engineering Tokamak Reactor (CFETR) is an important intermediate device between ITER and DEMO. The Water Cooled Ceramic Breeder (WCCB) blanket, whose structural material is mainly Reduced Activation Ferritic/Martensitic (RAFM) steel, is one of the candidate conceptual blanket designs. The ripple and error fields induced by the RAFM steel in the WCCB blanket are evaluated by static magnetic analysis in the ANSYS code. A significant additional magnetic field is produced by the blanket, leading to an increased ripple field. The maximum ripple along the separatrix line reaches 0.53%, which exceeds the acceptable design value of 0.5%. Furthermore, when one blanket module is taken out for heating purposes, the resulting error field is calculated to seriously violate the requirement. Supported by the National Natural Science Foundation of China (No. 11175207) and the National Magnetic Confinement Fusion Program of China (No. 2013GB108004).
Novel and Efficient Synthesis of the Promising Drug Candidate Discodermolide
2010-02-01
...stereotriad building blocks for discodermolide and related polyketide antibiotics could be obtained from variations on a short, scalable scheme that did... chains required for the chemical synthesis of the nonaromatic polyketides is usually based on the iterative lengthening of an acyclic substituted chain...
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
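The same design pattern translates readily to other languages. The Python sketch below mimics the two ideas described above, a date-sequence iterator and a hash iterator that yields every permutation of embedded iterators; the function names and parameters are illustrative inventions, not a port of the Perl API.

```python
import datetime
import itertools

def date_iterator(start, count, step_days=1):
    # analogue of a DateTime iterator: a start plus a recurrence rule
    d = start
    for _ in range(count):
        yield d
        d += datetime.timedelta(days=step_days)

def hash_iterator(spec):
    # analogue of Iterator::Hash: spec maps keys to iterables; yield one
    # dict per combination, nesting the embedded iterators
    keys = list(spec)
    for combo in itertools.product(*(list(spec[k]) for k in keys)):
        yield dict(zip(keys, combo))

dates = list(date_iterator(datetime.date(2009, 1, 1), 3))
combos = list(hash_iterator({"day": dates, "fmt": ["short", "long"]}))
```

Each generator hides the state of its series behind a uniform "next value" interface, which is the essence of the iterator pattern the modules implement.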
NASA Astrophysics Data System (ADS)
Chen, Hao; Lv, Wen; Zhang, Tongtong
2018-05-01
We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method like GMRES. The corresponding preconditioner has a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
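A toy version of such an alternating splitting iteration is easy to write down for a matrix that is the sum of two Kronecker products. The one-dimensional operator T, the splitting parameter alpha, and the problem size below are illustrative assumptions (T is a plain tridiagonal Laplacian, not a fractional diffusion operator); each half-step inverts only one Kronecker term plus a shift.

```python
import numpy as np

n = 8
# illustrative 1-D operator (tridiagonal); stands in for a fractional one
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
I = np.eye(n)
A1 = np.kron(I, T)        # acts along one spatial direction
A2 = np.kron(T, I)        # acts along the other
A = A1 + A2               # discrete operator: sum of two Kronecker products

rng = np.random.default_rng(0)
b = rng.standard_normal(n * n)
alpha = 2.0               # splitting parameter (not optimized here)

x = np.zeros(n * n)
for _ in range(200):
    # alternating (ADI-like) Kronecker splitting: two shifted half-solves
    x_half = np.linalg.solve(alpha * np.eye(n * n) + A1,
                             (alpha * np.eye(n * n) - A2) @ x + b)
    x = np.linalg.solve(alpha * np.eye(n * n) + A2,
                        (alpha * np.eye(n * n) - A1) @ x_half + b)

x_direct = np.linalg.solve(A, b)
err = np.linalg.norm(x - x_direct) / np.linalg.norm(x_direct)
```

In practice each shifted solve would exploit the Kronecker structure (a set of small one-dimensional solves rather than the dense n²-sized solves used here for brevity), which is exactly what makes the approach attractive as a preconditioner.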
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and the extra work they require can be designed to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors.
By choosing the number of vectors to a reasonably small value N (between 5 and 20) the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm such as matrix-vector multiplies, matrix additions and subtractions can all be vectorized and parallelized efficiently.
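The two ingredients just described, an orthogonal basis of N vectors and a minimization of the L2 residual norm over that basis, can be sketched compactly. The test matrix below is a random well-conditioned stand-in, not a Navier-Stokes Jacobian, and the unrestarted formulation is a simplification of practical GMRES implementations.

```python
import numpy as np

def gmres_sketch(A, b, m=20):
    """Minimal GMRES: build an m-vector orthogonal Krylov basis with
    Arnoldi, then solve a small least-squares problem that minimizes the
    L2 norm of the residual over that basis."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta                      # start from x0 = 0, r0 = b
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:             # happy breakdown: exact subspace
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    # least-squares step: the residual norm is minimized, not each component
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

rng = np.random.default_rng(1)
n = 30
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned stand-in
b = rng.standard_normal(n)
x = gmres_sketch(A, b, m=30)
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

With m restricted to a small value, each step costs about (m+1) matrix-vector products, matching the cost argument above; the matrix-vector multiplies and vector updates are exactly the operations that vectorize and parallelize well.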
NASA Astrophysics Data System (ADS)
Philipps, V.; Malaquias, A.; Hakola, A.; Karhunen, J.; Maddaluno, G.; Almaviva, S.; Caneve, L.; Colao, F.; Fortuna, E.; Gasior, P.; Kubkowska, M.; Czarnecka, A.; Laan, M.; Lissovski, A.; Paris, P.; van der Meiden, H. J.; Petersson, P.; Rubel, M.; Huber, A.; Zlobinski, M.; Schweer, B.; Gierse, N.; Xiao, Q.; Sergienko, G.
2013-09-01
Analysis and understanding of wall erosion, material transport and fuel retention are among the most important tasks for ITER and future devices, since these questions largely determine the lifetime and availability of the fusion reactor. These data are also of extreme value to improve the understanding and validate the models of the in-vessel build-up of the T inventory in ITER and future D-T devices. So far, research in these areas is largely supported by post-mortem analysis of wall tiles. However, access to samples will be very much restricted in the next-generation devices (such as ITER, JT-60SA, W7-X, etc) with actively cooled plasma-facing components (PFC) and increasing duty cycle. This has motivated the development of methods to measure the deposition of material and retention of plasma fuel on the walls of fusion devices in situ, without removal of PFC samples. For this purpose, laser-based methods are the most promising candidates. Their feasibility has been assessed in a cooperative undertaking in various European associations under EFDA coordination. Different laser techniques have been explored both under laboratory and tokamak conditions, with the emphasis on developing a conceptual design for a laser-based wall diagnostic which is integrated into an ITER port plug, aiming to characterize in situ relevant parts of the inner wall, the upper region of the inner divertor, part of the dome and the upper X-point region.
Development of tritium permeation barriers on Al base in Europe
NASA Astrophysics Data System (ADS)
Benamati, G.; Chabrol, C.; Perujo, A.; Rigal, E.; Glasbrenner, H.
The development of the water cooled lithium lead (WCLL) DEMO fusion reactor requires the production of a material capable of acting as a tritium permeation barrier (TPB). In the DEMO blanket, permeation barriers on the structural material are required to reduce the tritium permeation from the Pb-17Li or the plasma into the cooling water to acceptable levels (<1 g/d). On the basis of experimental work previously performed, one of the most promising TPB candidates is Al-based coatings. Within the EU a large R&D programme is in progress to develop a TPB fabrication technique compatible with the structural materials requirements and capable of producing coatings with acceptable performance. The research is focused on chemical vapour deposition (CVD), hot dipping, hot isostatic pressing (HIP) technology and spray deposition techniques (the last of these developed also for repair). The final goal is to select a reference technique to be used in the blanket of the DEMO reactor and in the ITER test module fabrication. The activities performed in four European laboratories are summarised here.
Discrete Self-Similarity in Interfacial Hydrodynamics and the Formation of Iterated Structures.
Dallaston, Michael C; Fontelos, Marco A; Tseluiko, Dmitri; Kalliadasis, Serafim
2018-01-19
The formation of iterated structures, such as satellite and subsatellite drops, filaments, and bubbles, is a common feature in interfacial hydrodynamics. Here we undertake a computational and theoretical study of their origin in the case of thin films of viscous fluids that are destabilized by long-range molecular or other forces. We demonstrate that iterated structures appear as a consequence of discrete self-similarity, where certain patterns repeat themselves, subject to rescaling, periodically in a logarithmic time scale. The result is an infinite sequence of ridges and filaments with similarity properties. The character of these discretely self-similar solutions as the result of a Hopf bifurcation from ordinarily self-similar solutions is also described.
GEM detector development for tokamak plasma radiation diagnostics: SXR poloidal tomography
NASA Astrophysics Data System (ADS)
Chernyshova, Maryna; Malinowski, Karol; Ziółkowski, Adam; Kowalska-Strzeciwilk, Ewa; Czarski, Tomasz; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Zabołotny, Wojciech; Wojeński, Andrzej; Kolasiński, Piotr; Krawczyk, Rafał D.
2015-09-01
The increased attention to tungsten is related to the fact that it has become a main candidate for the plasma-facing material in ITER and future fusion reactors. The work presented here concerns studies of the influence of W on plasma performance, by developing new detectors based on Gas Electron Multiplier (GEM) technology for tomographic studies of tungsten transport in ITER-oriented tokamaks, e.g. the WEST project. It presents the current stage of the design and development of a cylindrically bent SXR GEM detector construction for horizontal port implementation. A concept to overcome the influence of the constraints on the vertical port is also presented. It is expected that the detecting unit under development, when implemented, will contribute to the safe operation of the tokamak, bringing the creation of sustainable nuclear fusion reactors a step closer.
Loss, Leandro A.; Bebis, George; Parvin, Bahram
2012-01-01
In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and the iterative voting frameworks. Its efficacy lies in iterative refinements of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. Such a mode shift is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by enhancing the concentration of the votes over promising, salient structures. The proposed technique is applied to the delineation of adherens junctions imaged through fluorescence microscopy. This class of membrane-bound macromolecules maintains tissue structural integrity and cell-cell interactions. Visually, it exhibits fibrous patterns that may be diffuse, punctate and frequently perceptual. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates. PMID:21421432
NASA Astrophysics Data System (ADS)
Krishnan, M.; Bhowmik, B.; Tiwari, A. K.; Hazra, B.
2017-08-01
In this paper, a novel baseline-free approach for continuous online damage detection of multi-degree-of-freedom vibrating structures using recursive principal component analysis (RPCA) in conjunction with online damage indicators is proposed. In this method, the acceleration data is used to obtain recursive proper orthogonal modes online using the rank-one perturbation method, which are subsequently utilized to detect the change in the dynamic behavior of the vibrating system from its pristine state to contiguous linear/nonlinear states that indicate damage. The RPCA algorithm iterates the eigenvector and eigenvalue estimates for the sample covariance matrix and the new data point at each successive time instant, using the rank-one perturbation method. An online condition indicator (CI) based on the L2 norm of the error between the actual response and the response projected using recursive eigenvector matrix updates over successive iterations is proposed. This eliminates the need for offline post-processing and facilitates online damage detection, especially when applied to streaming data. The proposed CI, named the recursive residual error, is also adopted for simultaneous spatio-temporal damage detection. Numerical simulations performed on a five-degree-of-freedom nonlinear system under white noise and El Centro excitations, with different levels of nonlinearity simulating the damage scenarios, demonstrate the robustness of the proposed algorithm. Successful results obtained from practical case studies involving experiments performed on a cantilever beam subjected to earthquake excitation, for full-sensor and underdetermined cases, and from recorded responses of the UCLA Factor building (full data and its subset), demonstrate the efficacy of the proposed methodology as an ideal candidate for real-time, reference-free structural health monitoring.
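The condition-indicator idea can be illustrated on synthetic streaming data. In this sketch the covariance is updated recursively with each new sample but re-diagonalized with eigh at every step, rather than via the paper's rank-one perturbation formulas; the dimensions, subspaces, and noise level are invented, with "damage" modeled as a sudden change of the dominant response subspace.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, r, noise = 6, 2, 0.01

Q = np.linalg.qr(rng.standard_normal((dim, dim)))[0]
B_ok, B_dmg = Q[:, :r], Q[:, 2:4]     # orthogonal subspaces: pristine vs damaged

def sample(B):
    # synthetic "acceleration" sample living mostly in the subspace of B
    return B @ rng.standard_normal(r) + noise * rng.standard_normal(dim)

C = np.zeros((dim, dim))
errors = []                            # streaming condition indicator
for k in range(1, 401):
    x = sample(B_ok if k <= 200 else B_dmg)   # damage occurs at k = 201
    # recursive (rank-one) covariance update from streaming data
    C = ((k - 1) * C + np.outer(x, x)) / k
    w, V = np.linalg.eigh(C)
    Vr = V[:, -r:]                     # top-r proper orthogonal modes
    # residual error: L2 norm of (actual - projected) response
    errors.append(np.linalg.norm(x - Vr @ (Vr.T @ x)))

pristine = np.mean(errors[150:200])
post_damage = np.mean(errors[200:210])
```

While the structure is pristine, new samples lie in the learned subspace and the residual stays at the noise floor; right after the subspace change the residual jumps, flagging damage without any offline baseline model.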
Stability of the iterative solutions of integral equations as one phase freezing criterion.
Fantoni, R; Pastore, G
2003-10-01
A recently proposed connection between the threshold for the stability of the iterative solution of integral equations for the pair correlation functions of a classical fluid and the structural instability of the corresponding real fluid is carefully analyzed. Direct calculation of the Lyapunov exponent of the standard iterative solution of the hypernetted chain and Percus-Yevick integral equations for the one-dimensional (1D) hard-rods fluid shows the same behavior observed in 3D systems. Since no phase transition is allowed in such a 1D system, our analysis shows that the proposed one-phase criterion, at least in this case, fails. We argue that the observed proximity between the numerical and the structural instability in 3D originates from the enhanced structure present in the fluid but, in view of the arbitrary dependence on the iteration scheme, it seems difficult to relate the numerical stability analysis to a robust one-phase criterion for predicting a thermodynamic phase transition.
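The quantity at the heart of this analysis, the Lyapunov exponent of a fixed-point iteration, is the long-run average of log|g'(x_n)| along the orbit of the map x_{n+1} = g(x_n); a negative value means the iteration is stable. The map below (x → cos x) is a toy one-dimensional stand-in, not the HNC or PY iteration itself.

```python
import math

def g(x):
    # toy iteration map standing in for the integral-equation scheme;
    # its fixed point is x* ~ 0.739085 (the Dottie number)
    return math.cos(x)

def dg(x):
    return -math.sin(x)

x = 1.0
for _ in range(100):            # discard the transient
    x = g(x)

lam, N = 0.0, 1000
for _ in range(N):
    lam += math.log(abs(dg(x)))  # accumulate log|g'(x_n)| along the orbit
    x = g(x)
lam /= N                         # Lyapunov exponent estimate
```

Here lam settles near log(sin 0.739) ≈ -0.395 < 0, so the iteration converges; the paper's point is that the parameter value where such an exponent crosses zero depends on the chosen iteration scheme, which is why it cannot by itself serve as a thermodynamic one-phase criterion.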
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony
2018-04-20
Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
SUMMARY REPORT-FY2006 ITER WORK ACCOMPLISHED
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martovetsky, N N
2006-04-11
Six parties (EU, Japan, Russia, US, Korea, China) will build ITER. The US proposed to deliver at least 4 of the 7 modules of the Central Solenoid. Phillip Michael (MIT) and I were tasked by DOE to assist ITER in the development of the ITER CS and other magnet systems. We work to support the Magnets and Structure Division headed by Neil Mitchell. During this visit I worked on selected items of the CS design and carried out other small tasks, such as a PF temperature margin assessment.
A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm.
Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A; Ravankar, Abhijeet
2018-04-23
In past years, there has been significant progress in the field of indoor robot localization. To precisely recover their position, robots usually rely on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we consider a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To reduce the computation effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) method with interpolation is presented. Compared to traditional ICP, the point cloud is first processed to extract corner and line features before applying point registration. Later, points labeled as corners are only matched with corner candidates. Similarly, points labeled as lines are only matched with line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experimental comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method significantly reduces computation effort while preserving localization precision.
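For readers unfamiliar with ICP, the following minimal 2D sketch shows the plain point-to-point variant that WP-ICP builds on: nearest-neighbour matching alternated with a closed-form SVD (Kabsch) rigid fit. The feature labelling, weighting and interpolation described in the abstract are not reproduced here.

```python
import numpy as np

def icp_2d(src, dst, n_iter=20):
    """Plain point-to-point ICP in 2D: alternately match each source
    point to its nearest destination point, then solve for the best
    rigid transform in closed form. WP-ICP would additionally restrict
    matches to points carrying the same feature label."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(n_iter):
        # nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid registration between cur and matched
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        Rk = Vt.T @ U.T
        if np.linalg.det(Rk) < 0:       # guard against reflections
            Vt[-1] *= -1
            Rk = Vt.T @ U.T
        tk = mu_d - Rk @ mu_s
        cur = cur @ Rk.T + tk
        R, t = Rk @ R, Rk @ t + tk      # accumulate the transform
    return R, t
```

Restricting the nearest-neighbour search to same-labelled features, as the paper proposes, shrinks the candidate set per point and thus both the mismatch probability and the per-iteration cost.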
Improving marine disease surveillance through sea temperature monitoring, outlooks and projections
Maynard, Jeffrey; van Hooidonk, Ruben; Harvell, C. Drew; Eakin, C. Mark; Liu, Gang; Willis, Bette L.; Williams, Gareth J.; Dobson, Andrew; Heron, Scott F.; Glenn, Robert; Reardon, Kathleen; Shields, Jeffrey D.
2016-01-01
To forecast marine disease outbreaks as oceans warm requires new environmental surveillance tools. We describe an iterative process for developing these tools that combines research, development and deployment for suitable systems. The first step is to identify candidate host–pathogen systems. The 24 candidate systems we identified include sponges, corals, oysters, crustaceans, sea stars, fishes and sea grasses (among others). To illustrate the other steps, we present a case study of epizootic shell disease (ESD) in the American lobster. Increasing prevalence of ESD is a contributing factor to lobster fishery collapse in southern New England (SNE), raising concerns that disease prevalence will increase in the northern Gulf of Maine under climate change. The lowest maximum bottom temperature associated with ESD prevalence in SNE is 12°C. Our seasonal outlook for 2015 and long-term projections show bottom temperatures greater than or equal to 12°C may occur in this and coming years in the coastal bays of Maine. The tools presented will allow managers to target efforts to monitor the effects of ESD on fishery sustainability and will be iteratively refined. The approach and case example highlight that temperature-based surveillance tools can inform research, monitoring and management of emerging and continuing marine disease threats. PMID:26880840
The ITER project construction status
NASA Astrophysics Data System (ADS)
Motojima, O.
2015-10-01
The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of tokamak complex structures and components, construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in the prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage its procurement of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed and those of the EU are well under way. Significant progress has been made in addressing several outstanding physics issues, including disruption load characterization, prediction, avoidance and mitigation; first wall and divertor shaping; edge pedestal and SOL plasma stability; fuelling and plasma behaviour during confinement transients; and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for first plasma and subsequent phases of ITER operation, as well as the major plasma commissioning activities and the needs of the R&D program accompanying ITER construction by the ITER parties.
NASA Astrophysics Data System (ADS)
Li, Jia; Wang, Qiang; Yan, Wenjie; Shen, Yi
2015-12-01
Cooperative spectrum sensing exploits spatial diversity to improve the detection of occupied channels in cognitive radio networks (CRNs). Cooperative compressive spectrum sensing (CCSS), which exploits the sparsity of channel occupancy, further improves efficiency by reducing the number of reports without degrading detection performance. In this paper, we first propose multi-candidate orthogonal matrix matching pursuit (MOMMP) algorithms to efficiently and effectively detect occupied channels at the fusion center (FC), where multi-candidate identification and orthogonal projection are used, respectively, to reduce the number of required iterations and to improve the probability of exact identification. Secondly, two common but different approaches, based on a threshold and on a Gaussian distribution, are introduced to realize the multi-candidate identification. Moreover, to improve detection accuracy and energy efficiency, we propose the matrix construction based on shrinkage and gradient descent (MCSGD) algorithm to provide a deterministic filter coefficient matrix of low t-average coherence. Finally, several numerical simulations validate that our proposals deliver satisfactory performance, with a higher probability of detection, a lower probability of false alarm and less detection time.
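MOMMP builds on orthogonal matching pursuit (OMP); a minimal sketch of the underlying greedy loop, with an optional multi-candidate selection per iteration, is given below. This is a generic illustration, not the authors' implementation, and the `n_candidates` parameter is a simplification of their threshold- and Gaussian-based identification rules.

```python
import numpy as np

def omp(A, y, k, n_candidates=1):
    """Orthogonal matching pursuit with optional multi-candidate
    selection: each iteration picks the n_candidates columns most
    correlated with the residual (n_candidates=1 is classical OMP),
    then re-fits all selected columns by least squares, so the
    residual stays orthogonal to the chosen subspace."""
    support, residual = [], y.copy()
    while len(support) < k:
        corr = np.abs(A.T @ residual)
        corr[support] = -1.0               # exclude already-chosen atoms
        picks = np.argsort(corr)[::-1][:n_candidates]
        support.extend(int(p) for p in picks)
        # orthogonal projection onto the span of the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

Picking several atoms per iteration, as MOMMP does, reduces the number of (expensive) projection steps at the price of occasionally admitting a spurious atom that the final least-squares fit must absorb.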
Nilsson, Ola B; Adedoyin, Justus; Rhyner, Claudio; Neimert-Andersson, Theresa; Grundström, Jeanette; Berndt, Kurt D; Crameri, Reto; Grönlund, Hans
2011-01-01
Allergy and asthma to cat (Felis domesticus) affect about 10% of the population in affluent countries. Immediate allergic symptoms are primarily mediated via IgE antibodies binding to B cell epitopes, whereas late-phase inflammatory reactions are mediated via activated T cell recognition of allergen-specific T cell epitopes. Allergen-specific immunotherapy relieves symptoms and is the only treatment inducing long-lasting protection by induction of protective immune responses. The aim of this study was to produce an allergy vaccine designed with the combined features of attenuated T cell activation, reduced anaphylactic properties, retained molecular integrity and induction of efficient IgE-blocking IgG antibodies, for safer and more efficacious treatment of patients with allergy and asthma to cat. The template gene coding for rFel d 1 was used to introduce random mutations, and the mutants were subsequently expressed in large phage libraries. Despite mutations accumulated over up to 7 rounds of iterative error-prone PCR and biopanning, surface topology and structure were essentially maintained, using IgE antibodies from cat-allergic patients for phage enrichment. Four candidates were isolated, displaying similar or lower IgE binding, reduced anaphylactic activity as measured by their capacity to induce basophil degranulation and, importantly, a significantly lower T cell reactivity in lymphoproliferative assays compared to the original rFel d 1. In addition, all mutants showed the ability to induce blocking antibodies in immunized mice. The approach presented here provides a straightforward procedure to generate a novel type of allergy vaccine for safer and more efficacious treatment of allergic patients.
Iterative tensor voting for perceptual grouping of ill-defined curvilinear structures.
Loss, Leandro A; Bebis, George; Parvin, Bahram
2011-08-01
In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and iterative voting frameworks. Its efficacy lies in iterative refinement of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. This mode shift is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by enhancing the concentration of the votes over promising, salient structures. The proposed technique is validated on delineating adherens junctions that are imaged through fluorescence microscopy. However, the method is also applicable to screening other organisms based on characteristics of their cell wall structures. Adherens junctions maintain tissue structural integrity and cell-cell interactions. Visually, they exhibit fibrous patterns that may be diffuse, heterogeneous in fluorescence intensity, or punctate and frequently barely perceptible. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates.
A long-term target detection approach in infrared image sequence
NASA Astrophysics Data System (ADS)
Li, Hang; Zhang, Qi; Wang, Xin; Hu, Chao
2016-10-01
An automatic target detection method for long-term infrared (IR) image sequences from a moving platform is proposed. First, target candidates are iteratively segmented based on POME (the principle of maximum entropy). Then the real target is captured via two different selection approaches. At the beginning of the image sequence, the genuine target, which has little texture, is discriminated from other candidates by using a contrast-based confidence measure. On the other hand, when the target becomes larger, we apply an online EM method to estimate and update the distributions of the target's size and position based on prior detection results, and then recognize the genuine one that satisfies both the size and position constraints. Experimental results demonstrate that the presented method is accurate, robust and efficient.
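A common concrete realization of maximum-entropy segmentation is Kapur's threshold, sketched below on a grey-level histogram. It is a generic illustration of the POME idea, not the paper's iterative multi-candidate segmentation.

```python
import numpy as np

def max_entropy_threshold(image, bins=256):
    """Kapur's maximum-entropy threshold: choose t maximizing the sum
    of the Shannon entropies of the background (grey < t) and
    foreground (grey >= t) grey-level distributions."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue                       # one class is empty
        p0, p1 = p[:t] / w0, p[t:] / w1    # class-conditional histograms
        h = -sum(q * np.log(q) for q in p0 if q > 0) \
            - sum(q * np.log(q) for q in p1 if q > 0)
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

On a bimodal image the maximizing threshold falls between the two grey-level modes, separating a dim background from a bright target candidate.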
Minimizing Cache Misses Using Minimum-Surface Bodies
NASA Technical Reports Server (NTRS)
Frumkin, Michael; VanderWijngaart, Rob; Biegel, Bryan (Technical Monitor)
2002-01-01
A number of known techniques for improving cache performance in scientific computations involve reordering the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators having the iteration space as a domain. First, we derive lower bounds that any algorithm must suffer while computing a local operator on a grid. Then we explore coverings of iteration spaces represented by structured and unstructured grids which allow us to approach these lower bounds. For structured grids we introduce a covering by successive-minima tiles of the interference lattice of the grid. We show that the covering has a low surface-to-volume ratio and present a computer experiment showing the actual reduction in cache misses achieved by using these tiles. For planar unstructured grids we show the existence of a covering which reduces the number of cache misses to the level of structured grids. On the other hand, we present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid has a significantly larger number of cache misses than a similar operator on a structured grid.
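The effect of covering an iteration space with sets of good surface-to-volume ratio can be demonstrated with a small cache simulator. The sketch below is an illustrative toy, unrelated to the paper's interference-lattice construction: it counts LRU cache misses for a matrix-transpose access pattern, traversed naively versus in square tiles.

```python
from collections import OrderedDict

def count_misses(accesses, n_lines=64, line_size=8):
    """Simulate a fully associative LRU cache of n_lines cache lines;
    each access is a flat element index mapped to a line index.
    Returns the number of misses."""
    cache, misses = OrderedDict(), 0
    for addr in accesses:
        line = addr // line_size
        if line in cache:
            cache.move_to_end(line)        # refresh LRU position
        else:
            misses += 1
            cache[line] = None
            if len(cache) > n_lines:
                cache.popitem(last=False)  # evict least recently used
    return misses

def transpose_accesses(n, tile=None):
    """Element indices touched by B[j][i] = A[i][j]; optionally tiled."""
    t = tile or n
    out = []
    for ii in range(0, n, t):
        for jj in range(0, n, t):
            for i in range(ii, min(ii + t, n)):
                for j in range(jj, min(jj + t, n)):
                    out.append(i * n + j)           # read A[i][j]
                    out.append(n * n + j * n + i)   # write B[j][i]
    return out

n = 64
naive = count_misses(transpose_accesses(n))
tiled = count_misses(transpose_accesses(n, tile=8))
```

With these toy parameters the tiled traversal keeps the working set of each 8x8 block inside the cache, and its miss count drops severalfold relative to the naive row-order sweep.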
Re-weldability tests of irradiated 316L(N) stainless steel using laser welding technique
NASA Astrophysics Data System (ADS)
Yamada, Hirokazu; Kawamura, Hiroshi; Tsuchiya, Kunihiko; Kalinin, George; Kohno, Wataru; Morishima, Yasuo
2002-12-01
SS316L(N)-IG is the candidate material for the in-vessel and ex-vessel components of fusion reactors such as ITER (International Thermonuclear Experimental Reactor). This paper describes a study of the re-weldability of un-irradiated and/or irradiated SS316L(N)-IG and of the effect of helium generation on the mechanical properties of the weld joint. The laser welding process is used for re-welding in repairs of the water-cooling branch pipelines. It is clarified that re-welding of SS316L(N)-IG irradiated up to about 0.2 dpa (3.3 appm He) can be carried out without serious deterioration of tensile properties due to helium accumulation. Therefore, repair of the ITER blanket cooling pipes can be performed by the laser welding process.
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
NASA Technical Reports Server (NTRS)
Padovan, J.
1981-01-01
A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
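The incremental Newton-Raphson backbone that the self-adaptive phases wrap around can be sketched for a single degree of freedom as follows; the constraint-surface and energy-scaling phases of the paper are deliberately omitted, and the spring law is an illustrative assumption.

```python
def incremental_newton(resist, stiff, p_max, n_inc=10, tol=1e-10):
    """Incremental Newton-Raphson for a 1-DOF nonlinear equilibrium
    r(u) = p: the load is applied in n_inc increments and each
    increment is corrected by Newton iterations on the out-of-balance
    force. The paper's algorithm adds self-adaptive constraint
    surfaces and convergence checks on top of this basic loop."""
    u = 0.0
    for step in range(1, n_inc + 1):
        p = p_max * step / n_inc          # current load level
        for _ in range(50):
            res = p - resist(u)           # out-of-balance force
            if abs(res) < tol:
                break
            u += res / stiff(u)           # Newton correction
    return u

# stiffening spring r(u) = k*u + c*u**3 (hypothetical material law)
k, c = 2.0, 0.5
u = incremental_newton(lambda u: k*u + c*u**3,
                       lambda u: k + 3*c*u**2, p_max=10.0)
```

Stepping the load keeps each Newton solve close to its converged predecessor, which is what makes the plain iteration viable before any of the paper's adaptive safeguards are needed.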
NASA Astrophysics Data System (ADS)
Lou, Yang
Photoacoustic computed tomography (PACT), also known as optoacoustic tomography (OAT), is an emerging imaging technique that has developed rapidly in recent years. The combination of high optical contrast and high acoustic resolution makes this hybrid imaging technique a promising candidate for human breast imaging, where conventional imaging techniques, including X-ray mammography, B-mode ultrasound, and MRI, suffer from low contrast, low specificity for certain breast types, and additional risks related to ionizing radiation. Though significant work has been done to push the frontier of PACT breast imaging, it is still challenging to successfully build a PACT breast imaging system and apply it to wide clinical use, for various practical reasons. First, computer simulation studies are often conducted to guide imaging system design, but the numerical phantoms employed in most previous works consist of simple geometries and do not reflect the true anatomical structures within the breast; the effectiveness of such simulation-guided PACT systems in clinical experiments is therefore compromised. Second, it is challenging to design a system that simultaneously illuminates the entire breast with limited laser power. Some heuristic designs have been proposed in which the illumination is non-stationary during the imaging procedure, but the impact of employing such a design has not been carefully studied. Third, current PACT imaging systems are often optimized with respect to physical measures such as resolution or signal-to-noise ratio (SNR). It would be desirable to establish an assessment framework in which the detectability of breast tumors can be directly quantified, so that the images produced by optimized imaging systems are not only visually appealing but also maximally informative for the tumor detection task.
Fourth, when imaging a large three-dimensional (3D) object such as the breast, iterative reconstruction algorithms are often utilized to alleviate the need to collect densely sampled measurement data, and hence a long scanning time. However, the heavy computational burden associated with iterative algorithms largely hinders their application in PACT breast imaging. This dissertation is dedicated to addressing these problems in PACT breast imaging. A method that generates anatomically realistic numerical breast phantoms is first proposed to facilitate computer simulation studies in PACT. The non-stationary illumination designs for PACT breast imaging are then systematically investigated in terms of their impact on reconstructed images. We then apply signal detection theory to assess different system designs, demonstrating how an objective, task-based measure can be established for PACT breast imaging. To address the slow computation time of iterative algorithms for PACT imaging, we propose an acceleration method that employs an approximate but much faster adjoint operator during iterations, which reduces the computation time by a factor of six without significantly compromising image quality. Finally, some clinical results are presented to demonstrate that PACT breast imaging can resolve most major and fine vascular structures within the breast, along with some pathological biomarkers that may indicate tumor development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Hongxing; Fang, Hengrui; Miller, Mitchell D.
2016-07-15
An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.
Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure
NASA Astrophysics Data System (ADS)
Bogani, C.; Gasparo, M. G.; Papini, A.
2009-07-01
We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
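A bare-bones coordinate pattern search conveys the poll-and-refine mechanics that GPS methods generalize. The sketch below uses the fixed poll set ±e_i and a nonsmooth objective whose kinks lie on known hyperplanes, whereas the paper adapts the poll directions to the nondifferentiability set and the feasible-region boundary.

```python
def pattern_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    """Basic coordinate pattern search: poll along +/- e_i, accept any
    improving point, and halve the step when no poll point improves.
    GPS methods generalize the poll set; the paper additionally
    conforms it to known nondifferentiability hyperplanes."""
    x, n = list(x0), len(x0)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(n):
            for s in (+step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:                 # accept the first improvement
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # refine the mesh
    return x, fx

# nonsmooth test: f(x) = |x0 - 1| + |x1 + 2|, kinks on known hyperplanes
xopt, fopt = pattern_search(lambda x: abs(x[0] - 1) + abs(x[1] + 2),
                            [0.0, 0.0])
```

Because the kinks here are axis-aligned, the ±e_i poll set already conforms to the nondifferentiability set, which is the geometric property the paper engineers for general hyperplane arrangements.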
Validation of the United States Marine Corps Qualified Candidate Population Model
2003-03-01
time. Fields are created in the database to support this forecasting. User forms and a macro are programmed in Microsoft VBA to develop the ... at 0.001. To accomplish 50,000 iterations of a minimization problem, this study wrote a macro in the VBA programming language that guides the solver ... success in the commissioning process. To improve the diagnostics of this propensity model, other factors were considered as well. Applying SQL
Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems
NASA Technical Reports Server (NTRS)
He, Yuning; Davies, Misty Dawn
2014-01-01
The analysis of a safety-critical system often requires detailed knowledge of the safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for the safety boundaries.
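In one dimension the detect-and-refine idea reduces to bisection between a known-safe and a known-unsafe sample; the toy below illustrates that special case. The paper's statistical machinery handles high-dimensional parameterized boundary shapes, which this sketch does not attempt.

```python
def locate_boundary(is_safe, lo, hi, tol=1e-8):
    """Locate the 1-D safety boundary between a known-safe parameter
    lo and a known-unsafe parameter hi by bisection. Active-learning
    schemes like the paper's place each new 'simulation run' where it
    is most informative; in one dimension that is exactly the midpoint."""
    assert is_safe(lo) and not is_safe(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_safe(mid):
            lo = mid        # boundary lies above mid
        else:
            hi = mid        # boundary lies below mid
    return 0.5 * (lo + hi)

# toy system (hypothetical): safe while the parameter stays below 0.7
b = locate_boundary(lambda x: x < 0.7, 0.0, 1.0)
```

Each query halves the uncertainty interval, so the boundary is pinned to tolerance `tol` in about log2((hi-lo)/tol) simulation runs.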
Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J
2015-10-01
To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculation required for full-FOV iterative reconstruction has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphics processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using data acquired with PROPELLER MRI, the reconstructed images were saved in the digital imaging and communications in medicine (DICOM) format. The proposed rFOV reconstruction reduced the gridding time by 97%, and the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). An in vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. The image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstruction to shorten reconstruction time.
The solution of linear systems of equations with a structural analysis code on the NAS CRAY-2
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Overman, Andrea L.
1988-01-01
Two methods for solving linear systems of equations on the NAS Cray-2 are described. One is a direct method; the other is an iterative method. Both methods exploit the architecture of the Cray-2, particularly the vectorization, and are aimed at structural analysis applications. To demonstrate and evaluate the methods, they were installed in a finite element structural analysis code denoted the Computational Structural Mechanics (CSM) Testbed. A description of the techniques used to integrate the two solvers into the Testbed is given. Storage schemes, memory requirements, operation counts, and reformatting procedures are discussed. Finally, results from the new methods are compared with results from the initial Testbed sparse Choleski equation solver for three structural analysis problems. The new direct solvers described achieve the highest computational rates of the methods compared. The new iterative methods are not able to achieve computation rates as high as the vectorized direct solvers, but are best for well-conditioned problems which require fewer iterations to converge to the solution.
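The trade-off noted at the end, iterative solvers winning on well-conditioned problems, can be seen with a plain conjugate-gradient loop. The sketch below is for a generic symmetric positive definite system, not the CSM Testbed solvers themselves.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Unpreconditioned conjugate gradients for a symmetric positive
    definite A. Returns the solution and the iteration count, which
    grows with the condition number; this is why iterative solvers
    shine on well-conditioned problems."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k
        p = r + (rs_new / rs) * p   # A-conjugate update
        rs = rs_new
    return x, max_iter
```

Unlike a direct factorization, each CG step costs only one matrix-vector product, so when few iterations suffice the total work is far below a Cholesky solve.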
Hirata, Kenichiro; Utsunomiya, Daisuke; Kidoh, Masafumi; Funama, Yoshinori; Oda, Seitaro; Yuki, Hideaki; Nagayama, Yasunori; Iyama, Yuji; Nakaura, Takeshi; Sakabe, Daisuke; Tsujita, Kenichi; Yamashita, Yasuyuki
2018-05-01
We aimed to evaluate the image quality of coronary CT angiography (CTA) under different settings of the forward-projected model-based iterative reconstruction solution (FIRST). Thirty patients undergoing coronary CTA were included. Each image was reconstructed using filtered back projection (FBP), adaptive iterative dose reduction 3D (AIDR-3D), and two model-based iterative reconstructions, FIRST-body and FIRST-cardiac sharp (CS). CT number and noise were measured in the coronary vessels and plaque. Subjective image quality scores were obtained for noise and structure visibility. In the objective image analysis, FIRST-body produced the significantly highest contrast-to-noise ratio. Regarding subjective image quality, FIRST-CS had the highest score for structure visibility, although its image noise score was inferior to that of FIRST-body. In conclusion, FIRST provides significant improvements in objective and subjective image quality compared with FBP and AIDR-3D. FIRST-body effectively reduces image noise, but the structure visibility with FIRST-CS was superior to that of FIRST-body.
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors have been explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
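Exact policy iteration, the error-free special case that GPI generalizes, can be sketched for a finite MDP as follows; the two-state transition matrices and rewards used below are illustrative assumptions, not from the paper.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Exact policy iteration for a finite MDP: P[a] is the transition
    matrix under action a, R[a] the per-state reward. GPI interleaves
    (possibly approximate) versions of these same two steps."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # policy improvement: greedy one-step lookahead
        q = np.array([R[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy
```

GPI relaxes both steps: evaluation may be truncated or approximated, and improvement may be inexact; the paper's contribution is characterizing how large those approximation errors can be while preserving admissibility and convergence.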
Multivariable frequency domain identification via 2-norm minimization
NASA Technical Reports Server (NTRS)
Bayard, David S.
1992-01-01
The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
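The Gauss-Newton outer loop described above can be sketched for a generic separable curve-fitting problem. The example below adds a simple backtracking step for robustness and is not the paper's SK-initialized MIMO identification code; the exponential model and its parameters are illustrative assumptions.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=50):
    """Damped Gauss-Newton for nonlinear least squares min ||r(x)||_2:
    each step solves the linearized problem J dx = -r, the same outer
    loop the paper warm-starts with an SK-style reweighted
    least-squares solution."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        # simple backtracking: halve the step until the residual shrinks
        step = 1.0
        while step > 1e-8 and \
                np.linalg.norm(residual(x + step * dx)) >= np.linalg.norm(r):
            step *= 0.5
        x = x + step * dx
        if np.linalg.norm(step * dx) < 1e-12:
            break
    return x

# fit y = a * exp(b * t) to noiseless data generated with a=2, b=0.5
t = np.linspace(0, 2, 20)
y = 2.0 * np.exp(0.5 * t)
res = lambda x: x[0] * np.exp(x[1] * t) - y
jac = lambda x: np.stack([np.exp(x[1] * t),
                          x[0] * t * np.exp(x[1] * t)], axis=1)
x_hat = gauss_newton(res, jac, [1.0, 0.0])
```

A good initial iterate matters for Gauss-Newton, which is precisely why the paper seeds it with the solution of the sequentially reweighted (SK) least-squares iteration.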
Pilot Deployment of the LDSD Parachute via a Supersonic Ballute
NASA Technical Reports Server (NTRS)
Tanner, Christopher L.; O'Farrell, Clara; Gallon, John C.; Clark, Ian G.; Witkowski, Allen; Woodruff, Paul
2015-01-01
The Low Density Supersonic Decelerator (LDSD) Project required the use of a pilot system due to the inability to mortar deploy its main supersonic parachute. A mortar deployed 4.4 m diameter supersonic ram-air ballute was selected as the pilot system for its high drag coefficient and stability relative to candidate supersonic parachutes at the targeted operational Mach number of 3. The ballute underwent a significant development program that included the development of a new liquid methanol-based pre-inflation system to assist the ballute inflation process. Both pneumatic and pyrotechnic mortar tests were conducted to verify orderly rigging deployment, bag strip, inflation aid activation, and proper mortar performance. The ballute was iteratively analyzed between fluid and structural analysis codes to obtain aerodynamic and aerothermodynamic estimates as well as estimates of the ballute's structural integrity and shape. The ballute was successfully flown in June 2014 at a Mach number of 2.73 as part of the first LDSD supersonic flight test and performed beyond expectations. Recovery of the ballute indicated that it did not exceed its structural or thermal capabilities. This flight set a historical precedent as it represented the largest ballute to have ever been successfully flown at this Mach number by a NASA entity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henning, C.
This report contains papers on the following topics: conceptual design; radiation damage of ITER magnet systems; insulation system of the magnets; critical current density and strain sensitivity; toroidal field coil structural analysis; stress analysis for the ITER central solenoid; and volt-second capabilities and PF magnet configurations.
Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.
2016-01-21
Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Finally, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
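The periodic-Pulay idea described above can be sketched on a small linear fixed-point problem. This is a toy stand-in only: real SCF mixing acts on densities or potentials, and all parameter values below (mixing parameter, period, history length, test map) are illustrative choices, not the paper's.

```python
import numpy as np

def periodic_pulay(g, x0, beta=0.3, k=4, m=6, tol=1e-10, maxit=200):
    """Solve the fixed point x = g(x): linear mixing on every step,
    Pulay (DIIS) extrapolation over the last m residuals every k-th step."""
    x = np.asarray(x0, dtype=float)
    X, F = [], []                           # iterate / residual history
    for it in range(1, maxit + 1):
        f = g(x) - x                        # fixed-point residual
        if np.linalg.norm(f) < tol:
            break
        X.append(x.copy()); F.append(f.copy())
        X, F = X[-m:], F[-m:]
        if it % k == 0 and len(F) > 1:
            nh = len(F)
            Fm = np.array(F)
            M = np.ones((nh + 1, nh + 1))   # bordered Gram system:
            M[:nh, :nh] = Fm @ Fm.T         # minimize ||sum c_i f_i||
            M[nh, nh] = 0.0                 # subject to sum c_i = 1
            rhs = np.zeros(nh + 1); rhs[nh] = 1.0
            c = np.linalg.lstsq(M, rhs, rcond=None)[0][:nh]
            x = sum(ci * (xi + beta * fi) for ci, xi, fi in zip(c, X, F))
        else:
            x = x + beta * f                # plain linear mixing
    return x

# Contractive linear test problem: fixed point of x = Ax + b
rng = np.random.default_rng(0)
A = 0.4 * rng.standard_normal((8, 8)) / np.sqrt(8)
b = rng.standard_normal(8)
x_star = np.linalg.solve(np.eye(8) - A, b)
x_fp = periodic_pulay(lambda v: A @ v + b, np.zeros(8))
```

Setting k=1 recovers classic DIIS on every step; the paper's point is that k>1 can be both cheaper and more robust.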
Overview of the preliminary design of the ITER plasma control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snipes, J. A.; Albanese, R.; Ambrosino, G.
An overview of the Preliminary Design of the ITER Plasma Control System (PCS) is described here, which focusses on the needs for 1st plasma and early plasma operation in hydrogen/helium (H/He) up to a plasma current of 15 MA with moderate auxiliary heating power in low confinement mode (L-mode). Candidate control schemes for basic magnetic control, including divertor operation and kinetic control of the electron density with gas puffing and pellet injection, were developed. Commissioning of the auxiliary heating systems is included as well as support functions for stray field topology and real-time plasma boundary reconstruction. Initial exception handling schemes for faults of essential plant systems and for disruption protection were developed. The PCS architecture was also developed to be capable of handling basic control for early commissioning and the advanced control functions that will be needed for future high performance operation. A plasma control simulator is also being developed to test and validate control schemes. To handle the complexity of the ITER PCS, a systems engineering approach has been adopted with the development of a plasma control database to keep track of all control requirements.
Overview of the preliminary design of the ITER plasma control system
NASA Astrophysics Data System (ADS)
Snipes, J. A.; Albanese, R.; Ambrosino, G.; Ambrosino, R.; Amoskov, V.; Blanken, T. C.; Bremond, S.; Cinque, M.; de Tommasi, G.; de Vries, P. C.; Eidietis, N.; Felici, F.; Felton, R.; Ferron, J.; Formisano, A.; Gribov, Y.; Hosokawa, M.; Hyatt, A.; Humphreys, D.; Jackson, G.; Kavin, A.; Khayrutdinov, R.; Kim, D.; Kim, S. H.; Konovalov, S.; Lamzin, E.; Lehnen, M.; Lukash, V.; Lomas, P.; Mattei, M.; Mineev, A.; Moreau, P.; Neu, G.; Nouailletas, R.; Pautasso, G.; Pironti, A.; Rapson, C.; Raupp, G.; Ravensbergen, T.; Rimini, F.; Schneider, M.; Travere, J.-M.; Treutterer, W.; Villone, F.; Walker, M.; Welander, A.; Winter, A.; Zabeo, L.
2017-12-01
An overview of the preliminary design of the ITER plasma control system (PCS) is described here, which focusses on the needs for 1st plasma and early plasma operation in hydrogen/helium (H/He) up to a plasma current of 15 MA with moderate auxiliary heating power in low confinement mode (L-mode). Candidate control schemes for basic magnetic control, including divertor operation and kinetic control of the electron density with gas puffing and pellet injection, were developed. Commissioning of the auxiliary heating systems is included as well as support functions for stray field topology and real-time plasma boundary reconstruction. Initial exception handling schemes for faults of essential plant systems and for disruption protection were developed. The PCS architecture was also developed to be capable of handling basic control for early commissioning and the advanced control functions that will be needed for future high performance operation. A plasma control simulator is also being developed to test and validate control schemes. To handle the complexity of the ITER PCS, a systems engineering approach has been adopted with the development of a plasma control database to keep track of all control requirements.
Overview of the preliminary design of the ITER plasma control system
Snipes, J. A.; Albanese, R.; Ambrosino, G.; ...
2017-09-11
An overview of the Preliminary Design of the ITER Plasma Control System (PCS) is described here, which focusses on the needs for 1st plasma and early plasma operation in hydrogen/helium (H/He) up to a plasma current of 15 MA with moderate auxiliary heating power in low confinement mode (L-mode). Candidate control schemes for basic magnetic control, including divertor operation and kinetic control of the electron density with gas puffing and pellet injection, were developed. Commissioning of the auxiliary heating systems is included as well as support functions for stray field topology and real-time plasma boundary reconstruction. Initial exception handling schemes for faults of essential plant systems and for disruption protection were developed. The PCS architecture was also developed to be capable of handling basic control for early commissioning and the advanced control functions that will be needed for future high performance operation. A plasma control simulator is also being developed to test and validate control schemes. To handle the complexity of the ITER PCS, a systems engineering approach has been adopted with the development of a plasma control database to keep track of all control requirements.
de Oliveira, Tiago E.; Netz, Paulo A.; Kremer, Kurt; ...
2016-05-03
We present a coarse-graining strategy that we test for aqueous mixtures. The method uses pair-wise cumulative coordination as a target function within an iterative Boltzmann inversion (IBI) like protocol. We name this method coordination iterative Boltzmann inversion (C–IBI). While the underlying coarse-grained model is still structure based and, thus, preserves pair-wise solution structure, our method also reproduces solvation thermodynamics of binary and/or ternary mixtures. In addition, we observe much faster convergence within C–IBI compared to IBI. To validate the robustness, we apply C–IBI to test cases of the solvation thermodynamics of aqueous urea and of triglycine in aqueous urea.
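The core IBI update that C-IBI builds on can be sketched in a few lines. Here the expensive coarse-grained simulation step is replaced by the dilute-limit relation g(r) = exp(-V/kT), and the target RDF is an invented smooth curve, so this is a schematic of the update rule only, not the paper's coordination-based protocol.

```python
import numpy as np

kT = 1.0
r = np.linspace(0.8, 3.0, 200)

# Invented target RDF standing in for reference all-atom data
g_target = 1.0 + 0.5 * np.exp(-(r - 1.5) ** 2 / 0.05)

def sample_rdf(V):
    """Stand-in for the coarse-grained simulation step: in the dilute
    limit the RDF is just the Boltzmann factor of the pair potential."""
    return np.exp(-V / kT)

# Iterative Boltzmann inversion: V <- V + kT * ln(g_current / g_target)
V = np.zeros_like(r)                  # initial guess: no interaction
for _ in range(5):
    g = sample_rdf(V)
    V += kT * np.log(g / g_target)
```

In this dilute toy the update converges in one pass to V = -kT ln g_target; in a real C-IBI run each pass requires a fresh simulation, which is where the faster convergence reported above matters.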
NASA Astrophysics Data System (ADS)
Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.
2016-06-01
The multiple solution of linear algebraic systems with dense matrices by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori condition for the recomputing, based on the change of the arithmetic mean of the solution time during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS are carried out for four different sets of matrices on two examples of microstrip structures. For the solution of 100 linear systems, an acceleration of up to 1.6 times compared to the approach without recomputing is obtained.
NASA Astrophysics Data System (ADS)
Vadolia, Gautam R.; Premjit Singh, K.
2017-04-01
Electron Beam Welding (EBW) technology is an established and widely adopted technique in the nuclear research and development area. Electron beam welding has been considered a candidate process for ITER vacuum vessel fabrication. The Dhruva reactor at BARC, Mumbai, and the niobium superconducting accelerator cavity at BARC have adopted EB welding as a fabrication route. This short review paper consolidates a study of process capability and limitations based on the available literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marian, Jaime; Becquart, Charlotte S.; Domain, Christophe
2017-06-09
Under the anticipated operating conditions for demonstration magnetic fusion reactors beyond ITER, structural materials will be exposed to unprecedented conditions of irradiation, heat flux, and temperature. While such extreme environments remain inaccessible experimentally, computational modeling and simulation can provide qualitative and quantitative insights into materials response and complement the available experimental measurements with carefully validated predictions. For plasma-facing components such as the first wall and the divertor, tungsten (W) has been selected as the best candidate material due to its superior high-temperature and irradiation properties. In this paper we provide a review of recent efforts in computational modeling of W both as a plasma-facing material exposed to He deposition and as a bulk structural material subjected to fast neutron irradiation. We use a multiscale modeling approach, commonly used as the materials modeling paradigm, to define the outline of the paper and highlight recent advances using several classes of techniques and their interconnection. We highlight several of the most salient findings obtained via computational modeling and point out a number of remaining challenges and future research directions.
Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue
Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan
2015-01-01
Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector, and is then compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method. PMID:25873987
Motion estimation using the firefly algorithm in ultrasonic image sequence of soft tissue.
Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan
2015-01-01
Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images usually have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a few candidate points to obtain the optimal motion vector, and is then compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can assess the vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.
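The firefly update rule underlying the IFA can be sketched on a generic cost function. The block-matching cost over ultrasound frames is replaced here by a simple quadratic, and every parameter value (population size, attraction constants, randomness schedule) is an illustrative choice rather than the paper's tuning.

```python
import numpy as np

def firefly_minimize(cost, bounds, n=20, n_gen=100, beta0=1.0,
                     gamma=0.1, alpha=0.5, seed=0):
    """Generic firefly algorithm: each firefly moves toward every
    brighter (lower-cost) one, with attractiveness decaying with
    distance and a shrinking random walk for exploration."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n, 2))
    f = np.array([cost(x) for x in X])
    best_x, best_f = X[np.argmin(f)].copy(), f.min()
    for gen in range(n_gen):
        a = alpha * 0.95 ** gen            # decaying randomness
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:            # j is brighter: attract i
                    r2 = np.sum((X[j] - X[i]) ** 2)
                    X[i] += (beta0 * np.exp(-gamma * r2) * (X[j] - X[i])
                             + a * rng.standard_normal(2))
            X[i] = np.clip(X[i], lo, hi)
            f[i] = cost(X[i])
            if f[i] < best_f:
                best_x, best_f = X[i].copy(), f[i]
    return best_x, best_f

# Toy cost standing in for a block-matching error surface
sphere = lambda v: float(np.sum(v ** 2))
best_x, best_f = firefly_minimize(sphere, (-5.0, 5.0))
```

In the motion-estimation setting, the 2-D position would be a candidate displacement and the cost a block-matching error between frames; the point of the IFA over full search is evaluating far fewer candidates.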
GEM detectors development for radiation environment: neutron tests and simulations
NASA Astrophysics Data System (ADS)
Chernyshova, Maryna; Jednoróg, Sławomir; Malinowski, Karol; Czarski, Tomasz; Ziółkowski, Adam; Bieńkowska, Barbara; Prokopowicz, Rafał; Łaszyńska, Ewa; Kowalska-Strzeciwilk, Ewa; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Zabołotny, Wojciech; Wojeński, Andrzej; Krawczyk, Rafał D.; Linczuk, Paweł; Potrykus, Paweł; Bajdel, Barcel
2016-09-01
One of the requests from the ongoing ITER-Like Wall Project is to have diagnostics for Soft X-Ray (SXR) monitoring in a tokamak. Such diagnostics should be focused on tungsten emission measurements, as increased attention is currently paid to tungsten, which has become the main candidate plasma-facing material for ITER and future fusion reactors. In addition, such diagnostics should be able to withstand the harsh radiation environment of a tokamak during its operation. The presented work relates to the development of such diagnostics based on Gas Electron Multiplier (GEM) technology. More specifically, the influence of neutron radiation on the performance of GEM detectors is studied both experimentally and through computer simulations. The neutron-induced radioactivity (after neutron source exposure) was found to be not pronounced compared to the impact of other secondary neutron reaction products (during the exposure).
Progress in Development of the ITER Plasma Control System Simulation Platform
NASA Astrophysics Data System (ADS)
Walker, Michael; Humphreys, David; Sammuli, Brian; Ambrosino, Giuseppe; de Tommasi, Gianmaria; Mattei, Massimiliano; Raupp, Gerhard; Treutterer, Wolfgang; Winter, Axel
2017-10-01
We report on progress made and expected uses of the Plasma Control System Simulation Platform (PCSSP), the primary test environment for development of the ITER Plasma Control System (PCS). PCSSP will be used for verification and validation of the ITER PCS Final Design for First Plasma, to be completed in 2020. We discuss the objectives of PCSSP, its overall structure, selected features, application to existing devices, and expected evolution over the lifetime of the ITER PCS. We describe an archiving solution for simulation results, methods for incorporating physics models of the plasma and physical plant (tokamak, actuator, and diagnostic systems) into PCSSP, and defining characteristics of models suitable for a plasma control development environment such as PCSSP. Applications of PCSSP simulation models including resistive plasma equilibrium evolution are demonstrated. PCSSP development supported by ITER Organization under ITER/CTS/6000000037. Resistive evolution code developed under General Atomics' Internal funding. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.
NASA Astrophysics Data System (ADS)
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven an efficient tool for enhancing the periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the filtered signal with the maximum kurtosis as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without requiring a prior period or the choice of the order of shift, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
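IMCKD's data-driven period selection can be sketched as follows. The absolute value |x| is used as a crude stand-in for the Hilbert envelope so the sketch stays numpy-only, and all signal parameters (period, amplitudes, search range) are invented for illustration.

```python
import numpy as np

def estimate_period(x, min_lag, max_lag):
    """Estimate the dominant impulse period as the lag of the largest
    autocorrelation peak of the (mean-removed) envelope signal."""
    env = np.abs(x)                    # crude envelope stand-in
    env = env - env.mean()
    ac = np.correlate(env, env, mode='full')[env.size - 1:]
    return min_lag + int(np.argmax(ac[min_lag:max_lag + 1]))

# Synthetic vibration signal: fault impulses every T samples plus noise
rng = np.random.default_rng(2)
T, n = 50, 2000
sig = 0.05 * rng.standard_normal(n)
sig[::T] += 1.0
period = estimate_period(sig, 10, 200)
```

Because the period is re-estimated this way after every deconvolution step, no prior fault period or resampling is needed, which is the first two advantages claimed above.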
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. Dale, Jr.
1990-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
High-performance equation solvers and their impact on finite element analysis
NASA Technical Reports Server (NTRS)
Poole, Eugene L.; Knight, Norman F., Jr.; Davis, D. D., Jr.
1992-01-01
The role of equation solvers in modern structural analysis software is described. Direct and iterative equation solvers which exploit vectorization on modern high-performance computer systems are described and compared. The direct solvers are two Cholesky factorization methods. The first method utilizes a novel variable-band data storage format to achieve very high computation rates and the second method uses a sparse data storage format designed to reduce the number of operations. The iterative solvers are preconditioned conjugate gradient methods. Two different preconditioners are included; the first uses a diagonal matrix storage scheme to achieve high computation rates and the second requires a sparse data storage scheme and converges to the solution in fewer iterations than the first. The impact of using all of the equation solvers in a common structural analysis software system is demonstrated by solving several representative structural analysis problems.
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1990-01-01
Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis. At run time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures, which perform execution-time preprocessing, and executors, transformed versions of the source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
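The inspector/executor split described above can be sketched generically. The dependence function and loop body below are illustrative placeholders; a real system would extract the dependences from the loop's subscripts at run time.

```python
from collections import defaultdict

def inspector(n, deps):
    """Inspector: assign each iteration the earliest wavefront at which
    all of its dependencies have already executed (a topological level)."""
    level = [0] * n
    for i in range(n):
        for j in deps(i):              # iteration j must run before i
            level[i] = max(level[i], level[j] + 1)
    waves = defaultdict(list)
    for i, l in enumerate(level):
        waves[l].append(i)
    return [waves[l] for l in sorted(waves)]

def executor(waves, body):
    """Executor: run wavefronts in order; iterations within a wavefront
    are mutually independent and could execute concurrently."""
    for wave in waves:
        for i in wave:                 # conceptually a parallel loop
            body(i)

# Example: iteration i depends on iteration i - 3
n = 12
waves = inspector(n, lambda i: [i - 3] if i >= 3 else [])
order = []
executor(waves, order.append)
```

The inspector's cost is paid once, so repeated executions of the same loop with the same dependency structure amortize it, as the abstract notes.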
The Role of Combined ICRF and NBI Heating in JET Hybrid Plasmas in Quest for High D-T Fusion Yield
NASA Astrophysics Data System (ADS)
Mantsinen, Mervi; Challis, Clive; Frigione, Domenico; Graves, Jonathan; Hobirk, Joerg; Belonohy, Eva; Czarnecka, Agata; Eriksson, Jacob; Gallart, Dani; Goniche, Marc; Hellesen, Carl; Jacquet, Philippe; Joffrin, Emmanuel; King, Damian; Krawczyk, Natalia; Lennholm, Morten; Lerche, Ernesto; Pawelec, Ewa; Sips, George; Solano, Emilia R.; Tsalas, Maximos; Valisa, Marco
2017-10-01
Combined ICRF and NBI heating played a key role in achieving the world-record fusion yield in the first deuterium-tritium campaign at the JET tokamak in 1997. The current plans for JET include new experiments with deuterium-tritium (D-T) plasmas under more ITER-like conditions given the recently installed ITER-like wall (ILW). In the 2015-2016 campaigns, significant efforts have been devoted to the development of high-performance plasma scenarios compatible with the ILW in preparation for the forthcoming D-T campaign. Good progress was made in both the inductive (baseline) and the hybrid scenario: a new record JET ILW fusion yield with a significantly extended duration of the high-performance phase was achieved. This paper reports on the progress with the hybrid scenario, which is a candidate for ITER long-pulse operation (~1000 s) thanks to its improved normalized confinement, reduced plasma current and higher plasma beta with respect to the ITER reference baseline scenario. The combined NBI+ICRF power in the hybrid scenario was increased to 33 MW and the record fusion yield, averaged over 100 ms, to 2.9×10^16 neutrons/s, from the 2014 ILW fusion record of 2.3×10^16 neutrons/s. Impurity control with ICRF waves was one of the key means for extending the duration of the high-performance phase. The main results are reviewed, covering both key core and edge plasma issues.
Filtered gradient reconstruction algorithm for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure usually follows a dense matrix distribution, such as the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed that introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm, yielding improved image quality. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality performance than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
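The filtered-gradient idea can be sketched with plain gradient descent on the data-fidelity term plus a smoothing convolution in each iteration. This is a schematic only: the CSI matrix Φ is replaced by a generic random matrix, the signal is 1-D, and the low-pass kernel is an arbitrary choice standing in for the paper's filter.

```python
import numpy as np

def filtered_gradient(Phi, y, n_iter=300, kernel=(0.05, 0.9, 0.05)):
    """Gradient descent on ||y - Phi x||^2 with a smoothing filter
    applied after every gradient step, mimicking the filtering step."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2    # 1 / sigma_max^2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)    # gradient step
        x = np.convolve(x, kernel, mode='same') # filtering step
    return x

# Smooth "spectral" signal recovered from compressive measurements
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 64)
x_true = np.sin(2 * np.pi * t)
Phi = rng.standard_normal((32, 64)) / np.sqrt(32)
y = Phi @ x_true
x_hat = filtered_gradient(Phi, y)
```

Because the filter damps high-frequency components that the underdetermined gradient iteration cannot pin down, the iterates are steered toward smooth reconstructions, which is the intuition behind the quality gain reported above.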
NASA Astrophysics Data System (ADS)
Akiba, Masato; Jitsukawa, Shiroh; Muroga, Takeo
This paper describes the status of blanket technology and materials development for fusion power demonstration plants and commercial fusion plants. In particular, the ITER Test Blanket Module, IFMIF, JAERI/DOE HFIR and JUPITER-II projects, which play important roles in developing these technologies, are highlighted. The ITER Test Blanket Module project has been conducted to demonstrate tritium breeding and power generation using test blanket modules, which will be installed in the ITER facility. For structural material development, the present research status of reduced-activation ferritic steel, vanadium alloys, and SiC/SiC composites is reviewed.
Iteration and Anxiety in Mathematical Literature
ERIC Educational Resources Information Center
Capezzi, Rita; Kinsey, L. Christine
2016-01-01
We describe our experiences in team-teaching an honors seminar on mathematics and literature. We focus particularly on two of the texts we read: Georges Perec's "How to Ask Your Boss for a Raise" and Alain Robbe-Grillet's "Jealousy," both of which make use of iterative structures.
NASA Technical Reports Server (NTRS)
Barry, William
2001-01-01
Dr. William Barry, Manager, NASA Occupational Health Program, moderated this session. As in one of the opening sessions, he re-iterated that the overall theme for the next year will be facilitating and implementing NIAT-1 (NASA Integrated Action Team - Action 1). He presented a candidate list of topics for consideration and discussion: (1) NIAT-1; (2) Skin cancer detection and the NASA Solar Safe Program; (3) Weapons of mass destruction; (4) Quality assurance; (5) Audits; (6) Environment of care; (7) Infection control; (8) Medication management; and (9) Confidentiality of medical records.
Ambiguity resolution in systems using Omega for position location
NASA Technical Reports Server (NTRS)
Frenkel, G.; Gan, D. G.
1974-01-01
The lane ambiguity problem prevents the utilization of the Omega system for many applications such as locating buoys and balloons. The method of multiple lines of position introduced herein uses signals from four or more Omega stations for ambiguity resolution. The coordinates of the candidate points are determined first through the use of the Newton iterative procedure. Subsequently, a likelihood function is generated for each point, and the ambiguity is resolved by selecting the most likely point. The method was tested through simulation.
An overview of NSPCG: A nonsymmetric preconditioned conjugate gradient package
NASA Astrophysics Data System (ADS)
Oppe, Thomas C.; Joubert, Wayne D.; Kincaid, David R.
1989-05-01
The most recent research-oriented software package developed as part of the ITPACK Project is called "NSPCG" since it contains many nonsymmetric preconditioned conjugate gradient procedures. It is designed to solve large sparse systems of linear algebraic equations by a variety of different iterative methods. One of the main purposes for the development of the package is to provide a common modular structure for research on iterative methods for nonsymmetric matrices. Another purpose for the development of the package is to investigate the suitability of several iterative methods for vector computers. Since the vectorizability of an iterative method depends greatly on the matrix structure, NSPCG allows great flexibility in the operator representation. The coefficient matrix can be passed in one of several different matrix data storage schemes. These sparse data formats allow matrices with a wide range of structures from highly structured ones such as those with all nonzeros along a relatively small number of diagonals to completely unstructured sparse matrices. Alternatively, the package allows the user to call the accelerators directly with user-supplied routines for performing certain matrix operations. In this case, one can use the data format from an application program and not be required to copy the matrix into one of the package formats. This is particularly advantageous when memory space is limited. Some of the basic preconditioners that are available are point methods such as Jacobi, Incomplete LU Decomposition and Symmetric Successive Overrelaxation as well as block and multicolor preconditioners. The user can select from a large collection of accelerators such as Conjugate Gradient (CG), Chebyshev (SI, for semi-iterative), Generalized Minimal Residual (GMRES), Biconjugate Gradient Squared (BCGS) and many others. The package is modular so that almost any accelerator can be used with almost any preconditioner.
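The diagonal storage mentioned above, one of the sparse formats that makes iterative methods vectorizable, can be sketched with a DIA-format matrix-vector product. The storage convention shown is a simplified illustration in Python, not NSPCG's actual Fortran layout.

```python
import numpy as np

def dia_matvec(diagonals, offsets, x):
    """Mat-vec for diagonal (DIA) storage: each stored diagonal is a
    contiguous vector, so the inner work is long vector operations."""
    n = x.size
    y = np.zeros(n)
    for d, k in zip(diagonals, offsets):
        if k >= 0:                      # super-diagonal at offset k
            y[:n - k] += d[:n - k] * x[k:]
        else:                           # sub-diagonal at offset k
            y[-k:] += d[:n + k] * x[:n + k]
    return y

# 1-D Laplacian stencil [-1, 2, -1] stored as three diagonals
n = 6
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n)
x = np.arange(1.0, n + 1)
y = dia_matvec([off, main, off], [-1, 0, 1], x)
```

Matrices whose nonzeros lie on a few diagonals, like discretized PDE operators, fit this format exactly, which is why it vectorizes so well; fully unstructured matrices need the package's other formats.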
Overview of the JET results in support to ITER
NASA Astrophysics Data System (ADS)
Litaudon, X.; Abduallev, S.; Abhangi, M.; Abreu, P.; Afzal, M.; Aggarwal, K. M.; Ahlgren, T.; Ahn, J. H.; Aho-Mantila, L.; Aiba, N.; Airila, M.; Albanese, R.; Aldred, V.; Alegre, D.; Alessi, E.; Aleynikov, P.; Alfier, A.; Alkseev, A.; Allinson, M.; Alper, B.; Alves, E.; Ambrosino, G.; Ambrosino, R.; Amicucci, L.; Amosov, V.; Andersson Sundén, E.; Angelone, M.; Anghel, M.; Angioni, C.; Appel, L.; Appelbee, C.; Arena, P.; Ariola, M.; Arnichand, H.; Arshad, S.; Ash, A.; Ashikawa, N.; Aslanyan, V.; Asunta, O.; Auriemma, F.; Austin, Y.; Avotina, L.; Axton, M. D.; Ayres, C.; Bacharis, M.; Baciero, A.; Baião, D.; Bailey, S.; Baker, A.; Balboa, I.; Balden, M.; Balshaw, N.; Bament, R.; Banks, J. W.; Baranov, Y. F.; Barnard, M. A.; Barnes, D.; Barnes, M.; Barnsley, R.; Baron Wiechec, A.; Barrera Orte, L.; Baruzzo, M.; Basiuk, V.; Bassan, M.; Bastow, R.; Batista, A.; Batistoni, P.; Baughan, R.; Bauvir, B.; Baylor, L.; Bazylev, B.; Beal, J.; Beaumont, P. S.; Beckers, M.; Beckett, B.; Becoulet, A.; Bekris, N.; Beldishevski, M.; Bell, K.; Belli, F.; Bellinger, M.; Belonohy, É.; Ben Ayed, N.; Benterman, N. A.; Bergsåker, H.; Bernardo, J.; Bernert, M.; Berry, M.; Bertalot, L.; Besliu, C.; Beurskens, M.; Bieg, B.; Bielecki, J.; Biewer, T.; Bigi, M.; Bílková, P.; Binda, F.; Bisoffi, A.; Bizarro, J. P. S.; Björkas, C.; Blackburn, J.; Blackman, K.; Blackman, T. R.; Blanchard, P.; Blatchford, P.; Bobkov, V.; Boboc, A.; Bodnár, G.; Bogar, O.; Bolshakova, I.; Bolzonella, T.; Bonanomi, N.; Bonelli, F.; Boom, J.; Booth, J.; Borba, D.; Borodin, D.; Borodkina, I.; Botrugno, A.; Bottereau, C.; Boulting, P.; Bourdelle, C.; Bowden, M.; Bower, C.; Bowman, C.; Boyce, T.; Boyd, C.; Boyer, H. J.; Bradshaw, J. M. A.; Braic, V.; Bravanec, R.; Breizman, B.; Bremond, S.; Brennan, P. D.; Breton, S.; Brett, A.; Brezinsek, S.; Bright, M. D. J.; Brix, M.; Broeckx, W.; Brombin, M.; Brosławski, A.; Brown, D. P. D.; Brown, M.; Bruno, E.; Bucalossi, J.; Buch, J.; Buchanan, J.; Buckley, M. 
A.; Budny, R.; Bufferand, H.; Bulman, M.; Bulmer, N.; Bunting, P.; Buratti, P.; Burckhart, A.; Buscarino, A.; Busse, A.; Butler, N. K.; Bykov, I.; Byrne, J.; Cahyna, P.; Calabrò, G.; Calvo, I.; Camenen, Y.; Camp, P.; Campling, D. C.; Cane, J.; Cannas, B.; Capel, A. J.; Card, P. J.; Cardinali, A.; Carman, P.; Carr, M.; Carralero, D.; Carraro, L.; Carvalho, B. B.; Carvalho, I.; Carvalho, P.; Casson, F. J.; Castaldo, C.; Catarino, N.; Caumont, J.; Causa, F.; Cavazzana, R.; Cave-Ayland, K.; Cavinato, M.; Cecconello, M.; Ceccuzzi, S.; Cecil, E.; Cenedese, A.; Cesario, R.; Challis, C. D.; Chandler, M.; Chandra, D.; Chang, C. S.; Chankin, A.; Chapman, I. T.; Chapman, S. C.; Chernyshova, M.; Chitarin, G.; Ciraolo, G.; Ciric, D.; Citrin, J.; Clairet, F.; Clark, E.; Clark, M.; Clarkson, R.; Clatworthy, D.; Clements, C.; Cleverly, M.; Coad, J. P.; Coates, P. A.; Cobalt, A.; Coccorese, V.; Cocilovo, V.; Coda, S.; Coelho, R.; Coenen, J. W.; Coffey, I.; Colas, L.; Collins, S.; Conka, D.; Conroy, S.; Conway, N.; Coombs, D.; Cooper, D.; Cooper, S. R.; Corradino, C.; Corre, Y.; Corrigan, G.; Cortes, S.; Coster, D.; Couchman, A. S.; Cox, M. P.; Craciunescu, T.; Cramp, S.; Craven, R.; Crisanti, F.; Croci, G.; Croft, D.; Crombé, K.; Crowe, R.; Cruz, N.; Cseh, G.; Cufar, A.; Cullen, A.; Curuia, M.; Czarnecka, A.; Dabirikhah, H.; Dalgliesh, P.; Dalley, S.; Dankowski, J.; Darrow, D.; Davies, O.; Davis, W.; Day, C.; Day, I. E.; De Bock, M.; de Castro, A.; de la Cal, E.; de la Luna, E.; De Masi, G.; de Pablos, J. L.; De Temmerman, G.; De Tommasi, G.; de Vries, P.; Deakin, K.; Deane, J.; Degli Agostini, F.; Dejarnac, R.; Delabie, E.; den Harder, N.; Dendy, R. O.; Denis, J.; Denner, P.; Devaux, S.; Devynck, P.; Di Maio, F.; Di Siena, A.; Di Troia, C.; Dinca, P.; D'Inca, R.; Ding, B.; Dittmar, T.; Doerk, H.; Doerner, R. P.; Donné, T.; Dorling, S. E.; Dormido-Canto, S.; Doswon, S.; Douai, D.; Doyle, P. 
T.; Drenik, A.; Drewelow, P.; Drews, P.; Duckworth, Ph.; Dumont, R.; Dumortier, P.; Dunai, D.; Dunne, M.; Ďuran, I.; Durodié, F.; Dutta, P.; Duval, B. P.; Dux, R.; Dylst, K.; Dzysiuk, N.; Edappala, P. V.; Edmond, J.; Edwards, A. M.; Edwards, J.; Eich, Th.; Ekedahl, A.; El-Jorf, R.; Elsmore, C. G.; Enachescu, M.; Ericsson, G.; Eriksson, F.; Eriksson, J.; Eriksson, L. G.; Esposito, B.; Esquembri, S.; Esser, H. G.; Esteve, D.; Evans, B.; Evans, G. E.; Evison, G.; Ewart, G. D.; Fagan, D.; Faitsch, M.; Falie, D.; Fanni, A.; Fasoli, A.; Faustin, J. M.; Fawlk, N.; Fazendeiro, L.; Fedorczak, N.; Felton, R. C.; Fenton, K.; Fernades, A.; Fernandes, H.; Ferreira, J.; Fessey, J. A.; Février, O.; Ficker, O.; Field, A.; Fietz, S.; Figueiredo, A.; Figueiredo, J.; Fil, A.; Finburg, P.; Firdaouss, M.; Fischer, U.; Fittill, L.; Fitzgerald, M.; Flammini, D.; Flanagan, J.; Fleming, C.; Flinders, K.; Fonnesu, N.; Fontdecaba, J. M.; Formisano, A.; Forsythe, L.; Fortuna, L.; Fortuna-Zalesna, E.; Fortune, M.; Foster, S.; Franke, T.; Franklin, T.; Frasca, M.; Frassinetti, L.; Freisinger, M.; Fresa, R.; Frigione, D.; Fuchs, V.; Fuller, D.; Futatani, S.; Fyvie, J.; Gál, K.; Galassi, D.; Gałązka, K.; Galdon-Quiroga, J.; Gallagher, J.; Gallart, D.; Galvão, R.; Gao, X.; Gao, Y.; Garcia, J.; Garcia-Carrasco, A.; García-Muñoz, M.; Gardarein, J.-L.; Garzotti, L.; Gaudio, P.; Gauthier, E.; Gear, D. F.; Gee, S. J.; Geiger, B.; Gelfusa, M.; Gerasimov, S.; Gervasini, G.; Gethins, M.; Ghani, Z.; Ghate, M.; Gherendi, M.; Giacalone, J. C.; Giacomelli, L.; Gibson, C. S.; Giegerich, T.; Gil, C.; Gil, L.; Gilligan, S.; Gin, D.; Giovannozzi, E.; Girardo, J. B.; Giroud, C.; Giruzzi, G.; Glöggler, S.; Godwin, J.; Goff, J.; Gohil, P.; Goloborod'ko, V.; Gomes, R.; Gonçalves, B.; Goniche, M.; Goodliffe, M.; Goodyear, A.; Gorini, G.; Gosk, M.; Goulding, R.; Goussarov, A.; Gowland, R.; Graham, B.; Graham, M. E.; Graves, J. P.; Grazier, N.; Grazier, P.; Green, N. R.; Greuner, H.; Grierson, B.; Griph, F. 
S.; Grisolia, C.; Grist, D.; Groth, M.; Grove, R.; Grundy, C. N.; Grzonka, J.; Guard, D.; Guérard, C.; Guillemaut, C.; Guirlet, R.; Gurl, C.; Utoh, H. H.; Hackett, L. J.; Hacquin, S.; Hagar, A.; Hager, R.; Hakola, A.; Halitovs, M.; Hall, S. J.; Hallworth Cook, S. P.; Hamlyn-Harris, C.; Hammond, K.; Harrington, C.; Harrison, J.; Harting, D.; Hasenbeck, F.; Hatano, Y.; Hatch, D. R.; Haupt, T. D. V.; Hawes, J.; Hawkes, N. C.; Hawkins, J.; Hawkins, P.; Haydon, P. W.; Hayter, N.; Hazel, S.; Heesterman, P. J. L.; Heinola, K.; Hellesen, C.; Hellsten, T.; Helou, W.; Hemming, O. N.; Hender, T. C.; Henderson, M.; Henderson, S. S.; Henriques, R.; Hepple, D.; Hermon, G.; Hertout, P.; Hidalgo, C.; Highcock, E. G.; Hill, M.; Hillairet, J.; Hillesheim, J.; Hillis, D.; Hizanidis, K.; Hjalmarsson, A.; Hobirk, J.; Hodille, E.; Hogben, C. H. A.; Hogeweij, G. M. D.; Hollingsworth, A.; Hollis, S.; Homfray, D. A.; Horáček, J.; Hornung, G.; Horton, A. R.; Horton, L. D.; Horvath, L.; Hotchin, S. P.; Hough, M. R.; Howarth, P. J.; Hubbard, A.; Huber, A.; Huber, V.; Huddleston, T. M.; Hughes, M.; Huijsmans, G. T. A.; Hunter, C. L.; Huynh, P.; Hynes, A. M.; Iglesias, D.; Imazawa, N.; Imbeaux, F.; Imríšek, M.; Incelli, M.; Innocente, P.; Irishkin, M.; Ivanova-Stanik, I.; Jachmich, S.; Jacobsen, A. S.; Jacquet, P.; Jansons, J.; Jardin, A.; Järvinen, A.; Jaulmes, F.; Jednoróg, S.; Jenkins, I.; Jeong, C.; Jepu, I.; Joffrin, E.; Johnson, R.; Johnson, T.; Johnston, Jane; Joita, L.; Jones, G.; Jones, T. T. C.; Hoshino, K. K.; Kallenbach, A.; Kamiya, K.; Kaniewski, J.; Kantor, A.; Kappatou, A.; Karhunen, J.; Karkinsky, D.; Karnowska, I.; Kaufman, M.; Kaveney, G.; Kazakov, Y.; Kazantzidis, V.; Keeling, D. L.; Keenan, T.; Keep, J.; Kempenaars, M.; Kennedy, C.; Kenny, D.; Kent, J.; Kent, O. N.; Khilkevich, E.; Kim, H. T.; Kim, H. S.; Kinch, A.; king, C.; King, D.; King, R. F.; Kinna, D. J.; Kiptily, V.; Kirk, A.; Kirov, K.; Kirschner, A.; Kizane, G.; Klepper, C.; Klix, A.; Knight, P.; Knipe, S. 
J.; Knott, S.; Kobuchi, T.; Köchl, F.; Kocsis, G.; Kodeli, I.; Kogan, L.; Kogut, D.; Koivuranta, S.; Kominis, Y.; Köppen, M.; Kos, B.; Koskela, T.; Koslowski, H. R.; Koubiti, M.; Kovari, M.; Kowalska-Strzęciwilk, E.; Krasilnikov, A.; Krasilnikov, V.; Krawczyk, N.; Kresina, M.; Krieger, K.; Krivska, A.; Kruezi, U.; Książek, I.; Kukushkin, A.; Kundu, A.; Kurki-Suonio, T.; Kwak, S.; Kwiatkowski, R.; Kwon, O. J.; Laguardia, L.; Lahtinen, A.; Laing, A.; Lam, N.; Lambertz, H. T.; Lane, C.; Lang, P. T.; Lanthaler, S.; Lapins, J.; Lasa, A.; Last, J. R.; Łaszyńska, E.; Lawless, R.; Lawson, A.; Lawson, K. D.; Lazaros, A.; Lazzaro, E.; Leddy, J.; Lee, S.; Lefebvre, X.; Leggate, H. J.; Lehmann, J.; Lehnen, M.; Leichtle, D.; Leichuer, P.; Leipold, F.; Lengar, I.; Lennholm, M.; Lerche, E.; Lescinskis, A.; Lesnoj, S.; Letellier, E.; Leyland, M.; Leysen, W.; Li, L.; Liang, Y.; Likonen, J.; Linke, J.; Linsmeier, Ch.; Lipschultz, B.; Liu, G.; Liu, Y.; Lo Schiavo, V. P.; Loarer, T.; Loarte, A.; Lobel, R. C.; Lomanowski, B.; Lomas, P. J.; Lönnroth, J.; López, J. M.; López-Razola, J.; Lorenzini, R.; Losada, U.; Lovell, J. J.; Loving, A. B.; Lowry, C.; Luce, T.; Lucock, R. M. A.; Lukin, A.; Luna, C.; Lungaroni, M.; Lungu, C. P.; Lungu, M.; Lunniss, A.; Lupelli, I.; Lyssoivan, A.; Macdonald, N.; Macheta, P.; Maczewa, K.; Magesh, B.; Maget, P.; Maggi, C.; Maier, H.; Mailloux, J.; Makkonen, T.; Makwana, R.; Malaquias, A.; Malizia, A.; Manas, P.; Manning, A.; Manso, M. E.; Mantica, P.; Mantsinen, M.; Manzanares, A.; Maquet, Ph.; Marandet, Y.; Marcenko, N.; Marchetto, C.; Marchuk, O.; Marinelli, M.; Marinucci, M.; Markovič, T.; Marocco, D.; Marot, L.; Marren, C. A.; Marshal, R.; Martin, A.; Martin, Y.; Martín de Aguilera, A.; Martínez, F. J.; Martín-Solís, J. R.; Martynova, Y.; Maruyama, S.; Masiello, A.; Maslov, M.; Matejcik, S.; Mattei, M.; Matthews, G. F.; Maviglia, F.; Mayer, M.; Mayoral, M. L.; May-Smith, T.; Mazon, D.; Mazzotta, C.; McAdams, R.; McCarthy, P. J.; McClements, K. 
G.; McCormack, O.; McCullen, P. A.; McDonald, D.; McIntosh, S.; McKean, R.; McKehon, J.; Meadows, R. C.; Meakins, A.; Medina, F.; Medland, M.; Medley, S.; Meigh, S.; Meigs, A. G.; Meisl, G.; Meitner, S.; Meneses, L.; Menmuir, S.; Mergia, K.; Merrigan, I. R.; Mertens, Ph.; Meshchaninov, S.; Messiaen, A.; Meyer, H.; Mianowski, S.; Michling, R.; Middleton-Gear, D.; Miettunen, J.; Militello, F.; Militello-Asp, E.; Miloshevsky, G.; Mink, F.; Minucci, S.; Miyoshi, Y.; Mlynář, J.; Molina, D.; Monakhov, I.; Moneti, M.; Mooney, R.; Moradi, S.; Mordijck, S.; Moreira, L.; Moreno, R.; Moro, F.; Morris, A. W.; Morris, J.; Moser, L.; Mosher, S.; Moulton, D.; Murari, A.; Muraro, A.; Murphy, S.; Asakura, N. N.; Na, Y. S.; Nabais, F.; Naish, R.; Nakano, T.; Nardon, E.; Naulin, V.; Nave, M. F. F.; Nedzelski, I.; Nemtsev, G.; Nespoli, F.; Neto, A.; Neu, R.; Neverov, V. S.; Newman, M.; Nicholls, K. J.; Nicolas, T.; Nielsen, A. H.; Nielsen, P.; Nilsson, E.; Nishijima, D.; Noble, C.; Nocente, M.; Nodwell, D.; Nordlund, K.; Nordman, H.; Nouailletas, R.; Nunes, I.; Oberkofler, M.; Odupitan, T.; Ogawa, M. T.; O'Gorman, T.; Okabayashi, M.; Olney, R.; Omolayo, O.; O'Mullane, M.; Ongena, J.; Orsitto, F.; Orszagh, J.; Oswuigwe, B. I.; Otin, R.; Owen, A.; Paccagnella, R.; Pace, N.; Pacella, D.; Packer, L. W.; Page, A.; Pajuste, E.; Palazzo, S.; Pamela, S.; Panja, S.; Papp, P.; Paprok, R.; Parail, V.; Park, M.; Parra Diaz, F.; Parsons, M.; Pasqualotto, R.; Patel, A.; Pathak, S.; Paton, D.; Patten, H.; Pau, A.; Pawelec, E.; Soldan, C. Paz; Peackoc, A.; Pearson, I. J.; Pehkonen, S.-P.; Peluso, E.; Penot, C.; Pereira, A.; Pereira, R.; Pereira Puglia, P. P.; Perez von Thun, C.; Peruzzo, S.; Peschanyi, S.; Peterka, M.; Petersson, P.; Petravich, G.; Petre, A.; Petrella, N.; Petržilka, V.; Peysson, Y.; Pfefferlé, D.; Philipps, V.; Pillon, M.; Pintsuk, G.; Piovesan, P.; Pires dos Reis, A.; Piron, L.; Pironti, A.; Pisano, F.; Pitts, R.; Pizzo, F.; Plyusnin, V.; Pomaro, N.; Pompilian, O. G.; Pool, P. 
J.; Popovichev, S.; Porfiri, M. T.; Porosnicu, C.; Porton, M.; Possnert, G.; Potzel, S.; Powell, T.; Pozzi, J.; Prajapati, V.; Prakash, R.; Prestopino, G.; Price, D.; Price, M.; Price, R.; Prior, P.; Proudfoot, R.; Pucella, G.; Puglia, P.; Puiatti, M. E.; Pulley, D.; Purahoo, K.; Pütterich, Th.; Rachlew, E.; Rack, M.; Ragona, R.; Rainford, M. S. J.; Rakha, A.; Ramogida, G.; Ranjan, S.; Rapson, C. J.; Rasmussen, J. J.; Rathod, K.; Rattá, G.; Ratynskaia, S.; Ravera, G.; Rayner, C.; Rebai, M.; Reece, D.; Reed, A.; Réfy, D.; Regan, B.; Regaña, J.; Reich, M.; Reid, N.; Reimold, F.; Reinhart, M.; Reinke, M.; Reiser, D.; Rendell, D.; Reux, C.; Reyes Cortes, S. D. A.; Reynolds, S.; Riccardo, V.; Richardson, N.; Riddle, K.; Rigamonti, D.; Rimini, F. G.; Risner, J.; Riva, M.; Roach, C.; Robins, R. J.; Robinson, S. A.; Robinson, T.; Robson, D. W.; Roccella, R.; Rodionov, R.; Rodrigues, P.; Rodriguez, J.; Rohde, V.; Romanelli, F.; Romanelli, M.; Romanelli, S.; Romazanov, J.; Rowe, S.; Rubel, M.; Rubinacci, G.; Rubino, G.; Ruchko, L.; Ruiz, M.; Ruset, C.; Rzadkiewicz, J.; Saarelma, S.; Sabot, R.; Safi, E.; Sagar, P.; Saibene, G.; Saint-Laurent, F.; Salewski, M.; Salmi, A.; Salmon, R.; Salzedas, F.; Samaddar, D.; Samm, U.; Sandiford, D.; Santa, P.; Santala, M. I. K.; Santos, B.; Santucci, A.; Sartori, F.; Sartori, R.; Sauter, O.; Scannell, R.; Schlummer, T.; Schmid, K.; Schmidt, V.; Schmuck, S.; Schneider, M.; Schöpf, K.; Schwörer, D.; Scott, S. D.; Sergienko, G.; Sertoli, M.; Shabbir, A.; Sharapov, S. E.; Shaw, A.; Shaw, R.; Sheikh, H.; Shepherd, A.; Shevelev, A.; Shumack, A.; Sias, G.; Sibbald, M.; Sieglin, B.; Silburn, S.; Silva, A.; Silva, C.; Simmons, P. A.; Simpson, J.; Simpson-Hutchinson, J.; Sinha, A.; Sipilä, S. K.; Sips, A. C. C.; Sirén, P.; Sirinelli, A.; Sjöstrand, H.; Skiba, M.; Skilton, R.; Slabkowska, K.; Slade, B.; Smith, N.; Smith, P. G.; Smith, R.; Smith, T. J.; Smithies, M.; Snoj, L.; Soare, S.; Solano, E. 
R.; Somers, A.; Sommariva, C.; Sonato, P.; Sopplesa, A.; Sousa, J.; Sozzi, C.; Spagnolo, S.; Spelzini, T.; Spineanu, F.; Stables, G.; Stamatelatos, I.; Stamp, M. F.; Staniec, P.; Stankūnas, G.; Stan-Sion, C.; Stead, M. J.; Stefanikova, E.; Stepanov, I.; Stephen, A. V.; Stephen, M.; Stevens, A.; Stevens, B. D.; Strachan, J.; Strand, P.; Strauss, H. R.; Ström, P.; Stubbs, G.; Studholme, W.; Subba, F.; Summers, H. P.; Svensson, J.; Świderski, Ł.; Szabolics, T.; Szawlowski, M.; Szepesi, G.; Suzuki, T. T.; Tál, B.; Tala, T.; Talbot, A. R.; Talebzadeh, S.; Taliercio, C.; Tamain, P.; Tame, C.; Tang, W.; Tardocchi, M.; Taroni, L.; Taylor, D.; Taylor, K. A.; Tegnered, D.; Telesca, G.; Teplova, N.; Terranova, D.; Testa, D.; Tholerus, E.; Thomas, J.; Thomas, J. D.; Thomas, P.; Thompson, A.; Thompson, C.-A.; Thompson, V. K.; Thorne, L.; Thornton, A.; Thrysøe, A. S.; Tigwell, P. A.; Tipton, N.; Tiseanu, I.; Tojo, H.; Tokitani, M.; Tolias, P.; Tomeš, M.; Tonner, P.; Towndrow, M.; Trimble, P.; Tripsky, M.; Tsalas, M.; Tsavalas, P.; Tskhakaya jun, D.; Turner, I.; Turner, M. M.; Turnyanskiy, M.; Tvalashvili, G.; Tyrrell, S. G. J.; Uccello, A.; Ul-Abidin, Z.; Uljanovs, J.; Ulyatt, D.; Urano, H.; Uytdenhouwen, I.; Vadgama, A. P.; Valcarcel, D.; Valentinuzzi, M.; Valisa, M.; Vallejos Olivares, P.; Valovic, M.; Van De Mortel, M.; Van Eester, D.; Van Renterghem, W.; van Rooij, G. J.; Varje, J.; Varoutis, S.; Vartanian, S.; Vasava, K.; Vasilopoulou, T.; Vega, J.; Verdoolaege, G.; Verhoeven, R.; Verona, C.; Verona Rinati, G.; Veshchev, E.; Vianello, N.; Vicente, J.; Viezzer, E.; Villari, S.; Villone, F.; Vincenzi, P.; Vinyar, I.; Viola, B.; Vitins, A.; Vizvary, Z.; Vlad, M.; Voitsekhovitch, I.; Vondráček, P.; Vora, N.; Vu, T.; Pires de Sa, W. W.; Wakeling, B.; Waldon, C. W. F.; Walkden, N.; Walker, M.; Walker, R.; Walsh, M.; Wang, E.; Wang, N.; Warder, S.; Warren, R. J.; Waterhouse, J.; Watkins, N. 
W.; Watts, C.; Wauters, T.; Weckmann, A.; Weiland, J.; Weisen, H.; Weiszflog, M.; Wellstood, C.; West, A. T.; Wheatley, M. R.; Whetham, S.; Whitehead, A. M.; Whitehead, B. D.; Widdowson, A. M.; Wiesen, S.; Wilkinson, J.; Williams, J.; Williams, M.; Wilson, A. R.; Wilson, D. J.; Wilson, H. R.; Wilson, J.; Wischmeier, M.; Withenshaw, G.; Withycombe, A.; Witts, D. M.; Wood, D.; Wood, R.; Woodley, C.; Wray, S.; Wright, J.; Wright, J. C.; Wu, J.; Wukitch, S.; Wynn, A.; Xu, T.; Yadikin, D.; Yanling, W.; Yao, L.; Yavorskij, V.; Yoo, M. G.; Young, C.; Young, D.; Young, I. D.; Young, R.; Zacks, J.; Zagorski, R.; Zaitsev, F. S.; Zanino, R.; Zarins, A.; Zastrow, K. D.; Zerbini, M.; Zhang, W.; Zhou, Y.; Zilli, E.; Zoita, V.; Zoletnik, S.; Zychor, I.; JET Contributors
2017-10-01
The 2014-2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with ITER first-wall materials has successfully taken place since their installation in 2011. A new multi-machine scaling of the type I ELM divertor energy flux density to ITER is supported by first-principles modelling. ITER-relevant disruption experiments and first-principles modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L-H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configuration and of recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first-wall material for fusion performance. H-mode plasmas at ITER triangularity (H = 1 at β_N ~ 1.8 and n/n_GW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high-performance experiments. Prospects for the coming D-T campaign and the 14 MeV neutron calibration strategy are reviewed.
United States Research and Development effort on ITER magnet tasks
Martovetsky, Nicolai N.; Reierson, Wayne T.
2011-01-22
This study presents the status of research and development (R&D) magnet tasks that are being performed in support of the U.S. ITER Project Office (USIPO) commitment to provide a central solenoid assembly and toroidal field conductor for the ITER machine to be constructed in Cadarache, France. The following development tasks are presented: winding development, inlets and outlets development, internal and bus joints development and testing, insulation development and qualification, vacuum-pressure impregnation, bus supports, and intermodule structure and materials characterization.
Inversion of Acoustic and Electromagnetic Recordings for Mapping Current Flow in Lightning Strikes
NASA Astrophysics Data System (ADS)
Anderson, J.; Johnson, J.; Arechiga, R. O.; Thomas, R. J.
2012-12-01
Acoustic recordings can be used to map current-carrying conduits in lightning strikes. Unlike stepped leaders, whose very high frequency (VHF) radio emissions have short (meter-scale) wavelengths and can be located by lightning-mapping arrays, current pulses emit longer (kilometer-scale) waves and cannot be mapped precisely by electromagnetic observations alone. While current pulses are constrained to conductive channels created by stepped leaders, these leaders often branch as they propagate, and most branches fail to carry current. Here, we present a method to use thunder recordings to map current pulses, and we apply it to acoustic and VHF data recorded in 2009 in the Magdalena mountains in central New Mexico, USA. Thunder is produced by rapid heating and expansion of the atmosphere along conductive channels in response to current flow, and therefore can be used to recover the geometry of the current-carrying channel. Toward this goal, we use VHF pulse maps to identify candidate conductive channels where we treat each channel as a superposition of finely-spaced acoustic point sources. We apply ray tracing in variable atmospheric structures to forward model the thunder that our microphone network would record for each candidate channel. Because multiple channels could potentially carry current, a non-linear inversion is performed to determine the acoustic source strength of each channel. For each combination of acoustic source strengths, synthetic thunder is modeled as a superposition of thunder signals produced by each channel, and a power envelope of this stack is then calculated. The inversion iteratively minimizes the misfit between power envelopes of recorded and modeled thunder. Because the atmospheric sound speed structure through which the waves propagate during these events is unknown, we repeat the procedure on many plausible atmospheres to find an optimal fit. 
We then determine the candidate channel, or channels, that minimizes residuals between synthetic and acoustic recordings. We demonstrate the usefulness of this method on both intracloud and cloud-to-ground strikes, and discuss factors affecting our ability to replicate recorded thunder.
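The core of the inversion described above is finding a non-negative source strength for each candidate channel such that the superposition of the channels' modeled thunder envelopes best matches the recorded envelope. The following is a minimal illustrative sketch of that step only (the ray tracing and atmosphere search are omitted); the data, function name, and the projected coordinate-descent solver are assumptions for illustration, not taken from the study.

```python
def invert_strengths(recorded, channel_envelopes, iters=200):
    """Find non-negative weights w minimizing ||recorded - sum_i w_i * env_i||^2
    by projected coordinate descent (clamping each weight at zero)."""
    n = len(channel_envelopes)
    w = [0.0] * n
    for _ in range(iters):
        for i, env in enumerate(channel_envelopes):
            # residual with channel i's contribution excluded
            resid = [r - sum(w[j] * channel_envelopes[j][t] for j in range(n) if j != i)
                     for t, r in enumerate(recorded)]
            num = sum(e * r for e, r in zip(env, resid))
            den = sum(e * e for e in env)
            w[i] = max(0.0, num / den) if den > 0 else 0.0
    return w

if __name__ == "__main__":
    # Two candidate channels; only the first actually carried current.
    env1 = [0.0, 1.0, 2.0, 1.0, 0.0]
    env2 = [0.0, 0.0, 1.0, 2.0, 1.0]
    recorded = [3 * e for e in env1]
    print(invert_strengths(recorded, [env1, env2]))
```

In the paper's setting the envelopes would come from ray-traced synthetic thunder per VHF-mapped channel, and the whole fit would be repeated over plausible atmospheres.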
A long-term target detection approach in infrared image sequence
NASA Astrophysics Data System (ADS)
Li, Hang; Zhang, Qi; Li, Yuanyuan; Wang, Liqiang
2015-12-01
An automatic target detection method for long-term infrared (IR) image sequences from a moving platform is proposed. First, based on non-linear histogram equalization, target candidates are segmented coarse-to-fine using two self-adaptive thresholds generated in the intensity space. The real target is then captured via two different selection approaches. At the beginning of the image sequence, the genuine target, which has little texture, is discriminated from other candidates using a contrast-based confidence measure. Later, when the target becomes larger, we apply an online EM method to iteratively estimate and update the distributions of the target's size and position based on prior detection results, and then recognize the genuine target as the one satisfying both the size and position constraints. Experimental results demonstrate that the presented method is accurate, robust and efficient.
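The idea of gating candidates by running estimates of the target's size and position distributions can be sketched as below. This is a deliberate simplification, assuming a single Gaussian per quantity updated online (Welford's algorithm) with a k-sigma gate; the paper's actual online EM formulation is not reproduced, and all names here are illustrative.

```python
class OnlineGaussian:
    """Running 1-D Gaussian estimate (e.g. of target size or position),
    updated from each new detection; used to gate later candidates."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        # Welford's online mean/variance update
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    @property
    def var(self):
        return self.m2 / self.n if self.n > 1 else 0.0

    def plausible(self, x, k=3.0):
        """Accept a candidate only if it lies within k standard
        deviations of the running mean (always accept early on)."""
        if self.n < 2:
            return True
        return abs(x - self.mean) <= k * max(self.var, 1e-9) ** 0.5
```

A target tracker would keep one such estimator per constrained quantity and require a candidate to pass all gates.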
Method for protein structure alignment
Blankenbecler, Richard; Ohlsson, Mattias; Peterson, Carsten; Ringner, Markus
2005-02-22
This invention provides a method for protein structure alignment. More particularly, the present invention provides a method for identification, classification and prediction of protein structures. The present invention involves two key ingredients: first, an energy or cost function formulated simultaneously in terms of binary (Potts) assignment variables and real-valued atomic coordinates; second, minimization of this energy or cost function by an iterative method in which each iteration (1) employs a mean field method for the assignment variables and (2) performs exact rotation and/or translation of the atomic coordinates, weighted by the corresponding assignment variables.
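The "exact rotation weighted by assignment variables" sub-step has a closed form. The sketch below shows it in 2-D, where the optimal angle of a weighted Procrustes alignment is an atan2 of weighted cross and dot terms; in 3-D one would use the SVD-based Kabsch algorithm instead. The function name and the 2-D restriction are assumptions for illustration; the mean-field update of the assignment variables is not shown.

```python
import math

def weighted_rotation_2d(X, Y, w):
    """Optimal rotation angle aligning 2-D point set X onto Y under
    assignment weights w (weighted Procrustes on centered coordinates)."""
    s = sum(w)
    cx = (sum(wi * p[0] for wi, p in zip(w, X)) / s,
          sum(wi * p[1] for wi, p in zip(w, X)) / s)
    cy = (sum(wi * q[0] for wi, q in zip(w, Y)) / s,
          sum(wi * q[1] for wi, q in zip(w, Y)) / s)
    num = den = 0.0
    for wi, p, q in zip(w, X, Y):
        px, py = p[0] - cx[0], p[1] - cx[1]
        qx, qy = q[0] - cy[0], q[1] - cy[1]
        num += wi * (px * qy - py * qx)   # weighted cross terms
        den += wi * (px * qx + py * qy)   # weighted dot terms
    return math.atan2(num, den)
```

In the iterative scheme of the patent, soft assignment weights from the mean-field step would feed directly into `w` here, and the recovered rotation would in turn sharpen the next assignment update.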
New Parallel Algorithms for Structural Analysis and Design of Aerospace Structures
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1998-01-01
Subspace and Lanczos iterations have been developed, well documented, and widely accepted as efficient methods for obtaining the p lowest eigenpair solutions of large-scale, practical engineering problems. The focus of this paper is to incorporate recent developments in vectorized sparse technologies in conjunction with Subspace and Lanczos iterative algorithms for computational enhancement. Numerical performance, in terms of the accuracy and efficiency of the proposed sparse strategies for the Subspace and Lanczos algorithms, is demonstrated by solving for the lowest frequencies and mode shapes of structural problems on IBM RS/6000-590 and Sun SPARC 20 workstations.
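To make the "lowest eigenpair" goal concrete, here is a toy sketch that extracts the lowest eigenpair of a small symmetric matrix by power iteration on a shifted matrix. This is deliberately much simpler than the Subspace/Lanczos schemes the paper vectorizes (and far less efficient on large sparse systems); the shift value, which must exceed the largest eigenvalue, is an assumption supplied by the caller.

```python
def lowest_eigenpair(A, shift, iters=1000):
    """Lowest eigenpair of symmetric A via power iteration on (shift*I - A):
    for shift above A's largest eigenvalue, the dominant mode of the
    shifted matrix corresponds to A's lowest mode."""
    n = len(A)
    v = [1.0] + [0.5] * (n - 1)   # arbitrary start vector
    for _ in range(iters):
        w = [shift * v[i] - sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient of A gives the lowest-eigenvalue estimate
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(vi * avi for vi, avi in zip(v, Av))
    return lam, v
```

Subspace iteration generalizes this idea to a block of vectors, and Lanczos replaces it with a Krylov tridiagonalization; both converge far faster for the p lowest modes of structural stiffness/mass problems.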
The application of contraction theory to an iterative formulation of electromagnetic scattering
NASA Technical Reports Server (NTRS)
Brand, J. C.; Kauffman, J. F.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented, with an emphasis on its application to periodic surfaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined using contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.
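The contraction idea underlying the abstract is that a fixed-point iteration x_{k+1} = g(x_k) converges when successive steps shrink by a factor below one. The sketch below is only a scalar illustration of that convergence test, with an assumed function name; the paper's contraction corrector operates on the scattering operator itself, which is not reproduced here.

```python
def contractive_iterate(g, x0, tol=1e-10, max_iter=200):
    """Fixed-point iteration x_{k+1} = g(x_k) with a running contraction
    estimate: if successive step sizes stop shrinking, the map is flagged
    as non-contractive instead of silently diverging."""
    x = g(x0)
    step_prev = abs(x - x0)
    for _ in range(max_iter):
        x_next = g(x)
        step = abs(x_next - x)
        if step_prev > 0 and step / step_prev >= 1.0:
            raise RuntimeError("iteration does not appear to be contracting")
        if step < tol:
            return x_next
        x, step_prev = x_next, step
    return x
```

A corrector scheme in this spirit would modify g (e.g. by relaxation) whenever the estimated contraction factor approaches or exceeds one.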
Rotation and neoclassical ripple transport in ITER
Paul, Elizabeth Joy; Landreman, Matt; Poli, Francesca M.; ...
2017-07-13
Neoclassical transport in the presence of non-axisymmetric magnetic fields causes a toroidal torque known as neoclassical toroidal viscosity (NTV). The toroidal symmetry of ITER will be broken by the finite number of toroidal field coils and by test blanket modules (TBMs). The addition of ferritic inserts (FIs) will decrease the magnitude of the toroidal field ripple. 3D magnetic equilibria in the presence of toroidal field ripple and ferromagnetic structures are calculated for an ITER steady-state scenario using the Variational Moments Equilibrium Code (VMEC). Furthermore, neoclassical transport quantities in the presence of these error fields are calculated using the Stellarator Fokker-Planck Iterative Neoclassical Conservative Solver (SFINCS).
Run-time parallelization and scheduling of loops
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay
1991-01-01
Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce: inspector procedures that perform execution time preprocessing, and executors or transformed versions of source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
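The inspector's central computation, assigning each loop iteration to a wavefront so that iterations within a wavefront can execute concurrently, can be sketched as follows. This assumes dependences have already been extracted into a map from each iteration to its predecessors (the run-time index analysis that builds that map is the part the paper's inspector actually performs); names are illustrative.

```python
def compute_wavefronts(n_iters, deps):
    """Inspector step: number each loop iteration with a wavefront such
    that all iterations sharing a wavefront are mutually independent.
    deps maps iteration i -> list of earlier iterations i depends on."""
    wave = [0] * n_iters
    for i in range(n_iters):                      # original loop order
        preds = deps.get(i, [])
        wave[i] = 1 + max((wave[j] for j in preds), default=0)
    # group iterations by wavefront number for the executor
    waves = {}
    for i, w in enumerate(wave):
        waves.setdefault(w, []).append(i)
    return [waves[w] for w in sorted(waves)]
```

The executor would then run each returned group in parallel, synchronizing between groups, which is exactly the reordering for increased parallelism described above.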
An iterative approach to region growing using associative memories
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Cowart, A.
1983-01-01
Region growing is often given as a classical example of the recursive control structures used in image processing, which are awkward to implement in hardware when the intent is segmentation of an image at raster-scan rates. It is addressed here in light of the postulate that any computation which can be performed recursively can be performed easily and efficiently by iteration coupled with association. Attention is given to an algorithm and hardware structure able to perform region labeling iteratively at scan rates. Every pixel is individually labeled with an identifier signifying the region to which it belongs. Difficulties that would otherwise require recursion are handled by maintaining an equivalence table in hardware, transparent to the computer that reads the labeled pixels. A simulation of the associative memory has demonstrated its effectiveness.
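The equivalence-table idea can be sketched in software as classic two-pass raster-scan labeling with union-find standing in for the hardware table. This is a minimal sketch of the general technique, not the patent hardware's exact mechanism; region identifiers are root labels and may be non-consecutive.

```python
def label_regions(img):
    """Raster-scan region labeling with an equivalence (union-find) table.
    img is a 2-D list of 0/1 pixels; returns per-pixel region labels
    (0 = background)."""
    parent = [0]                       # parent[0] is a dummy root

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    h, w = len(img), len(img[0])
    lab = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not img[y][x]:
                continue
            left = lab[y][x - 1] if x else 0
            up = lab[y - 1][x] if y else 0
            if left and up:
                lab[y][x] = min(find(left), find(up))
                union(left, up)        # record equivalence in the table
            elif left or up:
                lab[y][x] = left or up
            else:                      # new provisional region
                parent.append(len(parent))
                lab[y][x] = len(parent) - 1
    # second pass: resolve provisional labels through the equivalence table
    for y in range(h):
        for x in range(w):
            if lab[y][x]:
                lab[y][x] = find(lab[y][x])
    return lab
```

A U-shaped region illustrates the point of the table: its two arms first receive different provisional labels, which the equivalence mechanism later merges without any recursion.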
Automatic threshold selection for multi-class open set recognition
NASA Astrophysics Data System (ADS)
Scherreik, Matthew; Rigling, Brian
2017-05-01
Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first is a base classifier, which estimates the most likely class of a given example. The second is open set logic, which estimates whether the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion: a candidate label is first estimated by the base classifier, and the membership of the example in the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes which were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
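The feed-forward decision rule described above is compact enough to sketch directly: the base classifier supplies per-class posteriors, and the open set logic accepts the candidate class only if its posterior clears that class's threshold. The function name and the unknown-class sentinel are assumptions; the iterative algorithm that selects the thresholds themselves is not reproduced here.

```python
def open_set_predict(posteriors, thresholds, unknown=-1):
    """Feed-forward open set decision. posteriors maps class -> posterior
    probability from the base classifier; thresholds maps class -> learned
    rejection threshold. Returns the candidate class or `unknown`."""
    cand = max(posteriors, key=posteriors.get)          # base-classifier step
    return cand if posteriors[cand] >= thresholds[cand] else unknown
```

Swapping in a different base classifier, as the paper investigates, only changes how `posteriors` is produced; the rejection logic is unchanged.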
The Cadarache negative ion experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massmann, P.; Bottereau, J.M.; Belchenko, Y.
1995-12-31
Up to energies of 140 keV, neutral beam injection (NBI) based on positive ions has proven to be a reliable and flexible plasma heating method and has provided major contributions to most of the important experiments on virtually all large tokamaks around the world. As a candidate for additional heating and current drive on next-step fusion machines (ITER a.o.), it is hoped that NBI can be equally successful. The ITER NBI parameters of 1 MeV, 50 MW D° demand primary D⁻ beams with current densities of at least 15 mA/cm². Although considerable progress has been made in the area of negative ion production and acceleration, the high demands still require substantial and urgent development. Regarding negative ion production, Cs-seeded plasma sources lead the way. Adding a small amount of Cs to the discharge (Cs seeding) not only increases the negative ion yield by a factor of 3-5 but also has the advantage that the discharge can be run at lower pressures. This is beneficial for the reduction of stripping losses in the accelerator. Multi-ampere negative ion production in a large plasma source is studied in the MANTIS experiment. Acceleration and neutralization at ITER-relevant parameters is the objective of the 1 MV SINGAP experiment.
Qian, Jinping P.; Garofalo, Andrea M.; Gong, Xianzu Z.; ...
2017-03-20
Recent EAST/DIII-D joint experiments on the high poloidal beta (β_P) regime in DIII-D have extended operation with internal transport barriers (ITBs) and excellent energy confinement (H_98y2 ~ 1.6) to higher plasma current, for lower q_95 ≤ 7.0, and more balanced neutral beam injection (NBI) (torque injection < 2 Nm), for lower plasma rotation than previous results. Transport analysis and experimental measurements at low toroidal rotation suggest that the E × B shear effect is not key to the ITB formation in these high-β_P discharges. Experiments and TGLF modeling show that the Shafranov shift has a key stabilizing effect on turbulence. Extrapolation of the DIII-D results using a 0D model shows that, with the improved confinement, the high bootstrap fraction regime could achieve fusion gain Q = 5 in ITER at β_N ~ 2.9 and q_95 ~ 7. With the optimization of q(0), the required improved confinement is achievable when using 1.5D TGLF-SAT1 for transport simulations. Furthermore, the results reported in this paper suggest that the DIII-D high-β_P scenario could be a candidate for ITER steady-state operation.
Gaussian process based intelligent sampling for measuring nano-structure surfaces
NASA Astrophysics Data System (ADS)
Sun, L. J.; Ren, M. J.; Yin, Y. H.
2016-09-01
Nanotechnology is the science and engineering of manipulating matter at the nanoscale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers, with very low efficiency. Intelligent sampling strategies are therefore required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as the mathematical foundation for representing the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining, among the candidates, the position most likely to lie outside the required tolerance zone, and is inserted to update the model iteratively. Simulations on both nominal and manufactured nano-structure surfaces have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves measurement efficiency when measuring large-area structured surfaces.
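The adaptive selection step can be sketched as follows: fit a GP to the points measured so far, then pick the candidate whose posterior is most likely to violate the tolerance zone, here scored by a simple |mean| + standard-deviation heuristic. The kernel length scale, the scoring rule, and all names are illustrative assumptions; the paper's exact selection criterion and surface models are not reproduced.

```python
import math

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel on scalar inputs."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def next_sample(xs, ys, candidates, tol, noise=1e-8):
    """Pick the candidate whose GP posterior (mean, variance) makes it
    most likely to lie outside the tolerance zone [-tol, tol]."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)                       # K^-1 y
    best, best_score = None, float("-inf")
    for c in candidates:
        kvec = [rbf(c, x) for x in xs]
        mean = sum(a * k for a, k in zip(alpha, kvec))
        kK = solve(K, kvec)                    # K^-1 k
        var = max(0.0, rbf(c, c) - sum(k * v for k, v in zip(kvec, kK)))
        score = abs(mean) + var ** 0.5 - tol   # exceedance proxy
        if score > best_score:
            best_score, best = score, c
    return best
```

In use, the chosen point is measured, appended to (xs, ys), and the selection repeated, so sampling effort concentrates where the surface is both uncertain and close to the tolerance limit.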
A Gaia study of the Hyades open cluster
NASA Astrophysics Data System (ADS)
Reino, Stella; de Bruijne, Jos; Zari, Eleonora; d'Antona, Francesca; Ventura, Paolo
2018-03-01
We present a study of the membership of the Hyades open cluster, derive kinematically-modelled parallaxes of its members, and study the colour-absolute magnitude diagram of the cluster. We use Gaia DR1 Tycho-Gaia Astrometric Solution (TGAS) data complemented by Hipparcos-2 data for bright stars not contained in TGAS. We supplement the astrometric data with radial velocities collected from a dozen literature sources. By assuming that all cluster members move with the mean cluster velocity to within the velocity dispersion, we use the observed and the expected motions of the stars to determine individual cluster membership probabilities. We subsequently derive improved parallaxes through maximum-likelihood kinematic modelling of the cluster. This method has an iterative component to deal with 'outliers', caused for instance by double stars or escaping members. Our method extends an existing method and supports the mixed presence of stars with and without radial velocities. We find 251 candidate members, 200 of which have a literature radial velocity, and 70 of which are new candidate members with TGAS astrometry. The cluster is roughly spherical in its centre but significantly flattened at larger radii. The observed colour-absolute magnitude diagram shows a clear binary sequence. The kinematically-modelled parallaxes that we derive are a factor of ~1.7 / 2.9 more precise than the TGAS / Hipparcos-2 values and allow us to derive an extremely sharp main sequence. This sequence shows evidence for fine-detailed structure which is elegantly explained by the full spectrum turbulence model of convection.
Tsai, Shiou-Chuan Sheryl
2018-06-20
Polyketides are a large family of structurally complex natural products including compounds with important bioactivities. Polyketides are biosynthesized by polyketide synthases (PKSs), multienzyme complexes derived evolutionarily from fatty acid synthases (FASs). The focus of this review is to critically compare the properties of FASs with iterative aromatic PKSs, including type II PKSs and fungal type I nonreducing PKSs whose chemical logic is distinct from that of modular PKSs. This review focuses on structural and enzymological studies that reveal both similarities and striking differences between FASs and aromatic PKSs. The potential application of FAS and aromatic PKS structures for bioengineering future drugs and biofuels is highlighted.
Chen, Lin; An, Yixin; Li, Yong-xiang; Li, Chunhui; Shi, Yunsu; Song, Yanchun; Zhang, Dengfeng; Wang, Tianyu; Li, Yu
2017-01-01
Maize grain yield and related traits are complex and are controlled by a large number of genes of small effect or quantitative trait loci (QTL). Over the years, a large number of yield-related QTLs have been identified in maize and deposited in public databases. However, integrating and re-analyzing these data and mining candidate loci for yield-related traits has become a major issue in maize. In this study, we collected information on QTLs conferring maize yield-related traits from 33 published studies. Then, 999 of these QTLs were iteratively projected and subjected to meta-analysis to obtain metaQTLs (MQTLs). A total of 76 MQTLs were found across the maize genome. Based on a comparative genomics strategy, several maize orthologs of rice yield-related genes were identified in these MQTL regions. Furthermore, three potential candidate genes (Gene ID: GRMZM2G359974, GRMZM2G301884, and GRMZM2G083894) associated with kernel size and weight within three MQTL regions were identified using regional association mapping, based on the results of the meta-analysis. This strategy, combining MQTL analysis and regional association mapping, is helpful for functional marker development and rapid identification of candidate genes or loci. PMID:29312420
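As a rough illustration of how projected QTLs condense into consensus regions, the sketch below merges overlapping confidence intervals on a common map. Real meta-QTL analysis uses model-based clustering of QTL positions rather than simple overlap, so this is only a conceptual stand-in with made-up coordinates:

```python
def merge_qtl(intervals):
    # Merge overlapping QTL confidence intervals (start, end in cM) on a
    # common map into consensus regions -- a crude stand-in for the
    # model-based meta-analysis used to define MQTLs.
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(m) for m in merged]

qtls = [(10, 25), (20, 30), (50, 60), (58, 70), (90, 95)]
print(merge_qtl(qtls))  # → [(10, 30), (50, 70), (90, 95)]
```

Candidate genes are then sought inside the merged regions, which are narrower and better supported than any single input QTL.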
Three dimensional iterative beam propagation method for optical waveguide devices
NASA Astrophysics Data System (ADS)
Ma, Changbao; Van Keuren, Edward
2006-10-01
The finite difference beam propagation method (FD-BPM) is an effective model for simulating a wide range of optical waveguide structures. The classical FD-BPMs are based on the Crank–Nicolson scheme, and in tridiagonal form can be solved using the Thomas method. We present a different type of algorithm for 3-D structures. In this algorithm, the wave equation is formulated into a large sparse matrix equation which can be solved using iterative methods. The simulation window shifting scheme and threshold technique introduced in our earlier work are utilized to overcome the convergence problem of iterative methods for large sparse matrix equations and wide-angle simulations. This method enables us to develop higher-order 3-D wide-angle (WA-) BPMs based on Padé approximant operators and the multistep method, which are commonly used in WA-BPMs for 2-D structures. Simulations using the new methods are compared with analytical results to confirm their effectiveness and applicability.
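For reference, the tridiagonal (Thomas) solve used by the classical Crank–Nicolson FD-BPM step, which the 3-D formulation above replaces with iterative sparse solvers, looks like this (a generic sketch, not the authors' code):

```python
import numpy as np

def thomas(a, b, c, d):
    # Solve the tridiagonal system with sub-diagonal a (a[0] unused),
    # main diagonal b and super-diagonal c (c[-1] unused): forward
    # elimination followed by back substitution, O(n) per BPM step.
    n = len(b)
    cp = np.empty(n, dtype=complex)
    dp = np.empty(n, dtype=complex)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n, dtype=complex)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Complex arithmetic is used because BPM field amplitudes are complex; the 3-D formulation gives up this O(n) direct solve because the resulting matrix is no longer tridiagonal, hence the move to iterative methods on a large sparse system.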
Pappalardo, Matteo; Shachaf, Nir; Basile, Livia; Milardi, Danilo; Zeidan, Mouhammed; Raiyn, Jamal; Guccione, Salvatore; Rayan, Anwar
2014-01-01
The human histamine H4 receptor (hH4R), a member of the G-protein coupled receptor (GPCR) family, is an increasingly attractive drug target. It plays a key role in many cell pathways, and many hH4R ligands are studied for the treatment of several inflammatory, allergic and autoimmune disorders, as well as for analgesic activity. Due to the challenging difficulties in the experimental elucidation of the hH4R structure, virtual screening campaigns are normally run on homology-based models. However, a wealth of information about the chemical properties of GPCR ligands has also accumulated over the last few years, and an appropriate combination of this ligand-based knowledge with structure-based molecular modeling emerges as a promising strategy for computer-assisted drug design. Here, two chemoinformatics techniques, the Intelligent Learning Engine (ILE) and the Iterative Stochastic Elimination (ISE) approach, were used to index chemicals for their hH4R bioactivity. Application of the prediction model to an external test set composed of more than 160 hH4R antagonists picked from the ChEMBL database gave an enrichment factor of 16.4. A virtual high-throughput screening of the ZINC database was carried out, picking ∼4000 chemicals highly indexed as H4R antagonist candidates. Next, a series of 3D models of hH4R were generated by molecular modeling and molecular dynamics simulations performed in fully atomistic lipid membranes. The efficacy of the hH4R 3D models in discriminating between actives and non-actives was checked, and the 3D model with the best performance was chosen for further docking studies performed on the focused library. The output of these docking studies was a consensus library of 11 highly scored drug candidates.
Our findings suggest that a sequential combination of ligand-based chemoinformatics approaches with structure-based ones has the potential to improve the success rate in discovering new biologically active GPCR drugs and increase the enrichment factors in a synergistic manner. PMID:25330207
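The enrichment factor quoted above has a simple definition: the active rate in the top-ranked slice of the screen divided by the active rate in the whole library. A minimal sketch with made-up numbers:

```python
def enrichment_factor(scores, labels, top_frac=0.01):
    # EF = (active rate among the top-ranked fraction of the library)
    #      / (active rate in the whole library).
    # labels: 1 for active, 0 for inactive; higher score = better rank.
    ranked = sorted(zip(scores, labels), key=lambda t: t[0], reverse=True)
    n_top = max(1, int(len(ranked) * top_frac))
    actives_top = sum(lab for _, lab in ranked[:n_top])
    return (actives_top / n_top) / (sum(labels) / len(labels))
```

For a library of 1000 compounds containing 20 actives, recovering 4 actives among the top 10 ranked compounds gives EF = (4/10)/(20/1000) = 20, i.e. the screen ranks actives twenty times better than chance.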
Overview of the JET results in support to ITER
Litaudon, X.; Abduallev, S.; Abhangi, M.; ...
2017-06-15
Here, the 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for the active and non-active operation. More than 60 h of plasma operation with ITER first wall materials successfully took place since its installation in 2011. New multi-machine scaling of the type I-ELM divertor energy flux density to ITER is supported by first principle modelling. ITER relevant disruption experiments and first principle modelling are reported with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configurations and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material on the fusion performance. H-mode plasmas at ITER triangularity (H = 1 at β_N ~ 1.8 and n/n_GW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high performance experiments. Prospects for the coming D–T campaign and the 14 MeV neutron calibration strategy are reviewed.
NASA Technical Reports Server (NTRS)
Brand, J. C.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented, with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single-variable examples, including an extension to vector spaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined using contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared with previous works.
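The idea of correcting a divergent iteration so that it becomes a contraction can be illustrated in one variable: an under-relaxed fixed-point map contracts even when the raw map has |g'| > 1. This is only a toy analogue of the contraction corrector method, not the electromagnetic formulation itself:

```python
def relaxed_fixed_point(g, x0, lam=0.3, tol=1e-10, max_iter=1000):
    # Under-relaxed Picard iteration x <- (1 - lam) * x + lam * g(x).
    # If |g'| > 1 near the root, the plain iteration x <- g(x) diverges,
    # but a small enough relaxation factor restores the contraction
    # property of the combined map.
    x = x0
    for _ in range(max_iter):
        x_new = (1.0 - lam) * x + lam * g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

g = lambda x: 3.0 - 2.0 * x                 # g'(x) = -2: plain iteration diverges
root = relaxed_fixed_point(g, x0=0.0)       # converges to the fixed point x = 1
```

With lam = 0.3 the relaxed map is x ↦ 0.1x + 0.9, a contraction with factor 0.1, even though the raw map has slope −2; choosing the relaxation (here a scalar, in the paper an operator-level correction) is what enforces convergence.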
Gaur, Pallavi; Chaturvedi, Anoop
2017-07-22
The clustering pattern and motifs give immense information about any biological data. An application of machine learning algorithms for clustering and candidate motif detection in miRNAs derived from exosomes is depicted in this paper. Recent progress in the field of exosome research, and more particularly regarding exosomal miRNAs, has led much bioinformatics-based research to come into existence. Information on the clustering pattern and candidate motifs in miRNAs of exosomal origin would help in analyzing existing, as well as newly discovered, miRNAs within exosomes. Along with obtaining the clustering pattern and candidate motifs in exosomal miRNAs, this work also elaborates on the usefulness of machine learning algorithms that can be efficiently executed on various programming languages/platforms. Data were clustered and candidate sequence motifs were detected successfully. The results were compared and validated with available web tools such as BLASTN and the MEME suite. This work elaborated the utility of machine learning algorithms and language platforms to achieve the tasks of clustering and candidate motif detection in exosomal miRNAs. With this information, deeper insight would be gained for analyses of newly discovered miRNAs in exosomes, which are considered to be circulating biomarkers. In addition, the execution of machine learning algorithms on various language platforms gives users more flexibility to try multiple iterations according to their requirements. This approach can be applied to other biological data-mining tasks as well.
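A toy illustration of candidate motif detection by k-mer counting (hypothetical let-7-like sequences; real tools such as the MEME suite mentioned above use probabilistic models rather than raw counts):

```python
from collections import Counter

def candidate_motifs(seqs, k=6, top=3):
    # Count every k-mer across the sequences and report the most frequent
    # ones as candidate motifs -- a crude stand-in for MEME-style discovery.
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts.most_common(top)

mirnas = ["UGAGGUAGUAGGUUGUAUAGUU",   # hypothetical let-7-like sequences
          "UGAGGUAGUAGGUUGUGUGGUU",
          "UGAGGUAGUAGGUUGUAUGGUU"]
top_motifs = candidate_motifs(mirnas, k=6, top=3)
```

Because the three sequences share a seed region, the top-ranked 6-mers all occur once per sequence, flagging the conserved region as a motif candidate.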
Varga, Szilárd; Jakab, Gergely; Csámpai, Antal; Soós, Tibor
2015-09-18
An organocatalytic iterative assembly line has been developed in which nitromethane was sequentially coupled with two different enones using a combination of pseudoenantiomeric cinchona-based thiourea catalysts. Application of unsaturated aldehydes and ketones in the second step of the iterative sequence allows the construction of cyclic syn-ketols and acyclic compounds with multiple contiguous stereocenters. The combination of the multifunctional substrates and ambident electrophiles rendered some organocatalytic transformations possible that have not yet been realized in bifunctional noncovalent organocatalysis.
Self-consistent field for fragmented quantum mechanical model of large molecular systems.
Jin, Yingdi; Su, Neil Qiang; Xu, Xin; Hu, Hao
2016-01-30
Fragment-based linear-scaling quantum chemistry methods are a promising tool for the accurate simulation of chemical and biomolecular systems. Because of the coupled inter-fragment electrostatic interactions, a dual-layer iterative scheme is often employed to compute the fragment electronic structure and the total energy. In the dual-layer scheme, the self-consistent field (SCF) for the electronic structure of a fragment must be solved first, followed by updating of the inter-fragment electrostatic interactions. The two steps are carried out sequentially and repeated; as such, a significant total number of fragment SCF iterations is required to converge the total energy, which becomes the computational bottleneck in many fragment quantum chemistry methods. To reduce the number of fragment SCF iterations and speed up the convergence of the total energy, we develop here a new SCF scheme in which the inter-fragment interactions can be updated concurrently without converging the fragment electronic structure. By constructing the global, block-wise Fock matrix and density matrix, we prove that commutation between the two global matrices guarantees commutation of the corresponding matrices in each fragment. Therefore, highly efficient numerical techniques such as direct inversion in the iterative subspace (DIIS) can be employed to converge the electronic structure of all fragments simultaneously, significantly reducing the computational cost. Numerical examples for water clusters of different sizes suggest that the method should be very useful in improving the scalability of fragment quantum chemistry methods. © 2015 Wiley Periodicals, Inc.
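The DIIS acceleration mentioned above solves a small constrained least-squares problem: find mixing coefficients summing to one that minimize the norm of the combined error vectors, then apply the same combination to the trial matrices. A generic sketch (plain vectors standing in for Fock/density-matrix commutator errors):

```python
import numpy as np

def diis_extrapolate(trials, errors):
    # DIIS: find coefficients c_i with sum(c_i) = 1 minimizing
    # ||sum_i c_i e_i||, via the standard bordered linear system, then
    # return the matching combination of the trial quantities.
    m = len(errors)
    B = np.zeros((m + 1, m + 1))
    for i in range(m):
        for j in range(m):
            B[i, j] = np.vdot(errors[i], errors[j])
    B[m, :m] = B[:m, m] = -1.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    c = np.linalg.solve(B, rhs)[:m]
    return sum(ci * t for ci, t in zip(c, trials))
```

On a linear model problem (errors e_i = A x_i − b), three affinely independent trials in two dimensions are enough for the extrapolation to land on the exact solution, which is why DIIS converges so quickly once the SCF is near the linear regime.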
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Yunfeng, E-mail: yfcai@math.pku.edu.cn; Department of Computer Science, University of California, Davis 95616; Bai, Zhaojun, E-mail: bai@cs.ucdavis.edu
2013-12-15
The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and comparably efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method for the well-conditioned standard eigenvalue problems produced by planewave methods.
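A single-vector sketch of preconditioned steepest descent for the generalized problem A x = λ B x may help fix ideas; the method above is a block variant with a hybrid preconditioner, so this shows only the basic iteration (assumed SPD matrices and a simple diagonal preconditioner):

```python
import numpy as np

def psd_smallest(A, B, M_inv, x0, iters=100, tol=1e-10):
    # Preconditioned steepest descent for the smallest eigenpair of the
    # generalized problem A x = lambda B x (single-vector sketch).
    x = x0 / np.sqrt(x0 @ B @ x0)            # B-normalize the start vector
    lam = x @ A @ x
    for _ in range(iters):
        r = A @ x - lam * (B @ x)            # eigen-residual
        if np.linalg.norm(r) < tol:
            break
        p = M_inv @ r                        # preconditioned direction
        p -= (x @ B @ p) * x                 # B-orthogonalize against x
        nrm = np.sqrt(p @ B @ p)
        if nrm < 1e-14:
            break
        p /= nrm
        S = np.column_stack([x, p])          # 2-D trial subspace
        w, V = np.linalg.eigh(S.T @ A @ S)   # Rayleigh-Ritz (S.T @ B @ S = I)
        x = S @ V[:, 0]
        x /= np.sqrt(x @ B @ x)
        lam = w[0]
    return lam, x

A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
B = np.diag([1.0, 1.0, 1.0, 1.0, 2.0])       # nonorthogonal-basis overlap stand-in
M_inv = np.diag(1.0 / np.diag(A))            # simple diagonal preconditioner
lam, x = psd_smallest(A, B, M_inv, np.ones(5))
```

The preconditioner choice dominates the convergence rate, which is exactly why the paper combines global and locally accelerated preconditioners.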
A Technique for Transient Thermal Testing of Thick Structures
NASA Technical Reports Server (NTRS)
Horn, Thomas J.; Richards, W. Lance; Gong, Leslie
1997-01-01
A new open-loop heat flux control technique has been developed to conduct transient thermal testing of thick, thermally-conductive aerospace structures. This technique uses calibration of the radiant heater system power level as a function of heat flux, predicted aerodynamic heat flux, and the properties of an instrumented test article. An iterative process was used to generate open-loop heater power profiles prior to each transient thermal test. Differences between the measured and predicted surface temperatures were used to refine the heater power level command profiles through the iteration process. This iteration process reduced to acceptable levels the effects of environmental and test-system design factors, which are normally compensated for by closed-loop temperature control. The final revised heater power profiles resulted in measured temperature time histories that deviated by less than 25 °F from the predicted surface temperatures.
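The iterative open-loop refinement can be caricatured in a few lines: each test run (here a toy steady-state model standing in for the instrumented test article) yields a temperature error that is fed back, scaled by a gain, into the next power profile. This is a schematic of the idea, not the flight-test procedure:

```python
def refine_power_profile(power, target_temp, simulate, gain=0.5, iters=20):
    # Between "tests", adjust each point of the heater power profile in
    # proportion to the temperature error observed in the previous run.
    for _ in range(iters):
        temp = simulate(power)
        power = [p + gain * (t_ref - t)
                 for p, t_ref, t in zip(power, target_temp, temp)]
    return power

# Toy "test article": steady-state temperature proportional to local power.
simulate = lambda power: [0.8 * p for p in power]
target = [100.0, 200.0, 300.0, 250.0]       # predicted surface temperatures
power = refine_power_profile([0.0] * 4, target, simulate)
```

Because the correction is applied between runs rather than during them, the final profile can be commanded open-loop, with no in-test temperature feedback required.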
Li, Ke; Deb, Kalyanmoy; Zhang, Qingfu; Zhang, Qiang
2017-09-01
Nondominated sorting (NDS), which divides a population into several nondomination levels (NDLs), is a basic step in many evolutionary multiobjective optimization (EMO) algorithms. It has been widely studied in a generational evolution model, where the environmental selection is performed after generating a whole population of offspring. However, in a steady-state evolution model, where a population is updated right after the generation of a new candidate, the NDS can be extremely time consuming. This is especially severe when the number of objectives and the population size become large. In this paper, we propose an efficient NDL update method to reduce the cost of maintaining the NDL structure in steady-state EMO. Instead of performing the NDS from scratch, our method only updates the NDLs of a limited number of solutions by extracting knowledge from the current NDL structure. Note that our NDL update method is performed twice at each iteration: once after reproduction and once after environmental selection. Extensive experiments demonstrate that, compared to five other state-of-the-art NDS methods, our proposed method avoids a significant number of unnecessary comparisons, not only on synthetic data sets but also in some real optimization scenarios. Last but not least, we find that our proposed method is also useful for the generational evolution model.
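For context, baseline nondominated sorting, which the proposed NDL update method avoids re-running from scratch, can be written as repeated Pareto-front extraction (minimization, toy objective vectors):

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective and better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondomination_levels(pop):
    # Level 0 is the Pareto front of pop, level 1 the front of the
    # remainder, and so on -- O(M * N^2) comparisons per full sort.
    levels, rest = [], list(pop)
    while rest:
        front = [p for p in rest if not any(dominates(q, p) for q in rest)]
        levels.append(front)
        rest = [p for p in rest if p not in front]
    return levels

pop = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(nondomination_levels(pop))
# → [[(1, 5), (2, 3), (4, 1)], [(3, 4)], [(5, 5)]]
```

In a steady-state model this full sort would run after every single new candidate; the paper's contribution is to update only the affected levels instead.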
Large Scale Comparative Visualisation of Regulatory Networks with TRNDiff
Chua, Xin-Yi; Buckingham, Lawrence; Hogan, James M.; ...
2015-06-01
The advent of Next Generation Sequencing (NGS) technologies has seen explosive growth in genomic datasets, and dense coverage of related organisms, supporting study of subtle, strain-specific variations as a determinant of function. Such data collections present fresh and complex challenges for bioinformatics, those of comparing models of complex relationships across hundreds and even thousands of sequences. Transcriptional Regulatory Network (TRN) structures document the influence of regulatory proteins called Transcription Factors (TFs) on associated Target Genes (TGs). TRNs are routinely inferred from model systems or iterative search, and analysis at these scales requires simultaneous displays of multiple networks well beyond those of existing network visualisation tools [1]. In this paper we describe TRNDiff, an open source system supporting the comparative analysis and visualization of TRNs (and similarly structured data) from many genomes, allowing rapid identification of functional variations within species. The approach is demonstrated through a small scale multiple TRN analysis of the Fur iron-uptake system of Yersinia, suggesting a number of candidate virulence factors; and through a larger study exploiting integration with the RegPrecise database (http://regprecise.lbl.gov; [2]) - a collection of hundreds of manually curated and predicted transcription factor regulons drawn from across the entire spectrum of prokaryotic organisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Kun; Bannister, Mark E.; Meyer, Fred W.
Here, in a magnetic fusion energy (MFE) device, the plasma-facing materials (PFMs) will be subjected to tremendous fluxes of ions, heat, and neutrons. The response of PFMs to the fusion environment is still not well defined. Tungsten metal is the present candidate of choice for PFM applications such as the divertor in ITER. However, tungsten's microstructure will evolve in service, possibly to include recrystallization. How tungsten's response to plasma exposure evolves with changes in microstructure is presently unknown. In this work, we have exposed hot-worked and recrystallized tungsten to an 80 eV helium ion beam at a temperature of 900 °C to fluences of 2 × 10²³ or 20 × 10²³ He/m². This resulted in a faceted surface structure at the lower fluence or a short but well-developed nanofuzz structure at the higher fluence. There was little difference in the hot-rolled or recrystallized material's near-surface (≤50 nm) bubbles at either fluence. At higher fluence and deeper depth, the bubble populations of the hot-rolled and recrystallized materials were different, those in the recrystallized material being larger and deeper. This may explain previous high-fluence results showing pronounced differences in recrystallized material. The deeper penetration in recrystallized material also implies that grain boundaries are traps, rather than high-diffusivity paths.
Characterization and damaging law of CFC for high heat flux actively cooled plasma facing components
NASA Astrophysics Data System (ADS)
Chevet, G.; Martin, E.; Boscary, J.; Camus, G.; Herb, V.; Schlosser, J.; Escourbiac, F.; Missirlian, M.
2011-10-01
The carbon fiber reinforced carbon composite (CFC) Sepcarb N11 has been used in the Tore Supra (TS) tokamak (Cadarache, France) as armour material for the plasma facing components. For the fabrication of the Wendelstein 7-X (W7-X) divertor (Greifswald, Germany), the NB31 material was chosen. For the fabrication of the ITER divertor, two potential CFC candidates are the NB31 and NB41 materials. In the case of Tore Supra, defects such as microcracks or debonding were found at the interface between CFC tile and copper heat sink. A mechanical characterization of the behaviour of N11 and NB31 was undertaken, allowing the identification of a damage model and finite element calculations both for flat tiles (TS and W7-X) and monoblock (ITER) armours. The mechanical responses of these CFC materials were found almost linear under on-axis tensile tests but highly nonlinear under shear tests or off-axis tensile tests. As a consequence, damage develops within the high shear-stress zones.
Atomic oxygen durability of solar concentrator materials for Space Station Freedom
NASA Technical Reports Server (NTRS)
Degroh, Kim K.; Terlep, Judith A.; Dever, Therese M.
1990-01-01
The findings of atomic oxygen exposure testing of candidate solar concentrator materials containing SiO2 and Al2O3 protective coatings, for use on Space Station Freedom solar dynamic power modules, are reviewed. Both continuous and iterative atomic oxygen exposure tests were conducted. Iterative air plasma ashing resulted in larger specular reflectance decreases and solar absorptance increases than continuous ashing to the same fluence, and appears to provide a more severe environment than the continuous atomic oxygen exposure that would occur in the low Earth orbit environment. First generation concentrator fabrication techniques produced surface defects including scratches, macroscopic bumps, dendritic regions, porosity, haziness, and pin hole defects. Several of these defects appear to be preferential sites for atomic oxygen attack leading to erosive undercutting. Extensive undercutting and flaking of reflective and protective coatings were found to be promoted through an undercutting tearing propagation process. Atomic oxygen erosion processes and their effects on optical performance are presented.
Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods
Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.
2013-01-01
Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822
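The interleaving pattern, greedy candidate selection followed by local least-squares refinement, can be shown on a generic sparse-approximation toy problem (Gaussian basis functions standing in for spokes at candidate spatial-frequency locations; this is not the MRI-specific algorithm, which also optimizes RF pulses and target phase):

```python
import numpy as np

def greedy_local_fit(y, x, centers, n_terms, width=0.1):
    # Interleave greedy selection (pick the candidate basis function most
    # correlated with the current residual) with a local refit (least
    # squares over all terms selected so far).
    basis = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)
    chosen, resid = [], y.copy()
    for _ in range(n_terms):
        scores = np.abs(basis.T @ resid)
        scores[chosen] = -np.inf                       # never re-pick a center
        chosen.append(int(np.argmax(scores)))
        A = basis[:, chosen]
        amps, *_ = np.linalg.lstsq(A, y, rcond=None)   # local refit step
        resid = y - A @ amps
    return chosen, amps, resid

x = np.linspace(0.0, 1.0, 101)
centers = np.linspace(0.0, 1.0, 11)                    # candidate "locations"
g = lambda c: np.exp(-0.5 * ((x - c) / 0.1) ** 2)
y = 2.0 * g(0.3) + 1.0 * g(0.7)                        # synthetic target profile
chosen, amps, resid = greedy_local_fit(y, x, centers, n_terms=2)
```

The greedy step searches broadly over candidates while the local refit keeps all currently selected terms jointly optimal, which is the complementary-strengths combination the abstract describes.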
Grant, Sean; Agniel, Denis; Almirall, Daniel; Burkhart, Q; Hunter, Sarah B; McCaffrey, Daniel F; Pedersen, Eric R; Ramchand, Rajeev; Griffin, Beth Ann
2017-12-19
Over 1.6 million adolescents in the United States meet criteria for substance use disorders (SUDs). While there are promising treatments for SUDs, adolescents respond to these treatments differentially in part based on the setting in which treatments are delivered. One way to address such individualized response to treatment is through the development of adaptive interventions (AIs): sequences of decision rules for altering treatment based on an individual's needs. This protocol describes a project with the overarching goal of beginning the development of AIs that provide recommendations for altering the setting of an adolescent's substance use treatment. This project has three discrete aims: (1) explore the views of various stakeholders (parents, providers, policymakers, and researchers) on deciding the setting of substance use treatment for an adolescent based on individualized need, (2) generate hypotheses concerning candidate AIs, and (3) compare the relative effectiveness among candidate AIs and non-adaptive interventions commonly used in everyday practice. This project uses a mixed-methods approach. First, we will conduct an iterative stakeholder engagement process, using RAND's ExpertLens online system, to assess the importance of considering specific individual needs and clinical outcomes when deciding the setting for an adolescent's substance use treatment. Second, we will use results from the stakeholder engagement process to analyze an observational longitudinal data set of 15,656 adolescents in substance use treatment, supported by the Substance Abuse and Mental Health Services Administration, using the Global Appraisal of Individual Needs questionnaire. We will utilize methods based on Q-learning regression to generate hypotheses about candidate AIs. 
Third, we will use robust statistical methods that aim to appropriately handle casemix adjustment on a large number of covariates (marginal structural modeling and inverse probability of treatment weights) to compare the relative effectiveness among candidate AIs and non-adaptive decision rules that are commonly used in everyday practice. This project begins filling a major gap in clinical and research efforts for adolescents in substance use treatment. Findings could be used to inform the further development and revision of influential multi-dimensional assessment and treatment planning tools, or lay the foundation for subsequent experiments to further develop or test AIs for treatment planning.
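The inverse-probability-of-treatment weighting named above can be sketched with a stratified toy example: the propensity is estimated within covariate strata and each subject is weighted by the inverse probability of the treatment actually received. Real analyses, as in this protocol, estimate propensities with regression models over many covariates:

```python
from collections import defaultdict

def iptw_weights(stratum, treated):
    # Estimate the propensity P(treated | stratum) by the within-stratum
    # treatment frequency, then weight each subject by the inverse
    # probability of the treatment actually received.
    counts = defaultdict(lambda: [0, 0])      # stratum -> [n_treated, n]
    for s, t in zip(stratum, treated):
        counts[s][0] += t
        counts[s][1] += 1
    weights = []
    for s, t in zip(stratum, treated):
        p = counts[s][0] / counts[s][1]
        weights.append(1.0 / p if t else 1.0 / (1.0 - p))
    return weights

w = iptw_weights(["A", "A", "A", "A", "B", "B"], [1, 1, 1, 0, 1, 0])
```

In stratum A the estimated propensity is 3/4, so treated subjects get weight 4/3 and the untreated subject weight 4; after weighting, the treated and untreated groups each resemble the full stratum, which is what makes the treatment comparison fair.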
Increasing Resilience to Traumatic Stress: Understanding the Protective Role of Well-Being.
Tory Toole, J; Rice, Mark A; Cargill, Jordan; Craddock, Travis J A; Nierenberg, Barry; Klimas, Nancy G; Fletcher, Mary Ann; Morris, Mariana; Zysman, Joel; Broderick, Gordon
2018-01-01
The brain maintains homeostasis in part through a network of feedback and feed-forward mechanisms, where neurochemicals and immune markers act as mediators. Using a previously constructed model of biobehavioral feedback, we found that in addition to healthy equilibrium another stable regulatory program supported chronic depression and anxiety. Exploring mechanisms that might underlie the contributions of subjective well-being to improved therapeutic outcomes in depression, we iteratively screened 288 candidate feedback patterns linking well-being to molecular signaling networks for those that maintained the original homeostatic regimes. Simulating stressful trigger events on each candidate network while maintaining high levels of subjective well-being isolated a specific feedback network where well-being was promoted by dopamine and acetylcholine, and itself promoted norepinephrine while inhibiting cortisol expression. This biobehavioral feedback mechanism was especially effective in reproducing well-being's clinically documented ability to promote resilience and protect against onset of depression and anxiety.
Choosing order of operations to accelerate strip structure analysis in parameter range
NASA Astrophysics Data System (ADS)
Kuksenko, S. P.; Akhunov, R. R.; Gazizov, T. R.
2018-05-01
The paper considers the use of iterative methods for solving the sequence of linear algebraic systems obtained in the quasistatic analysis of strip structures with the method of moments. From the analysis of four strip structures, the authors show that additional acceleration (up to 2.21 times) of the iterative process can be obtained when solving linear systems repeatedly, by choosing a proper order of operations and a preconditioner. The obtained results can be used to accelerate computer-aided design of various strip structures. The proposed choice of the order of operations is simple and universal, and could be used not only for strip structure analysis but also for a wide range of computational problems.
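The kind of saving described, reusing work across a sequence of slightly modified linear systems, can be sketched with a Jacobi-preconditioned conjugate gradient that builds the preconditioner once and warm-starts each solve from the previous solution (a generic illustration with made-up SPD systems, not the authors' solver or strip-structure matrices):

```python
import numpy as np

def pcg(A, b, M_inv, x0, tol=1e-10, max_iter=500):
    # Preconditioned conjugate gradient for SPD A; returns the solution
    # and the number of iterations used.
    x = x0.copy()
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            return x, k
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

rng = np.random.default_rng(2)
Q = rng.normal(size=(20, 20))
A0 = Q @ Q.T + 20.0 * np.eye(20)          # SPD base system
b = rng.normal(size=20)
M_inv = np.diag(1.0 / np.diag(A0))        # Jacobi preconditioner, built once
x = np.zeros(20)
for k in range(5):                        # sweep over slightly modified systems
    Ak = A0 + 0.01 * k * np.eye(20)
    x, iters = pcg(Ak, b, M_inv, x)       # warm start + reused preconditioner
```

Because consecutive systems in a parameter sweep differ only slightly, the reused preconditioner stays effective and each warm-started solve needs far fewer iterations than a cold start, the same ordering-of-work effect the paper quantifies.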
Updating the OMERACT Filter: Implications for imaging and soluble biomarkers
D’Agostino, Maria-Antonietta; Boers, Maarten; Kirwan, John; van der Heijde, Desirée; Østergaard, Mikkel; Schett, Georg; Landewé, Robert B.M.; Maksymowych, Walter P.; Naredo, Esperanza; Dougados, Maxime; Iagnocco, Annamaria; Bingham, Clifton O.; Brooks, Peter; Beaton, Dorcas; Gandjbakhch, Frederique; Gossec, Laure; Guillemin, Francis; Hewlett, Sarah; Kloppenburg, Margreet; March, Lyn; Mease, Philip J; Moller, Ingrid; Simon, Lee S; Singh, Jasvinder A; Strand, Vibeke; Wakefield, Richard J; Wells, George; Tugwell, Peter; Conaghan, Philip G
2014-01-01
Objective The OMERACT Filter provides a framework for the validation of outcome measures for use in rheumatology clinical research. However, imaging and biochemical measures may face additional validation challenges due to their technical nature. The Imaging and Soluble Biomarker Session at OMERACT 11 aimed to provide a guide for the iterative development of an imaging or biochemical measurement instrument so it can be used in therapeutic assessment. Methods A hierarchical structure was proposed, reflecting 3 dimensions needed for validating an imaging or biochemical measurement instrument: outcome domain(s), study setting and performance of the instrument. Movement along the axes in any dimension reflects increasing validation. For a given test instrument, the 3-axis structure assesses the extent to which the instrument is a validated measure for the chosen domain, whether it assesses a patient-centred or disease-centred variable, and whether its technical performance is adequate in the context of its application. Some currently used imaging and soluble biomarkers for rheumatoid arthritis, spondyloarthritis and knee osteoarthritis were then evaluated using the original OMERACT filter and the newly proposed structure. Break-out groups critically reviewed the extent to which the candidate biomarkers complied with the proposed step-wise approach, as a way of examining the utility of the proposed 3-dimensional structure. Results Although there was a broad acceptance of the value of the proposed structure in general, some areas for improvement were suggested including clarification of criteria for achieving a certain level of validation and how to deal with extension of the structure to areas beyond clinical trials. Conclusion General support was obtained for a proposed tri-axis structure to assess validation of imaging and soluble biomarkers; nevertheless, additional work is required to better evaluate its place within the OMERACT Filter 2.0. PMID:24584916
Updating the OMERACT filter: implications for imaging and soluble biomarkers.
D'Agostino, Maria-Antonietta; Boers, Maarten; Kirwan, John; van der Heijde, Désirée; Østergaard, Mikkel; Schett, Georg; Landewé, Robert B; Maksymowych, Walter P; Naredo, Esperanza; Dougados, Maxime; Iagnocco, Annamaria; Bingham, Clifton O; Brooks, Peter M; Beaton, Dorcas E; Gandjbakhch, Frederique; Gossec, Laure; Guillemin, Francis; Hewlett, Sarah E; Kloppenburg, Margreet; March, Lyn; Mease, Philip J; Moller, Ingrid; Simon, Lee S; Singh, Jasvinder A; Strand, Vibeke; Wakefield, Richard J; Wells, George A; Tugwell, Peter; Conaghan, Philip G
2014-05-01
The Outcome Measures in Rheumatology (OMERACT) Filter provides a framework for the validation of outcome measures for use in rheumatology clinical research. However, imaging and biochemical measures may face additional validation challenges because of their technical nature. The Imaging and Soluble Biomarker Session at OMERACT 11 aimed to provide a guide for the iterative development of an imaging or biochemical measurement instrument so it can be used in therapeutic assessment. A hierarchical structure was proposed, reflecting 3 dimensions needed for validating an imaging or biochemical measurement instrument: outcome domain(s), study setting, and performance of the instrument. Movement along the axes in any dimension reflects increasing validation. For a given test instrument, the 3-axis structure assesses the extent to which the instrument is a validated measure for the chosen domain, whether it assesses a patient-centered or disease-centered variable, and whether its technical performance is adequate in the context of its application. Some currently used imaging and soluble biomarkers for rheumatoid arthritis, spondyloarthritis, and knee osteoarthritis were then evaluated using the original OMERACT Filter and the newly proposed structure. Breakout groups critically reviewed the extent to which the candidate biomarkers complied with the proposed stepwise approach, as a way of examining the utility of the proposed 3-dimensional structure. Although there was a broad acceptance of the value of the proposed structure in general, some areas for improvement were suggested including clarification of criteria for achieving a certain level of validation and how to deal with extension of the structure to areas beyond clinical trials. General support was obtained for a proposed tri-axis structure to assess validation of imaging and soluble biomarkers; nevertheless, additional work is required to better evaluate its place within the OMERACT Filter 2.0.
Effective progression of nuclear magnetic resonance-detected fragment hits.
Eaton, Hugh L; Wyss, Daniel F
2011-01-01
Fragment-based drug discovery (FBDD) has become increasingly popular over the last decade as an alternative lead-generation tool to HTS approaches. Several compounds originating from a fragment-based approach have now progressed into the clinic, demonstrating the utility of this emerging field. While fragment hit identification has become much more routine and may involve different screening approaches, the efficient progression of fragment hits into quality lead series may still present a major bottleneck for the broadly successful application of FBDD. In our laboratory, we have extensive experience in fragment-based NMR screening (SbN) and the subsequent iterative progression of fragment hits using structure-assisted chemistry. To maximize impact, we have applied this approach strategically to early- and high-priority targets, and those struggling for leads. Its application has yielded a clinical candidate for BACE1 and lead series in about one third of the SbN/FBDD projects. In this chapter, we will give an overview of our strategy and focus our discussion on NMR-based FBDD approaches. Copyright © 2011 Elsevier Inc. All rights reserved.
Searches for point sources in the Galactic Center region
NASA Astrophysics Data System (ADS)
di Mauro, Mattia; Fermi-LAT Collaboration
2017-01-01
Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for additional point sources at each of the pixels of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations for the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of an MSP bulge population.
Design, Manufacture, and Experimental Serviceability Validation of ITER Blanket Components
NASA Astrophysics Data System (ADS)
Leshukov, A. Yu.; Strebkov, Yu. S.; Sviridenko, M. N.; Safronov, V. M.; Putrik, A. B.
2017-12-01
In 2014, the Russian Federation and the ITER International Organization signed two Procurement Arrangements (PAs) for ITER blanket components: 1.6.P1ARF.01 "Blanket First Wall" of February 14, 2014, and 1.6.P3.RF.01 "Blanket Module Connections" of December 19, 2014. The first PA stipulates development, manufacture, testing, and delivery to the ITER site of 179 Enhanced Heat Flux (EHF) First Wall (FW) Panels intended for withstanding the heat flux from the plasma up to 4.7 MW/m2. Two Russian institutions, NIIEFA (Efremov Institute) and NIKIET, are responsible for the implementation of this PA. NIIEFA manufactures plasma-facing components (PFCs) of the EHF FW panels and performs the final assembly and testing of the panels, and NIKIET manufactures FW beam structures, load-bearing structures of PFCs, and all elements of the panel attachment system. As for the second PA, NIKIET is the sole official supplier of flexible blanket supports, electrical insulation key pads (EIKPs), and blanket module/vacuum vessel electrical connectors. Joint activities of NIKIET and NIIEFA for implementing PA 1.6.P1ARF.01 are briefly described, and information on implementation of PA 1.6.P3.RF.01 is given. Results of the engineering design and research efforts in the scope of the above PAs in 2015-2016 are reported, and results of developing the technology for manufacturing ITER blanket components are presented.
Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh
2017-04-01
The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent from melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Numerical analysis of modified Central Solenoid insert design
Khodak, Andrei; Martovetsky, Nicolai; Smirnov, Aleksandre; ...
2015-06-21
The United States ITER Project Office (USIPO) is responsible for fabrication of the Central Solenoid (CS) for the ITER project. The ITER machine is currently under construction by seven parties in Cadarache, France. The CS Insert (CSI) project should provide a verification of the conductor performance in relevant conditions of temperature, field, currents and mechanical strain. The USIPO designed the CSI that will be tested at the Central Solenoid Model Coil (CSMC) Test Facility at JAEA, Naka. To validate the modified design we performed three-dimensional numerical simulations using a coupled solver for simultaneous structural, thermal and electromagnetic analysis. Thermal and electromagnetic simulations supported the structural calculations, providing the necessary loads and strains. According to the current analysis, the design of the modified coil satisfies the ITER magnet structural design criteria for the following conditions: (1) room temperature, no current, (2) temperature 4 K, no current, (3) temperature 4 K, current 60 kA direct charge, and (4) temperature 4 K, current 60 kA reverse charge. Fatigue life assessment analysis is performed for the alternating conditions of: temperature 4 K, no current, and temperature 4 K, current 45 kA direct charge. Results of the fatigue analysis show that parts of the coil assembly can be qualified for up to 1 million cycles. Distributions of the Current Sharing Temperature (TCS) in the superconductor were obtained from numerical results using a parameterization of the critical surface in the form similar to that proposed for ITER. Lastly, special APDL scripts were developed for ANSYS, allowing a one-dimensional representation of TCS along the cable, as well as three-dimensional fields of TCS in the superconductor material. Published by Elsevier B.V.
How children perceive fractals: Hierarchical self-similarity and cognitive development
Martins, Maurício Dias; Laaha, Sabine; Freiberger, Eva Maria; Choi, Soonja; Fitch, W. Tecumseh
2014-01-01
The ability to understand and generate hierarchical structures is a crucial component of human cognition, available in language, music, mathematics and problem solving. Recursion is a particularly useful mechanism for generating complex hierarchies by means of self-embedding rules. In the visual domain, fractals are recursive structures in which simple transformation rules generate hierarchies of infinite depth. Research on how children acquire these rules can provide valuable insight into the cognitive requirements and learning constraints of recursion. Here, we used fractals to investigate the acquisition of recursion in the visual domain, and probed for correlations with grammar comprehension and general intelligence. We compared second (n = 26) and fourth graders (n = 26) in their ability to represent two types of rules for generating hierarchical structures: Recursive rules, on the one hand, which generate new hierarchical levels; and iterative rules, on the other hand, which merely insert items within hierarchies without generating new levels. We found that the majority of fourth graders, but not second graders, were able to represent both recursive and iterative rules. This difference was partially accounted for by second graders’ impairment in detecting hierarchical mistakes, and correlated with between-grade differences in grammar comprehension tasks. Empirically, recursion and iteration also differed in at least one crucial aspect: While the ability to learn recursive rules seemed to depend on the previous acquisition of simple iterative representations, the opposite was not true, i.e., children were able to acquire iterative rules before they acquired recursive representations. These results suggest that the acquisition of recursion in vision follows learning constraints similar to the acquisition of recursion in language, and that both domains share cognitive resources involved in hierarchical processing. PMID:24955884
NASA Astrophysics Data System (ADS)
Molde, H.; Zwick, D.; Muskulus, M.
2014-12-01
Support structures for offshore wind turbines contribute a large part of the total project cost, and a cost saving of a few percent would have considerable impact. At present, support structures are designed with simplified methods, e.g., spreadsheet analysis, before more detailed load calculations are performed. Due to the large number of load cases, only a few semi-manual design iterations are typically executed. Computer-assisted optimization algorithms could help to further explore design limits and avoid unnecessary conservatism. In this study the simultaneous perturbation stochastic approximation method developed by Spall in the 1990s was assessed with respect to its suitability for support structure optimization. The method depends on a few parameters and an objective function that need to be chosen carefully. In each iteration the structure is evaluated by time-domain analyses, and joint fatigue lifetimes and ultimate strength utilization are computed from stress concentration factors. A pseudo-gradient is determined from only two analysis runs and the design is adjusted in the direction that improves it the most. The algorithm is able to generate considerably improved designs, compared to other methods, in a few hundred iterations, which is demonstrated for the NOWITECH 10 MW reference turbine.
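The simultaneous perturbation idea (a pseudo-gradient from only two objective evaluations per iteration, regardless of the number of design variables) can be sketched as follows. The gain constants and the quadratic test objective are invented stand-ins, not the study's fatigue and utilization objective.

```python
import random

def spsa_minimize(f, theta, n_iter=800, a=0.2, c=0.1, A=10.0, seed=1):
    """Simultaneous perturbation stochastic approximation (Spall-style).
    Each iteration estimates a pseudo-gradient from two evaluations of f,
    independent of the number of design variables."""
    rng = random.Random(seed)
    theta = list(theta)
    p = len(theta)
    for k in range(n_iter):
        ak = a / (k + 1 + A) ** 0.602        # standard decaying gain sequences
        ck = c / (k + 1) ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in range(p)]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = f(plus) - f(minus)            # only two "analysis runs"
        for i in range(p):
            theta[i] -= ak * diff / (2.0 * ck * delta[i])
    return theta

# Illustrative objective: squared distance to an invented optimal design.
target = [1.0, -2.0, 0.5]
f = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
best = spsa_minimize(f, [0.0, 0.0, 0.0])
```

For a smooth objective the SPSA pseudo-gradient equals the true gradient plus zero-mean noise, so the decaying gains average the noise out while still needing only two evaluations per design update.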
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ji; Fischer, Debra A.; Boyajian, Tabetha S.
We report the latest Planet Hunter results, including PH2 b, a Jupiter-size (R_PL = 10.12 ± 0.56 R_⊕) planet orbiting in the habitable zone of a solar-type star. PH2 b was elevated from candidate status when a series of false-positive tests yielded a 99.9% confidence level that transit events detected around the star KIC 12735740 had a planetary origin. Planet Hunter volunteers have also discovered 42 new planet candidates in the Kepler public archive data, of which 33 have at least 3 transits recorded. Most of these transit candidates have orbital periods longer than 100 days and 20 are potentially located in the habitable zones of their host stars. Nine candidates were detected with only two transit events and the prospective periods are longer than 400 days. The photometric models suggest that these objects have radii that range between those of Neptune and Jupiter. These detections nearly double the number of gas-giant planet candidates orbiting at habitable-zone distances. We conducted spectroscopic observations for nine of the brighter targets to improve the stellar parameters and we obtained adaptive optics imaging for four of the stars to search for blended background or foreground stars that could confuse our photometric modeling. We present an iterative analysis method to derive the stellar and planet properties and uncertainties by combining the available spectroscopic parameters, stellar evolution models, and transiting light curve parameters, weighted by the measurement errors. Planet Hunters is a citizen science project that crowdsources the assessment of NASA Kepler light curves. The discovery of these 43 planet candidates demonstrates the success of citizen scientists at identifying planet candidates, even in longer period orbits with only two or three transit events.
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
A novel dynamical community detection algorithm based on weighting scheme
NASA Astrophysics Data System (ADS)
Li, Ju; Yu, Kai; Hu, Ke
2015-12-01
Network dynamics plays an important role in analyzing the correlation between function properties and topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates an iterative process on membership vectors with a weighting scheme, i.e. a weighting W and a tightness T. These new elements can be used to adjust the link strength and the node compactness, improving the speed and accuracy of community structure detection. To estimate the optimal stopping time of the iteration, we utilize a new stability measure defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. The algorithm naturally supports overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.
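As a hedged illustration of iterating membership vectors over a network (the DI algorithm's actual weighting W and tightness T terms are not reproduced here), one can propagate per-node membership vectors through a random-walk matrix for a few steps; the two-clique toy graph below is invented.

```python
# Generic membership-vector iteration sketch. This is NOT the paper's DI
# algorithm; it only illustrates the idea of iterating membership vectors
# over a weighted network on an invented two-clique toy graph.

def iterate_memberships(adj, n_steps=3):
    n = len(adj)
    # Row-stochastic random-walk matrix with self-loops.
    P = []
    for i in range(n):
        row = [adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
        s = sum(row)
        P.append([v / s for v in row])
    # Each node starts with full membership in its own community.
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(n_steps):
        M = [[sum(P[i][k] * M[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return M

# Two 3-cliques joined by a single bridge edge (2-3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = [[0.0] * 6 for _ in range(6)]
for i, j in edges:
    adj[i][j] = adj[j][i] = 1.0

M = iterate_memberships(adj)
```

A few steps keep the community structure visible; iterating to convergence would wash it out, which is why a principled stopping criterion (the paper's Markov random-walk auto-covariance stability measure) is needed.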
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques, for a wide range of variations in the design variables.
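A minimal sketch of the reduced-basis idea, assuming a stiffness matrix that depends linearly on one design variable (the matrices below are invented SPD examples, not from the paper): the basis consists of the original analyzed design and its first-order sensitivity vector, and the perturbed system is solved by Galerkin projection onto that two-vector space.

```python
import numpy as np

# Toy reduced-basis reanalysis: K(a) = K0 + a*K1, with exact solution x(a)
# solving K(a) x = f. Basis = [x(0), dx/da at a=0]; project and solve a
# 2x2 reduced system. Matrices are invented SPD examples.

K0 = np.array([[4.0, 1.0, 0.0],
               [1.0, 5.0, 1.0],
               [0.0, 1.0, 6.0]])
K1 = np.diag([1.0, 2.0, 1.0])
f = np.array([1.0, 2.0, 3.0])

x0 = np.linalg.solve(K0, f)            # original analyzed design
t = np.linalg.solve(K0, -K1 @ x0)      # first-order sensitivity dx/da

a = 0.3                                # design perturbation
K = K0 + a * K1
V = np.column_stack([x0, t])           # reduced basis (two vectors)
coef = np.linalg.solve(V.T @ K @ V, V.T @ f)
x_rb = V @ coef                        # reduced-basis reanalysis

x_exact = np.linalg.solve(K, f)        # full solve, for comparison only
```

Because the Galerkin projection is taken over a space that contains the original solution, the reduced-basis reanalysis can never be worse than simply reusing x0, measured in the K-energy norm, and the sensitivity vector captures the first-order change with the design variable.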
Iterative feature refinement for accurate undersampled MR image reconstruction
NASA Astrophysics Data System (ADS)
Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2016-05-01
Accelerating MR scanning is of great significance for clinical, research, and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations, such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CSMRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable to and even superior to other state-of-the-art reconstruction approaches.
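IFR-CS itself interleaves sparsity-promoting denoising, feature refinement, and Tikhonov regularization; as a minimal stand-in for the first and last ingredients, the classic iterative soft-thresholding loop with Fourier data consistency can be sketched on an invented 1-D toy problem. This is not the IFR-CS algorithm, only the baseline CS reconstruction loop such methods refine.

```python
import numpy as np

# Minimal CS-recovery sketch: iterative soft-thresholding with Fourier
# data consistency on a toy 1-D sparse signal. Sampling pattern, signal,
# and threshold are all invented for illustration.

rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.8, 0.6]           # sparse "image"

mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, size=32, replace=False)] = True    # keep 32 of 64 samples
y = mask * np.fft.fft(x_true, norm="ortho")           # undersampled k-space

x = np.zeros(n, dtype=complex)
lam = 0.05
for _ in range(300):
    # Gradient step toward data consistency (step 1 is safe: ||A^H A|| <= 1).
    x = x + np.fft.ifft(mask * (y - mask * np.fft.fft(x, norm="ortho")),
                        norm="ortho")
    # Sparsity-promoting soft threshold on complex magnitudes.
    mag = np.abs(x)
    x = np.where(mag > lam, x * (1 - lam / np.maximum(mag, 1e-12)), 0.0)

x_rec = np.real(x)
```

Even this bare loop recovers the support of the sparse signal from half the Fourier samples; the soft threshold biases the recovered amplitudes slightly toward zero, which is one of the detail losses that refinement steps like IFR aim to restore.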
NASA Astrophysics Data System (ADS)
Knaster, J.; Evans, D.; Rajainmaki, H.
2012-06-01
The pre-compression rings (PCRs) for the International Thermonuclear Experimental Reactor (ITER) represent one of the largest and most highly stressed composite structures ever designed for long-term operation at 4 K. Three rings, each 5 m in diameter and 337 × 288 mm in cross-section, will be installed at the top and bottom of the eighteen "D"-shaped Toroidal Field (TF) coils to apply a total centripetal load of 70 MN per TF coil. The interaction of the 68 kA conductor current circulating in the coil (for a total of 9.1 MA) with the magnetic field required to confine the plasma during operation will result in Lorentz forces that build in-plane and out-of-plane loads. The PCRs are essential to keep the stresses below the acceptable level for the ITER magnet structural materials.
Comments on the variational modified-hypernetted-chain theory for simple fluids
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1986-02-01
The variational modified-hypernetted-chain (VMHNC) theory, based on the approximation of universality of the bridge functions, is reformulated. The new formulation includes recent calculations by Lado and by Lado, Foiles, and Ashcroft, as two stages in a systematic approach which is analyzed. A variational iterative procedure for solving the exact (diagrammatic) equations for the fluid structure, formally identical to the VMHNC, is described, featuring the theory of simple classical fluids as a one-iteration theory. An accurate method for calculating the pair structure for a given potential, and for inverting structure factor data in order to obtain the potential and the thermodynamic functions, follows from our analysis.
Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł
2007-04-21
A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.
NASA Astrophysics Data System (ADS)
Wu, M. Q.; Pan, C. K.; Chan, V. S.; Li, G. Q.; Garofalo, A. M.; Jian, X.; Liu, L.; Ren, Q. L.; Chen, J. L.; Gao, X.; Gong, X. Z.; Ding, S. Y.; Qian, J. P.; CFETR Physics Team
2018-04-01
Time-dependent integrated modeling of DIII-D ITER-like and high bootstrap current plasma ramp-up discharges has been performed with the equilibrium code EFIT, and the transport codes TGYRO and ONETWO. Electron and ion temperature profiles are simulated by TGYRO with the TGLF (SAT0 or VX model) turbulent and NEO neoclassical transport models. The VX model is a new empirical extension of the TGLF turbulent model [Jian et al., Nucl. Fusion 58, 016011 (2018)], which captures the physics of multi-scale interaction between low-k and high-k turbulence from nonlinear gyro-kinetic simulation. This model is demonstrated to accurately model low Ip discharges from the EAST tokamak. Time evolution of the plasma current density profile is simulated by ONETWO with the experimental current ramp-up rate. The general trend of the predicted evolution of the current density profile is consistent with that obtained from the equilibrium reconstruction with Motional Stark effect constraints. The predicted evolution of βN, li, and βP also agrees well with the experiments. For the ITER-like cases, the electron and ion temperature profiles predicted using TGLF_Sat0 agree closely with the experimentally measured profiles, and are demonstrably better than those from other proposed transport models. For the high bootstrap current case, the electron and ion temperature profiles are predicted better by the VX model. It is found that the SAT0 model works well at high IP (>0.76 MA) while the VX model covers a wider range of plasma current (IP > 0.6 MA). The results reported in this paper suggest that the developed integrated modeling could be a candidate for ITER and CFETR ramp-up engineering design modeling.
Fast Vessel Detection in Gaofen-3 SAR Images with Ultrafine Strip-Map Mode
Liu, Lei; Qiu, Xiaolan; Lei, Bin
2017-01-01
This study aims to detect vessels with lengths ranging from about 70 to 300 m, in Gaofen-3 (GF-3) SAR images with ultrafine strip-map (UFS) mode, as fast as possible. Based on the analysis of the characteristics of vessels in GF-3 SAR imagery, an effective vessel detection method is proposed in this paper. Firstly, the iterative constant false alarm rate (CFAR) method is employed to detect the potential ship pixels. Secondly, the mean-shift operation is applied on each potential ship pixel to identify the candidate target region. During the mean-shift process, we maintain a selection matrix recording which pixels can be taken, and these pixels are called the valid points of the candidate target. An l1-norm regression is used to extract the principal axis and detect the valid points. Finally, two kinds of false alarms, the bright line and the azimuth ambiguity, are removed by comparing the valid area of the candidate target with a pre-defined value and by computing the displacement between the true target and the corresponding replicas, respectively. Experimental results on three GF-3 SAR images with UFS mode demonstrate the effectiveness and efficiency of the proposed method. PMID:28678197
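The iterative CFAR stage builds on the classic cell-averaging CFAR detector, which can be sketched in 1-D; the window sizes, scale factor, and synthetic profile below are illustrative inventions, and the paper works on 2-D imagery with an iterative refinement of this basic scheme.

```python
# 1-D cell-averaging CFAR sketch: a cell is declared a detection when its
# value exceeds scale * (mean of the training cells), with guard cells
# excluded around the cell under test. All numbers below are invented.

def ca_cfar(signal, guard=2, train=8, scale=5.0):
    detections = []
    half = guard + train
    for i in range(half, len(signal) - half):
        training = (signal[i - half:i - guard]
                    + signal[i + guard + 1:i + half + 1])
        threshold = scale * sum(training) / len(training)
        if signal[i] > threshold:
            detections.append(i)
    return detections

# Flat clutter floor with two bright point targets.
profile = [1.0] * 100
profile[30] = 25.0
profile[70] = 25.0
hits = ca_cfar(profile)   # -> [30, 70]
```

On this synthetic profile the detector flags exactly the two target cells: the guard cells keep a target from inflating its own threshold, and the adaptive threshold tracks the local clutter level, which is the property the paper's iterative variant refines for SAR imagery.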
Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza
2015-01-01
This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). In order to extract blood vessel centerlines, the algorithm of vessel extraction starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained using a simple region growing algorithm applied iteratively, which merges centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image, using multi-structure-element morphology and modification of FDCT coefficients. Then, a Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundary of candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, containing 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.
Fast ground filtering for TLS data via Scanline Density Analysis
NASA Astrophysics Data System (ADS)
Che, Erzhuo; Olsen, Michael J.
2017-07-01
Terrestrial Laser Scanning (TLS) efficiently collects 3D information based on lidar (light detection and ranging) technology. TLS has been widely used in topographic mapping, engineering surveying, forestry, industrial facilities, cultural heritage, and so on. Ground filtering is a common procedure in lidar data processing, which separates the point cloud data into ground points and non-ground points. Effective ground filtering is helpful for subsequent procedures such as segmentation, classification, and modeling. Numerous ground filtering algorithms have been developed for Airborne Laser Scanning (ALS) data. However, many of these are error prone in application to TLS data because of its different angle of view and highly variable resolution. Further, many ground filtering techniques are limited in application within challenging topography and experience difficulty coping with some objects such as short vegetation, steep slopes, and so forth. Lastly, due to the large size of point cloud data, operations such as data traversing, multiple iterations, and neighbor searching significantly affect the computation efficiency. In order to overcome these challenges, we present an efficient ground filtering method for TLS data via a Scanline Density Analysis, which is very fast because it exploits the grid structure storing TLS data. The process first separates the ground candidates, density features, and unidentified points based on an analysis of point density within each scanline. Second, a region growth using the scan pattern is performed to cluster the ground candidates and further refine the ground points (clusters). In the experiment, the effectiveness, parameter robustness, and efficiency of the proposed method are demonstrated with datasets collected from an urban scene and a natural scene, respectively.
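A hedged toy sketch of the "point density within a scanline" idea: points are kept in scan order, local density is estimated from the spacing to a few neighbors along the scanline, and dense runs become ground candidates. The real method's density features, thresholds, and region growth are not reproduced, and the synthetic scanline below is invented.

```python
import math

# Toy "scanline density" ground-candidate labeling. This is only an
# illustration of thresholding per-scanline point density; it is not the
# paper's actual feature set or region-growing refinement.

def ground_candidates(scanline_xyz, k=2, max_spacing=0.3):
    """Label points whose mean distance to k neighbors on each side
    (along the scanline ordering) is small, i.e. locally dense."""
    n = len(scanline_xyz)
    labels = []
    for i in range(n):
        lo, hi = max(0, i - k), min(n - 1, i + k)
        neigh = [j for j in range(lo, hi + 1) if j != i]
        d = sum(math.dist(scanline_xyz[i], scanline_xyz[j])
                for j in neigh) / len(neigh)
        labels.append(d < max_spacing * k)   # crude density test
    return labels

# Synthetic scanline: a dense near-range run, then sparse far points.
line = [(0.1 * i, 0.0, 0.0) for i in range(20)]   # spacing 0.1 (dense)
line += [(2.0 + i, 0.0, 3.0) for i in range(5)]   # spacing 1.0 (sparse)
labels = ground_candidates(line)
```

Because the test is purely local to each scanline, it avoids the global neighbor searches and multiple passes that make many ALS-style filters slow on TLS data, which is the efficiency argument the paper makes for exploiting the scan structure.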
Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza
2015-01-01
This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images obtained from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and at different scales. For this purpose, each directional image is processed using first-order derivative information and the eigenvalues of the Hessian matrix. The final vessel segmentation is obtained by iteratively applying a simple region growing algorithm, which merges the centerline images with the contents of images produced by a modified top-hat transform followed by bit-plane slicing. After extracting the blood vessels from the FFA image, candidate regions for the OD are enhanced by removing the blood vessels from the image, using multi-structure-element morphology and modification of the FDCT coefficients. Then, a Canny edge detector and the Hough transform are applied to the reconstructed image to extract the boundaries of the candidate regions. In the next step, information about the main arc of the retinal vessels surrounding the OD region is used to determine the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction. PMID:26284170
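The Hessian-based processing of each directional image can be illustrated on a plain 2D array. This is a generic sketch of eigenvalue-based line enhancement, not the paper's FDCT pipeline; the tiny synthetic image and the bright-line convention are assumptions for the demo.

```python
import numpy as np

def line_response(img):
    """Per-pixel line response from the eigenvalues of the Hessian:
    on a bright ridge the second derivative across the ridge (the
    smaller eigenvalue) is strongly negative, so -l2 is large there."""
    gy, gx = np.gradient(img)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    tr = gxx + gyy                       # Hessian trace
    det = gxx * gyy - gxy * gxy          # Hessian determinant
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    l2 = tr / 2 - disc                   # smaller eigenvalue
    return np.where(l2 < 0, -l2, 0.0)

img = np.zeros((9, 9))
img[4, :] = 1.0                          # one bright horizontal line
resp = line_response(img)                # response peaks along the line
```

A realistic version would smooth with Gaussians at several scales before differentiating; the closed-form eigenvalues used here apply because the Hessian is a symmetric 2x2 matrix at each pixel.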
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hanming; Wang, Linyuan; Li, Lei
2016-06-15
Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery. This reconstruction has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as “iFBP-TV” and “TV-FADM,” respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulation and real CT-scanned datasets. This approach could reduce streak metal artifacts effectively and avoid the aforementioned effects in the vicinity of the metals.
The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square errors, normalized mean absolute distance, and universal quality index metrics of the images. Both iFBP-TV and TV-FADM methods outperform other counterparts in all cases. Unlike the conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and prevention of the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.
DOT National Transportation Integrated Search
2006-02-01
The process of selecting candidate structures (and appropriate components of structures) for lithium treatment invariably involves sampling one or several components of the structures for laboratory investigations, particularly petrographic e...
Reduction of asymmetric wall force in ITER disruptions with fast current quench
NASA Astrophysics Data System (ADS)
Strauss, H.
2018-02-01
One of the problems caused by disruptions in tokamaks is the asymmetric electromechanical force produced in conducting structures surrounding the plasma. The asymmetric wall force in ITER asymmetric vertical displacement event (AVDE) disruptions is calculated in nonlinear 3D MHD simulations. It is found that the wall force can vary by almost an order of magnitude, depending on the ratio of the current quench time to the resistive wall magnetic penetration time. In ITER, this ratio is relatively low, resulting in a low asymmetric wall force. In JET, this ratio is relatively high, resulting in a high asymmetric wall force. Previous extrapolations based on JET measurements have greatly overestimated the ITER wall force. It is shown that there are two limiting regimes of AVDEs, and it is explained why the asymmetric wall force is different in the two limits.
Controlled iterative cross-coupling: on the way to the automation of organic synthesis.
Wang, Congyang; Glorius, Frank
2009-01-01
Repetition does not hurt! New strategies for the modulation of the reactivity of difunctional building blocks are discussed, allowing the palladium-catalyzed controlled iterative cross-coupling and, thus, the efficient formation of complex molecules of defined size and structure (see scheme). As in peptide synthesis, this development will enable the automation of these reactions. M(PG)=protected metal, M(act)=metal.
Iterated learning and the evolution of language.
Kirby, Simon; Griffiths, Tom; Smith, Kenny
2014-10-01
Iterated learning describes the process whereby an individual learns their behaviour by exposure to another individual's behaviour, who themselves learnt it in the same way. It can be seen as a key mechanism of cultural evolution. We review various methods for understanding how behaviour is shaped by the iterated learning process: computational agent-based simulations; mathematical modelling; and laboratory experiments in humans and non-human animals. We show how this framework has been used to explain the origins of structure in language, and argue that cultural evolution must be considered alongside biological evolution in explanations of language origins. Copyright © 2014 Elsevier Ltd. All rights reserved.
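A minimal agent-based simulation of the iterated learning process described above might look as follows; the binary variant, the maximum-likelihood learner, and all parameter values are illustrative choices, not taken from the review.

```python
import random

def iterate_chain(generations=20, samples=5, seed=0):
    """One chain of iterated learning: each agent estimates the
    frequency of a binary variant from a handful of utterances
    produced by its teacher, then produces data for the next agent
    with that frequency. The transmission bottleneck (few samples)
    drives the chain toward the absorbing extremes."""
    rng = random.Random(seed)
    p = 0.5                        # initial variant frequency
    history = [p]
    for _ in range(generations):
        data = [rng.random() < p for _ in range(samples)]
        p = sum(data) / samples    # maximum-likelihood learner
        history.append(p)
    return history

h = iterate_chain()
```

Running several chains with different seeds shows many of them fixing at 0.0 or 1.0, a toy illustration of how cultural transmission can amplify weak tendencies in individual learners.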
Iterative tailoring of optical quantum states with homodyne measurements.
Etesse, Jean; Kanseri, Bhaskar; Tualle-Brouri, Rosa
2014-12-01
As they can travel long distances, free-space optical quantum states are good candidates for carrying information in quantum information technology protocols. These states, however, are often complex to produce and require protocols whose success probability drops quickly with an increase of the mean photon number. Here we propose a new protocol for the generation and growth of arbitrary states, based on one-by-one coherent adjunctions of the simple state superposition α|0〉 + β|1〉. Owing to the nature of the protocol, which allows for the use of quantum memories, it can achieve high performance.
Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
NASA Astrophysics Data System (ADS)
Koga, Kusuto; Hayashi, Yuichiro; Hirose, Tomoaki; Oda, Masahiro; Kitasaka, Takayuki; Igami, Tsuyoshi; Nagino, Masato; Mori, Kensaku
2014-03-01
In this paper, we propose an automated biliary tract extraction method from abdominal CT volumes. The biliary tract is the path by which bile is transported from the liver to the duodenum. No method has been reported for the automated extraction of the biliary tract from common contrast-enhanced CT volumes. Our method consists of three steps: (1) extraction of extrahepatic bile duct (EHBD) candidate regions, (2) extraction of intrahepatic bile duct (IHBD) candidate regions, and (3) combination of these candidate regions. The IHBD has linear structures, and the intensities of the IHBD are low in CT volumes. We use a dark linear structure enhancement (DLSE) filter based on a local intensity structure analysis using the eigenvalues of the Hessian matrix for IHBD candidate region extraction. The EHBD region is extracted using a thresholding process and a connected component analysis. In the combination process, we connect the IHBD candidate regions to each EHBD candidate region and select a bile duct region from the connected candidate regions. We applied the proposed method to 22 CT volumes. The average Dice coefficient of the extraction results was 66.7%.
Triple/quadruple patterning layout decomposition via linear programming and iterative rounding
NASA Astrophysics Data System (ADS)
Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.
2017-04-01
As the feature size of the semiconductor technology scales down to 10 nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies, such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL), and directed self-assembly. Due to the delay of EUVL and EBL, triple and even quadruple patterning is considered to be used for lower metal and contact layers with tight pitches. In the process of MPL, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, whereas it is forbidden for contact and via layers. We focus on the application of layout decomposition where stitching is not allowed, such as for contact and via layers. We propose a linear programming (LP) and iterative rounding solving technique to reduce the number of nonintegers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high quality decomposition solutions efficiently while introducing as few conflicts as possible.
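The rounding stage of such an LP-based decomposition can be sketched as below. This is a simplification: a full implementation re-solves the reduced LP after each rounding step, whereas here the remaining fractional values are only re-projected, and the fractional input is invented for the example.

```python
import numpy as np

def iterative_round(x, conflicts):
    """Rounding loop of LP-based layout decomposition. `x[i, k]` is the
    fractional assignment of feature i to mask k that an LP relaxation
    would return; `conflicts` are feature pairs that must use different
    masks. A full implementation re-solves the reduced LP after each
    rounding; here the remaining fractions are only re-projected."""
    x = np.array(x, float)
    n, m = x.shape
    mask = [-1] * n
    while any(c < 0 for c in mask):
        # fix the most confident fractional variable to 1
        i, k = max(((i, k) for i in range(n) if mask[i] < 0
                    for k in range(m)), key=lambda ik: x[ik])
        mask[i] = k
        for a, b in conflicts:         # forbid mask k for its neighbors
            j = b if a == i else a if b == i else None
            if j is not None and mask[j] < 0:
                x[j, k] = 0.0
    return mask

# three mutually conflicting features, three masks (triple patterning)
x0 = [[0.9, 0.05, 0.05],
      [0.4, 0.5, 0.1],
      [0.3, 0.3, 0.4]]
pairs = [(0, 1), (1, 2), (0, 2)]
masks = iterative_round(x0, pairs)
```

The point of the paper's LP-and-rounding scheme is that most variables come out integral from the relaxation, so this loop only has to resolve the few genuinely fractional ones.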
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimation, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebral structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore the data acquisition in DECT.
We show that SPIR-based DECT using one full scan and a second 10-view scan can provide high-quality DECT images and electron density maps as accurate as those of conventional two-full-scan DECT.
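The similarity weighting at the heart of SPIR can be illustrated on a 1D toy. The exponential weight on pixel-value differences follows the description above, but the bandwidth value and the tiny two-material example are assumptions.

```python
import numpy as np

def similarity_estimate(ref, img, h=0.1):
    """Similarity-weighted estimate: pixels with similar values in the
    full-scan reference image `ref` are averaged together in the
    second image `img`. In SPIR this estimate enters an L2 penalty
    regularizing the sparse-view reconstruction; here we only form
    the estimate itself. The bandwidth `h` is illustrative."""
    ref = np.asarray(ref, float).ravel()
    img = np.asarray(img, float).ravel()
    w = np.exp(-(ref[:, None] - ref[None, :]) ** 2 / h ** 2)
    return (w @ img) / w.sum(axis=1)

ref = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])     # two "materials"
noisy = np.array([0.1, -0.1, 0.0, 1.1, 0.9, 1.0])  # second-scan pixels
est = similarity_estimate(ref, noisy)               # averages within materials
```

Because the weights come from the reference image rather than the noisy one, structure is preserved even when the second scan is severely undersampled, which is the intuition behind the 10-view results.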
Hu, Jun; Liu, Zi; Yu, Dong-Jun; Zhang, Yang
2018-02-15
Sequence-order-independent structural comparison, also called structural alignment, of small ligand molecules is often needed for computer-aided virtual drug screening. Although many ligand structure alignment programs have been proposed, most of them build the alignments based on rigid-body shape comparison, which cannot provide atom-specific alignment information or allow structural variation; both abilities are critical to efficient high-throughput virtual screening. We propose a novel ligand comparison algorithm, LS-align, to generate fast and accurate atom-level structural alignments of ligand molecules through an iterative heuristic search of a target function that combines inter-atom distance with mass and chemical bond comparisons. LS-align contains two modules, Rigid-LS-align and Flexi-LS-align, designed for rigid-body and flexible alignments, respectively, where a ligand-size-independent, statistics-based scoring function is developed to evaluate the similarity of ligand molecules relative to random ligand pairs. Large-scale benchmark tests are performed on prioritizing chemical ligands of 102 protein targets involving 1,415,871 candidate compounds from the DUD-E (Database of Useful Decoys: Enhanced) database, where LS-align achieves an average enrichment factor (EF) of 22.0 at the 1% cutoff and an AUC score of 0.75, which are significantly higher than those of other state-of-the-art methods. Detailed data analyses show that the improved performance is mainly attributed to the design of the target function, which combines structural and chemical information to enhance the sensitivity of recognizing subtle differences between ligand molecules, and to the introduction of structural flexibility, which helps capture the conformational changes induced by ligand-receptor binding interactions. These data demonstrate a new avenue to improve virtual screening efficiency through the development of sensitive ligand structural alignments. http://zhanglab.ccmb.med.umich.edu/LS-align/.
njyudj@njust.edu.cn or zhng@umich.edu. Supplementary data are available at Bioinformatics online.
Flexible Method for Developing Tactics, Techniques, and Procedures for Future Capabilities
2009-02-01
levels of ability, military experience, and motivation, (b) number and type of significant events, and (c) other sources of natural variability...research has developed a number of specific instruments designed to aid in this process. Second, the iterative, feed-forward nature of the method allows...FLEX method), but still lack the structured KE approach and iterative, feed-forward nature of the FLEX method. To facilitate decision making
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing, with several orders of magnitude speed-up compared to the exact optimization algorithms.
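The proximal-descent iteration that such learned pursuit architectures unroll can be shown with plain ISTA for the sparse-coding case; a learned version keeps the layer structure below but trains the matrices and threshold at a fixed, small depth. The dictionary and data here are toy assumptions.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, steps):
    """Plain ISTA for min_a 0.5*||y - D a||^2 + lam*||a||_1.
    A learned pursuit network keeps this layer structure,
    a_{k+1} = soft(W y + S a_k, theta), but trains W, S, theta and
    truncates to a fixed small number of layers."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    W = D.T / L
    S = np.eye(D.shape[1]) - D.T @ D / L
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a = soft(W @ y + S @ a, lam / L)
    return a

D = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # toy dictionary
y = D @ np.array([1.0, 0.0])                        # signal with sparse code [1, 0]
a = ista(D, y, lam=0.1, steps=50)                   # recovers a shrunk sparse code
```

The paper's observation is that, once W, S and the thresholds are trained, a handful of such layers approximates what the exact iteration needs hundreds of steps to compute.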
Deep learning and shapes similarity for joint segmentation and tracing single neurons in SEM images
NASA Astrophysics Data System (ADS)
Rao, Qiang; Xiao, Chi; Han, Hua; Chen, Xi; Shen, Lijun; Xie, Qiwei
2017-02-01
Extracting the structure of single neurons is critical for understanding how they function within neural circuits. Recent developments in microscopy techniques, and the widely recognized need for openness and standardization, provide a community resource for automated reconstruction of the dendritic and axonal morphology of single neurons. In order to look into the fine structure of neurons, we use Automated Tape-collecting Ultra Microtome Scanning Electron Microscopy (ATUM-SEM) to obtain image sequences of serial sections of animal brain tissue densely packed with neurons. Different from other neuron reconstruction methods, we propose a method that enhances the SEM images by detecting the neuronal membranes with a deep convolutional neural network (DCNN) and segments single neurons by active contours with group shape similarity. We couple the segmentation and tracing so that they interact with each other through alternating iterations: tracing aids the selection of candidate region patches for active contour segmentation, while segmentation provides the neuron's geometrical features, which improve the robustness of tracing. The tracing model mainly relies on these geometrical features and is updated after the neuron is segmented on each subsequent section. Our method enables the reconstruction of neurons of the drosophila mushroom body, which is cut into serial sections and imaged under SEM. Our method provides an elementary step toward the whole reconstruction of neuronal networks.
Effect of starting microstructure on helium plasma-materials interaction in tungsten
Wang, Kun; Bannister, Mark E.; Meyer, Fred W.; ...
2016-11-24
Here, in a magnetic fusion energy (MFE) device, the plasma-facing materials (PFMs) will be subjected to tremendous fluxes of ions, heat, and neutrons. The response of PFMs to the fusion environment is still not well defined. Tungsten metal is the present candidate of choice for PFM applications such as the divertor in ITER. However, tungsten's microstructure will evolve in service, possibly to include recrystallization. How tungsten's response to plasma exposure evolves with changes in microstructure is presently unknown. In this work, we have exposed hot-worked and recrystallized tungsten to an 80 eV helium ion beam at a temperature of 900 °C to fluences of 2 × 10²³ or 20 × 10²³ He/m². This resulted in a faceted surface structure at the lower fluence or a short but well-developed nanofuzz structure at the higher fluence. There was little difference in the hot-rolled or recrystallized material's near-surface (≤50 nm) bubbles at either fluence. At the higher fluence and deeper depths, the bubble populations of the hot-rolled and recrystallized materials were different, those in the recrystallized material being larger and deeper. This may explain previous high-fluence results showing pronounced differences in recrystallized material. The deeper penetration in recrystallized material also implies that grain boundaries are traps, rather than high-diffusivity paths.
Convergence of quasiparticle self-consistent GW calculations of transition metal monoxides
NASA Astrophysics Data System (ADS)
Das, Suvadip; Coulter, John E.; Manousakis, Efstratios
2015-03-01
We have investigated the electronic structure of the transition metal monoxides MnO, CoO, and NiO in their undistorted rock-salt structure within a fully iterated quasiparticle self-consistent GW (QPscGW) scheme. We have studied the convergence of the QPscGW method, i.e., how the quasiparticle energy eigenvalues and wavefunctions converge as a function of the QPscGW iterations, and compared the converged outputs obtained from different starting wavefunctions. We found that the convergence is slow and that a one-shot G0W0 calculation does not significantly improve the initial eigenvalues and states. In some cases the "path" to convergence may go through energy band reordering which cannot be captured by the simple initial unperturbed Hamiltonian. When a fully iterated solution is reached, the converged density of states, band gaps and magnetic moments of these oxides are found to be only weakly dependent on the choice of the starting wavefunctions and in reasonable agreement with experiment. National High Magnetic Field Laboratory.
NASA Astrophysics Data System (ADS)
Fu, Linyun; Ma, Xiaogang; Zheng, Jin; Goldstein, Justin; Duggan, Brian; West, Patrick; Aulenbach, Steve; Tilmes, Curt; Fox, Peter
2014-05-01
This poster will show how we used a use-case-driven iterative methodology to develop an ontology to represent the content structure and the associated provenance information in a National Climate Assessment (NCA) report of the US Global Change Research Program (USGCRP). We applied the W3C PROV-O ontology to implement a formal representation of provenance. We argue that the use-case-driven, iterative development process and the application of a formal provenance ontology help efficiently incorporate domain knowledge from earth and environmental scientists in a well-structured model interoperable in the context of the Web of Data.
A novel color image encryption scheme using alternate chaotic mapping structure
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Zhao, Yuanyuan; Zhang, Huili; Guo, Kang
2016-07-01
This paper proposes a color image encryption algorithm using an alternate chaotic mapping structure. Initially, we use the R, G and B components to form a matrix. One-dimensional and two-dimensional logistic mappings are then used to generate a chaotic matrix, and the two chaotic mappings are iterated alternately to permute the matrix. In every iteration, an XOR operation is adopted to encrypt the plain-image matrix, followed by a further transformation to diffuse the matrix. Finally, the encrypted color image is obtained from the confused matrix. Theoretical analysis and experimental results have shown that the cryptosystem is secure and practical, and it is suitable for encrypting color images.
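The keystream-XOR step can be sketched with the 1D logistic map alone; the paper's 2D logistic map, the alternation between the two maps, and the permutation/diffusion stages are omitted, and the map parameter and key value are illustrative.

```python
def logistic_stream(x0, r, n):
    """Byte keystream from the 1D logistic map x -> r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_cipher(data, key):
    """XOR the data with the chaotic keystream; the same call decrypts,
    since XOR is its own inverse."""
    ks = logistic_stream(key, 3.99, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = b"RGB pixel data"
cipher = xor_cipher(plain, 0.3)
```

The sensitivity of the logistic map to its initial condition is what makes the key space usable: a slightly different `key` yields an entirely different keystream.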
Conceptual design of ACB-CP for ITER cryogenic system
NASA Astrophysics Data System (ADS)
Jiang, Yongcheng; Xiong, Lianyou; Peng, Nan; Tang, Jiancheng; Liu, Liqiang; Zhang, Liang
2012-06-01
ACB-CP (Auxiliary Cold Box for Cryopumps) supplies the cryopump system with the necessary cryogens in the ITER (International Thermonuclear Experimental Reactor) cryogenic distribution system. The conceptual design of the ACB-CP comprises thermo-hydraulic analysis, 3D structural design and strength checking. Through the thermo-hydraulic analysis, the main specifications of the process valves, pressure safety valves, pipes and heat exchangers can be decided. During the 3D structural design process, vacuum requirements, adiabatic requirements, assembly constraints and maintenance requirements have been considered in arranging the pipes, valves and other components. Strength checking has been performed to verify that the 3D design meets the strength requirements for the ACB-CP.
Yu, Han; Hageman Blair, Rachael
2016-01-01
Understanding community structure in networks has received considerable attention in recent years. Detecting and leveraging community structure holds promise for understanding and potentially intervening with the spread of influence. Network features of this type have important implications in a number of research areas, including marketing, social networks, and biology. However, an overwhelming majority of traditional approaches to community detection cannot readily incorporate information on node attributes. Integrating structural and attribute information is a major challenge. We propose a flexible iterative method, inverse regularized Markov Clustering (irMCL), for network clustering via the manipulation of the transition probability matrix (aka stochastic flow) corresponding to a graph. Similar to traditional Markov Clustering, irMCL iterates between "expand" and "inflate" operations, which aim to strengthen the intra-cluster flow while weakening the inter-cluster flow. Attribute information is directly incorporated into the iterative method through a sigmoid (logistic function) that naturally dampens attribute influence that is contradictory to the stochastic flow through the network. We demonstrate the advantages and flexibility of our approach using simulations and real data. We highlight an application that integrates a breast cancer gene expression data set and a functional network defined via KEGG pathways, revealing significant modules for survival.
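The expand/inflate core that irMCL shares with traditional Markov Clustering can be sketched as follows; the sigmoid attribute damping that distinguishes irMCL is omitted, and the inflation value and toy graph are illustrative.

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=30):
    """Markov Clustering: alternate 'expand' (squaring the column-
    stochastic flow matrix) and 'inflate' (entrywise power followed by
    renormalization). Each node is labeled by the attractor that
    receives its flow."""
    M = np.array(adj, float) + np.eye(len(adj))  # add self-loops
    M /= M.sum(axis=0)
    for _ in range(iters):
        M = M @ M                 # expand: spread flow along paths
        M = M ** inflation        # inflate: favor strong flow
        M /= M.sum(axis=0)
    return [int(np.argmax(M[:, j])) for j in range(len(adj))]

# two triangles joined by a single edge -> two communities
A = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
labels = mcl(A)
```

irMCL's modification would multiply the flow entries by a sigmoid of node-attribute agreement before renormalizing, damping flow between nodes whose attributes disagree.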
Development of strain tolerant thermal barrier coating systems, tasks 1 - 3
NASA Technical Reports Server (NTRS)
Anderson, N. P.; Sheffler, K. D.
1983-01-01
Insulating ceramic thermal barrier coatings can reduce gas turbine airfoil metal temperatures as much as 170 C (about 300 F), providing fuel efficiency improvements greater than one percent and durability improvements of 2 to 3X. The objective was to increase the spalling resistance of zirconia based ceramic turbine coatings. To accomplish this, two baseline and 30 candidate duplex (layered MCrAlY/zirconia based ceramic) coatings were iteratively evaluated microstructurally and in four series of laboratory burner rig tests. This led to the selection of two candidate optimized 0.25 mm (0.010 inch) thick plasma sprayed partially stabilized zirconia ceramics containing six weight percent yttria and applied with two different sets of process parameters over a 0.13 mm (0.005 inch) thick low pressure chamber sprayed MCrAlY bond coat. Both of these coatings demonstrated at least 3X laboratory cyclic spalling life improvement over the baseline systems, as well as cyclic oxidation life equivalent to 15,000 commercial engine flight hours.
Automated Reconstruction of Neural Trees Using Front Re-initialization
Mukherjee, Amit; Stepanyants, Armen
2013-01-01
This paper proposes a greedy algorithm for automated reconstruction of neural arbors from light microscopy stacks of images. The algorithm is based on the minimum cost path method. While the minimum cost path, obtained using the Fast Marching Method, results in a trace with the least cumulative cost between the start and the end points, it is not sufficient for the reconstruction of neural trees. This is because sections of the minimum cost path can erroneously travel through the image background with undetectable detriment to the cumulative cost. To circumvent this problem we propose an algorithm that grows a neural tree from a specified root by iteratively re-initializing the Fast Marching fronts. The speed image used in the Fast Marching Method is generated by computing the average outward flux of the gradient vector flow field. Each iteration of the algorithm produces a candidate extension by allowing the front to travel a specified distance and then tracking from the farthest point of the front back to the tree. A robust likelihood ratio test is used to evaluate the quality of the candidate extension by comparing voxel intensities along the extension to those in the foreground and the background. The qualified extensions are appended to the current tree, the front is re-initialized, and Fast Marching is continued until the stopping criterion is met. To evaluate the performance of the algorithm, we reconstructed 6 stacks of two-photon microscopy images and compared the results to the ground truth reconstructions by using the DIADEM metric. The average comparison score was 0.82 out of 1.0, which is on par with the performance achieved by expert manual tracers. PMID:24386539
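The minimum cost path primitive underlying the tracing step can be illustrated with 4-connected Dijkstra on a small grid, used here as a discrete stand-in for the Fast Marching Method; the cost grid is a toy assumption.

```python
import heapq

def min_cost_path(cost, start, goal):
    """Minimum cumulative-cost path on a grid (4-connected Dijkstra),
    a discrete analogue of tracking back through a Fast Marching
    arrival-time field."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                     # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:                 # walk predecessors back to root
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# a low-cost "neurite" along the top row of a high-cost background
cost = [[1, 1, 1, 1],
        [9, 9, 9, 1],
        [9, 9, 9, 1]]
p = min_cost_path(cost, (0, 0), (2, 3))
```

The failure mode the paper addresses is visible here in reverse: if background costs were only slightly higher than the neurite's, a long shortcut through background could win, which is why the authors add the likelihood ratio test and front re-initialization.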
Implementation of an improved adaptive-implicit method in a thermal compositional simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, T.B.
1988-11-01
A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, improved CPU time by up to 28% over the fully implicit method.
Iterative projection algorithms for ab initio phasing in virus crystallography.
Lo, Victor L; Kingston, Richard L; Millane, Rick P
2016-12-01
Iterative projection algorithms are proposed as a tool for ab initio phasing in virus crystallography. The good global convergence properties of these algorithms, coupled with the spherical shape and high structural redundancy of icosahedral viruses, allows high resolution phases to be determined with no initial phase information. This approach is demonstrated by determining the electron density of a virus crystal with 5-fold non-crystallographic symmetry, starting with only a spherical shell envelope. The electron density obtained is sufficiently accurate for model building. The results indicate that iterative projection algorithms should be routinely applicable in virus crystallography, without the need for ancillary phase information. Copyright © 2016 Elsevier Inc. All rights reserved.
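The iterative projection template the paper builds on can be shown in one dimension with the classic error-reduction algorithm, alternating a Fourier-magnitude projection with a support/positivity projection; virus crystallography adds symmetry averaging to the real-space step, which is omitted here, and the delta-function toy is chosen so the iteration converges in a couple of steps.

```python
import numpy as np

def error_reduction(mag, support, iters=20, seed=1):
    """Error reduction, the simplest iterative projection algorithm:
    alternately enforce the measured Fourier magnitudes and the
    real-space support/positivity constraint, starting from a
    random density."""
    rng = np.random.default_rng(seed)
    x = rng.random(len(mag))
    for _ in range(iters):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))              # Fourier projection
        x = np.real(np.fft.ifft(X))
        x = np.where(support, np.maximum(x, 0.0), 0.0)  # support projection
    return x

true = np.array([3.0, 0, 0, 0, 0, 0, 0, 0])   # a point "density"
mag = np.abs(np.fft.fft(true))                 # measured magnitudes only
support = np.arange(8) == 0                    # known support envelope
x = error_reduction(mag, support)              # phases recovered from scratch
```

Real applications replace error reduction with better-converging members of the same family (hybrid input-output, difference map), but the two-projection structure is identical, and a tight envelope such as the spherical shell used in the paper plays the role of `support`.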
Further investigation on "A multiplicative regularization for force reconstruction"
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.
Status of the 1 MeV Accelerator Design for ITER NBI
NASA Astrophysics Data System (ADS)
Kuriyama, M.; Boilson, D.; Hemsworth, R.; Svensson, L.; Graceffa, J.; Schunke, B.; Decamps, H.; Tanaka, M.; Bonicelli, T.; Masiello, A.; Bigi, M.; Chitarin, G.; Luchetta, A.; Marcuzzi, D.; Pasqualotto, R.; Pomaro, N.; Serianni, G.; Sonato, P.; Toigo, V.; Zaccaria, P.; Kraus, W.; Franzen, P.; Heinemann, B.; Inoue, T.; Watanabe, K.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; De Esch, H.
2011-09-01
The beam source of the neutral beam heating/current drive system for ITER must accelerate a 40 A D- negative ion beam to 1 MeV for 3600 s. To realize this beam source, design and R&D work is under way at many institutions under the coordination of the ITER Organization. Development of the key issues of the ion source, including source plasma uniformity and suppression of co-extracted electrons in D beam operation, including after long beam durations of over a few hundred seconds, is progressing mainly at IPP with the BATMAN, MANITU and RADI facilities. In the near future, ELISE, which will test a half-size version of the ITER ion source, will start operation in 2011, and then SPIDER, which will demonstrate negative ion production and extraction with the same size and structure as the ITER ion source, will start operation in 2014 as part of the NBTF. Development of the accelerator is progressing mainly at JAEA with the MeV test facility, and computer simulation of beam optics is also being developed at JAEA, CEA and RFX. The full ITER heating and current drive beam performance will be demonstrated in MITICA, which will start operation in 2016 as part of the NBTF.
Iterative Neighbour-Information Gathering for Ranking Nodes in Complex Networks
NASA Astrophysics Data System (ADS)
Xu, Shuang; Wang, Pei; Lü, Jinhu
2017-01-01
Designing node influence ranking algorithms can provide insights into network dynamics, functions and structures. Increasing evidence reveals that a node's spreading ability largely depends on its neighbours. We introduce an iterative neighbour-information gathering (Ing) process with three parameters: a transformation matrix, a priori information and an iteration time. The Ing process iteratively combines a priori information from neighbours via the transformation matrix, and iteratively assigns an Ing score to each node to evaluate its influence. The algorithm is applicable to any type of network, and includes some traditional centralities as special cases, such as degree, semi-local and LeaderRank. The Ing process converges in strongly connected networks at a speed depending on the first two largest eigenvalues of the transformation matrix. Interestingly, the eigenvector centrality corresponds to a limit case of the algorithm. By comparison with eight renowned centralities, simulations of the susceptible-infected-removed (SIR) model on real-world networks reveal that the Ing can offer more exact rankings, even without a priori information. We also observe that an optimal iteration time always exists that best characterizes node influence. The proposed algorithms bridge the gaps among some existing measures, and may have potential applications in infectious disease control and the design of optimal information spreading strategies.
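A minimal sketch of the described iteration, assuming the transformation matrix is simply the adjacency matrix and the a priori information is a score vector (the paper's exact choices may differ): each node repeatedly gathers its neighbours' current scores, and in the strongly connected limit the scores approach the dominant eigenvector, i.e. eigenvector centrality.

```python
import numpy as np

def ing_scores(A, prior, t):
    """Iterative neighbour-information gathering (sketch).
    A     : transformation matrix (here: adjacency matrix, an assumption)
    prior : a priori information vector
    t     : iteration time
    Each step aggregates neighbours' scores through A; normalisation
    keeps the scores bounded without changing the ranking."""
    s = np.asarray(prior, dtype=float)
    for _ in range(t):
        s = A @ s
        s = s / np.linalg.norm(s)
    return s
```

The convergence rate is governed by the ratio of the second to the first eigenvalue of the transformation matrix, matching the abstract's statement; for large t the ranking coincides with eigenvector centrality.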
How do IMGs compare with Canadian medical school graduates in a family practice residency program?
Andrew, Rodney F.
2010-01-01
ABSTRACT OBJECTIVE To compare international medical graduates (IMGs) with Canadian medical school graduates in a family practice residency program. DESIGN Analysis of the results of the in-training evaluation reports (ITERs) and the Certification in Family Medicine (CCFP) examination results for 2 cohorts of IMGs and Canadian-trained graduates between the years 2006 and 2008. SETTING St Paul’s Hospital (SPH) in Vancouver, BC, a training site of the University of British Columbia (UBC) Family Practice Residency Program. PARTICIPANTS In-training evaluation reports were examined for 12 first-year and 9 second-year Canadian-trained residents at the SPH site, and 12 first-year and 12 second-year IMG residents at the IMG site at SPH; CCFP examination results were reviewed for all UBC family practice residents who took the May 2008 examination and disclosed their results. MAIN OUTCOME MEASURES Pass or fail rates on the CCFP examination; proportions of evaluations in each group of residents given each of the following designations: exceeds expectations, meets expectations, or needs improvement. RESULTS The second-year SPH Canadian-trained residents had a greater proportion of exceeds expectations designations than the second-year IMGs. For the first-year residents, both the SPH Canadian graduates and IMGs had similar results in all 3 categories. Combining the results of the 2 cohorts, the Canadian-trained residents had 310 (99%) ITERs that were designated as either exceeds expectations or meets expectations, and only 3 (1%) ITERs were in the needs improvement category. The IMG results were 362 (97.6%) ITERs in the exceeds expectations or meets expectations categories; 9 (2%) were in the needs improvement category. Statistically these are not significant differences. Seven of the 12 (58%) IMG candidates passed the CCFP examination compared with 59 of 62 (95%) of the UBC family practice residents. 
CONCLUSION The IMG residents compared favourably with their Canadian-trained colleagues when comparing ITERs but not in passing the CCFP examination. Further research is needed to elucidate these results. PMID:20841570
NASA Astrophysics Data System (ADS)
Iotti, Robert
2015-04-01
ITER is an international experimental facility being built by seven Parties to demonstrate the long term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO, as ``Contributions-in-Cash.'' Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand on certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. 
Had the necessary conditions for success been present at the beginning, ITER would be in far better shape. As is, it can provide good lessons to avoid the same problems in the future. The ITER Council is now applying those lessons. A very experienced new Director General has just been appointed. He has instituted a number of drastic changes, but still within the governance of the JIA. Will these changes be effective? Only time will tell, but I am optimistic.
A Parallel Fast Sweeping Method for the Eikonal Equation
NASA Astrophysics Data System (ADS)
Baker, B.
2017-12-01
Recently, there has been an exciting emergence of probabilistic methods for travel time tomography. Unlike gradient-based optimization strategies, probabilistic tomographic methods are resistant to becoming trapped in a local minimum and provide a much better quantification of parameter resolution than, say, appealing to ray density or performing checkerboard reconstruction tests. The benefits associated with random sampling methods, however, are only realized by successive computation of predicted travel times in, potentially, strongly heterogeneous media. To this end, this abstract is concerned with expediting the solution of the Eikonal equation. While many Eikonal solvers use a fast marching method, the proposed solver will use the iterative fast sweeping method because the eight fixed sweep orderings in each iteration are natural targets for parallelization. To reduce the number of iterations and grid points required, the high-accuracy finite difference stencil of Nobel et al., 2014 is implemented. A directed acyclic graph (DAG) is created with a priori knowledge of the sweep ordering and finite difference stencil. By performing a topological sort of the DAG, sets of independent nodes are identified as candidates for concurrent updating. Additionally, the proposed solver will also address scalability during earthquake relocation, a necessary step in local and regional earthquake tomography and a barrier to extending probabilistic methods from active source to passive source applications, by introducing an asynchronous parallel forward solve phase for all receivers in the network. Synthetic examples using the SEG over-thrust model will be presented.
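For reference, a serial first-order fast sweeping iteration in 2-D looks as follows; the paper targets a 3-D, high-accuracy, parallel variant, so this is only a sketch of the baseline method. It uses the standard Godunov upwind update and the four fixed sweep orderings of 2-D (which become eight in 3-D, the parallelization targets named above):

```python
import numpy as np

def fast_sweep_eikonal(speed, h, sources, n_pass=5):
    """First-order fast sweeping solver for |grad T| = 1/speed on a
    2-D grid with spacing h. Gauss-Seidel updates are applied in the
    four fixed sweep orderings until the passes are exhausted."""
    n, m = speed.shape
    T = np.full((n, m), np.inf)
    for (i, j) in sources:
        T[i, j] = 0.0
    orders = [(range(n), range(m)),
              (range(n), range(m - 1, -1, -1)),
              (range(n - 1, -1, -1), range(m)),
              (range(n - 1, -1, -1), range(m - 1, -1, -1))]
    for _ in range(n_pass):
        for rows, cols in orders:
            for i in rows:
                for j in cols:
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < n - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < m - 1 else np.inf)
                    if np.isinf(a) and np.isinf(b):
                        continue  # front has not reached this node yet
                    f = h / speed[i, j]
                    if abs(a - b) >= f:   # causal one-sided update
                        t_new = min(a, b) + f
                    else:                 # two-sided Godunov update
                        t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t_new)
    return T
```

Because each sweep ordering visits nodes in a fixed order, the data dependencies form exactly the kind of DAG the abstract proposes to sort topologically for concurrent updates.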
NASA Astrophysics Data System (ADS)
Federici, Gianfranco; Raffray, A. René
1997-04-01
The transient thermal model RACLETTE (an acronym for Rate Analysis Code for pLasma Energy Transfer Transient Evaluation) described in part I of this paper is applied here to analyse the heat transfer and erosion effects of various slow (100 ms-10 s) high power energy transients on the actively cooled plasma facing components (PFCs) of the International Thermonuclear Experimental Reactor (ITER). These have a strong bearing on the PFC design and need careful analysis. The relevant parameters affecting the heat transfer during the plasma excursions are established. The temperature variation with time and space is evaluated together with the extent of vaporisation and melting (the latter only for metals) for the different candidate armour materials considered for the design (i.e., Be for the primary first wall, Be and CFCs for the limiter, and Be, W, and CFCs for the divertor plates), including for certain cases low-density vapour shielding effects. The critical heat flux, the change of the coolant parameters and the possible severe degradation of the coolant heat removal capability that could result under certain conditions during these transients, for example for the limiter, are also evaluated. Based on the results, the design implications for the heat removal performance and erosion damage of the various ITER PFCs are critically discussed and some recommendations are made for the selection of the most adequate protection materials and optimum armour thickness.
NASA Astrophysics Data System (ADS)
Qian, J. P.; Garofalo, A. M.; Gong, X. Z.; Ren, Q. L.; Ding, S. Y.; Solomon, W. M.; Xu, G. S.; Grierson, B. A.; Guo, W. F.; Holcomb, C. T.; McClenaghan, J.; McKee, G. R.; Pan, C. K.; Huang, J.; Staebler, G. M.; Wan, B. N.
2017-05-01
Recent EAST/DIII-D joint experiments on the high poloidal beta (β_P) regime in DIII-D have extended operation with internal transport barriers (ITBs) and excellent energy confinement (H98y2 ~ 1.6) to higher plasma current, for lower q95 ⩽ 7.0, and more balanced neutral beam injection (NBI) (torque injection < 2 Nm), for lower plasma rotation than previous results (Garofalo et al, IAEA 2014; Gong et al 2014 IAEA Int. Conf. on Fusion Energy). Transport analysis and experimental measurements at low toroidal rotation suggest that the E × B shear effect is not key to the ITB formation in these high β_P discharges. Experiments and TGLF modeling show that the Shafranov shift has a key stabilizing effect on turbulence. Extrapolation of the DIII-D results using a 0D model shows that with the improved confinement, the high bootstrap fraction regime could achieve fusion gain Q = 5 in ITER at β_N ~ 2.9 and q95 ~ 7. With the optimization of q(0), the required improved confinement is achievable when using 1.5D TGLF-SAT1 for transport simulations. Results reported in this paper suggest that the DIII-D high β_P scenario could be a candidate for ITER steady state operation.
Plasma cleaning of ITER first mirrors
NASA Astrophysics Data System (ADS)
Moser, L.; Marot, L.; Steiner, R.; Reichle, R.; Leipold, F.; Vorpahl, C.; Le Guern, F.; Walach, U.; Alberti, S.; Furno, I.; Yan, R.; Peng, J.; Ben Yaala, M.; Meyer, E.
2017-12-01
Nuclear fusion is an extremely attractive option for future generations to meet the strong increase in energy consumption. Proper control of the fusion plasma is mandatory to reach the ambitious objectives set while preserving the machine’s integrity, which requires a large number of plasma diagnostic systems. Due to the large neutron flux expected in the International Thermonuclear Experimental Reactor (ITER), regular windows or fibre optics are unusable and were replaced by so-called metallic first mirrors (FMs) embedded in the neutron shielding, forming an optical labyrinth. Materials eroded from the reactor first wall through physical or chemical sputtering will migrate and be deposited onto the mirrors. Mirrors subject to net deposition will suffer from reflectivity losses due to the deposition of impurities. Cleaning systems for metallic FMs are required in more than 20 optical diagnostic systems in ITER. Plasma cleaning using radio frequency (RF) generated plasmas is currently considered the most promising in situ cleaning technique. An update of recent results obtained with this technique will be presented. These include the demonstration of cleaning of several deposit types (beryllium, tungsten and a beryllium proxy, i.e. aluminium) at 13.56 or 60 MHz, as well as large scale cleaning (mirror size: 200 × 300 mm2). Tests under a strong magnetic field up to 3.5 T in the laboratory and first experiments of RF plasma cleaning in the EAST tokamak will also be discussed. A specific focus will be given to repetitive cleaning experiments performed on several FM material candidates.
NASA Astrophysics Data System (ADS)
Chen, Jiaxi; Li, Junmin
2018-02-01
In this paper, we investigate the perfect consensus problem for second-order linearly parameterised multi-agent systems (MAS) with an imprecise communication topology structure. Takagi-Sugeno (T-S) fuzzy models are presented to describe the imprecise communication topology structure of leader-following MAS, and a distributed adaptive iterative learning control protocol is proposed with the dynamics of the leader unknown to any of the agents. The proposed protocol guarantees that the follower agents can track the leader perfectly on [0,T]. Under the alignment condition, a sufficient condition for consensus of the closed-loop MAS is given based on Lyapunov stability theory. Finally, a numerical example and a multiple pendulum system are given to illustrate the effectiveness of the proposed algorithm.
A three-dimensional wide-angle BPM for optical waveguide structures.
Ma, Changbao; Van Keuren, Edward
2007-01-22
Algorithms for effective modeling of optical propagation in three- dimensional waveguide structures are critical for the design of photonic devices. We present a three-dimensional (3-D) wide-angle beam propagation method (WA-BPM) using Hoekstra's scheme. A sparse matrix algebraic equation is formed and solved using iterative methods. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation, along with a technique for shifting the simulation window to reduce the dimension of the numerical equation and a threshold technique to further ensure its convergence. These techniques can ensure the implementation of iterative methods for waveguide structures by relaxing the convergence problem, which will further enable us to develop higher-order 3-D WA-BPMs based on Padé approximant operators.
NASA Technical Reports Server (NTRS)
Hamazaki, Takashi
1992-01-01
This paper describes an architecture for realizing high quality production schedules. Although quality is one of the most important aspects of production scheduling, it is difficult, even for a user, to specify precisely. However, it is also true that the decision as to whether a schedule is good or bad can only be made by the user. This paper proposes the following: (1) the quality of a schedule can be represented in the form of quality factors, i.e. constraints and objectives of the domain, and their structure; (2) quality factors and their structure can be used for decision making at local decision points during the scheduling process; and (3) they can be defined via iteration of user specification processes.
Numerical solution of quadratic matrix equations for free vibration analysis of structures
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1975-01-01
This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
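The quadratic eigenproblem (λ²M + λC + K)x = 0 named above is commonly handled by linearizing to a generalized pencil and applying shifted inverse iteration. The sketch below illustrates that combination under stated assumptions: the paper's Sturm sequence bookkeeping and simultaneous iteration variants are not reproduced, the dense inverse is only acceptable for small illustrative systems, and real eigenvalues are assumed (heavily damped case):

```python
import numpy as np

def quad_eig_nearest(M, C, K, shift=0.0, n_iter=50):
    """Eigenvalue of (lam^2 M + lam C + K) x = 0 nearest `shift`,
    via companion linearization A z = lam B z with z = [x, lam x],
    followed by shifted inverse iteration (sketch only)."""
    n = M.shape[0]
    I = np.eye(n)
    Z = np.zeros((n, n))
    # companion linearization of the quadratic pencil
    A = np.block([[Z, I], [-K, -C]])
    B = np.block([[I, Z], [Z, M]])
    Minv = np.linalg.inv(A - shift * B)  # small-n sketch; use an LU solve in practice
    z = np.ones(2 * n)
    for _ in range(n_iter):
        z = Minv @ (B @ z)
        z = z / np.linalg.norm(z)
    # Rayleigh quotient for the generalized pencil
    lam = (z @ (A @ z)) / (z @ (B @ z))
    return lam, z[:n]
```

The second block row of the linearization enforces -Kx - λCx = λ²Mx, i.e. exactly the quadratic equation, so eigenpairs of the pencil map one-to-one onto the quadratic eigenpairs.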
A three-dimensional wide-angle BPM for optical waveguide structures
NASA Astrophysics Data System (ADS)
Ma, Changbao; van Keuren, Edward
2007-01-01
Algorithms for effective modeling of optical propagation in three- dimensional waveguide structures are critical for the design of photonic devices. We present a three-dimensional (3-D) wide-angle beam propagation method (WA-BPM) using Hoekstra’s scheme. A sparse matrix algebraic equation is formed and solved using iterative methods. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation, along with a technique for shifting the simulation window to reduce the dimension of the numerical equation and a threshold technique to further ensure its convergence. These techniques can ensure the implementation of iterative methods for waveguide structures by relaxing the convergence problem, which will further enable us to develop higher-order 3-D WA-BPMs based on Padé approximant operators.
Assays for the Identification and Prioritization of Drug Candidates for Spinal Muscular Atrophy
Cherry, Jonathan J.; Kobayashi, Dione T.; Lynes, Maureen M.; Naryshkin, Nikolai N.; Tiziano, Francesco Danilo; Zaworski, Phillip G.; Rubin, Lee L.
2014-01-01
Abstract Spinal muscular atrophy (SMA) is an autosomal recessive genetic disorder resulting in degeneration of α-motor neurons of the anterior horn and proximal muscle weakness. It is the leading cause of genetic mortality in children younger than 2 years. It affects ∼1 in 11,000 live births. In 95% of cases, SMA is caused by homozygous deletion of the SMN1 gene. In addition, all patients possess at least one copy of an almost identical gene called SMN2. A single point mutation in exon 7 of the SMN2 gene results in the production of low levels of full-length survival of motor neuron (SMN) protein at amounts insufficient to compensate for the loss of the SMN1 gene. Although no drug treatments are available for SMA, a number of drug discovery and development programs are ongoing, with several currently in clinical trials. This review describes the assays used to identify candidate drugs for SMA that modulate SMN2 gene expression by various means. Specifically, it discusses the use of high-throughput screening to identify candidate molecules from primary screens, as well as the technical aspects of a number of widely used secondary assays to assess SMN messenger ribonucleic acid (mRNA) and protein expression, localization, and function. Finally, it describes the process of iterative drug optimization utilized during preclinical SMA drug development to identify clinical candidates for testing in human clinical trials. PMID:25147906
Advances in Global Adjoint Tomography - Data Assimilation and Inversion Strategy
NASA Astrophysics Data System (ADS)
Ruan, Y.; Lei, W.; Lefebvre, M. P.; Modrak, R. T.; Smith, J. A.; Bozdag, E.; Tromp, J.
2016-12-01
Seismic tomography provides the most direct way to understand Earth's interior by imaging elastic heterogeneity, anisotropy and anelasticity. Resolving the fine structure of these properties requires accurate simulations of seismic wave propagation in complex 3-D Earth models. On the supercomputer "Titan" at Oak Ridge National Laboratory, we are employing a spectral-element method (Komatitsch & Tromp 1999, 2002) in combination with an adjoint method (Tromp et al., 2005) to accurately calculate theoretical seismograms and Frechet derivatives. Using 253 carefully selected events, Bozdag et al. (2016) iteratively determined a transversely isotropic earth model (GLAD_M15) using 15 preconditioned conjugate-gradient iterations. To obtain higher resolution images of the mantle, we have expanded our database to more than 4,220 Mw 5.0-7.0 events that occurred between 1995 and 2014. Instead of using the entire database all at once, we choose to draw subsets of about 1,000 events from our database for each iteration to achieve a faster convergence rate with limited computing resources. To provide good coverage of deep structures, we selected approximately 700 deep and intermediate earthquakes and 300 shallow events to start a new iteration. We reinverted the CMT solutions of these events in the latest model, and recalculated synthetic seismograms. Using the synthetics as reference seismograms, we selected time windows that show good agreement with data and made measurements within the windows. From the measurements we further assess the overall quality of each event and station, and exclude bad measurements based upon certain criteria. So far, with very conservative criteria, we have assimilated more than 8.0 million windows from 1,000 earthquakes in three period bands for the new iteration. For subsequent iterations, we will change the period bands and window selection criteria to include more windows. In the inversion, dense array data (e.g., USArray) usually dominate model updates. 
In order to better handle this issue, we introduced weighting of stations and events based upon their relative distances and showed that the contribution from dense arrays is better balanced in the Frechet derivatives. We will present a summary of this form of data assimilation and preliminary results of the first few iterations.
A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems
Ying, Wenjun; Henriquez, Craig S.
2013-01-01
This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with a standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600
Xu, J C; Wang, L; Xu, G S; Luo, G N; Yao, D M; Li, Q; Cao, L; Chen, L; Zhang, W; Liu, S C; Wang, H Q; Jia, M N; Feng, W; Deng, G Z; Hu, L Q; Wan, B N; Li, J; Sun, Y W; Guo, H Y
2016-08-01
In order to withstand rapid increase in particle and power impact onto the divertor and demonstrate the feasibility of the ITER design under long pulse operation, the upper divertor of the EAST tokamak has been upgraded to actively water-cooled, ITER-like tungsten mono-block structure since the 2014 campaign, which is the first attempt for ITER on the tokamak devices. Therefore, a new divertor Langmuir probe diagnostic system (DivLP) was designed and successfully upgraded on the tungsten divertor to obtain the plasma parameters in the divertor region such as electron temperature, electron density, particle and heat fluxes. More specifically, two identical triple probe arrays have been installed at two ports of different toroidal positions (112.5-deg separated toroidally), which can provide fundamental data to study the toroidal asymmetry of divertor power deposition and related 3-dimension (3D) physics, as induced by resonant magnetic perturbations, lower hybrid wave, and so on. The shape of graphite tip and fixed structure of the probe are designed according to the structure of the upper tungsten divertor. The ceramic support, small graphite tip, and proper connector installed make it possible to be successfully installed in the very narrow interval between the cassette body and tungsten mono-block, i.e., 13.5 mm. It was demonstrated during the 2014 and 2015 commissioning campaigns that the newly upgraded divertor Langmuir probe diagnostic system is successful. Representative experimental data are given and discussed for the DivLP measurements, then proving its availability and reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J. C.; Jia, M. N.; Feng, W.
2016-08-15
In order to withstand rapid increase in particle and power impact onto the divertor and demonstrate the feasibility of the ITER design under long pulse operation, the upper divertor of the EAST tokamak has been upgraded to actively water-cooled, ITER-like tungsten mono-block structure since the 2014 campaign, which is the first attempt for ITER on the tokamak devices. Therefore, a new divertor Langmuir probe diagnostic system (DivLP) was designed and successfully upgraded on the tungsten divertor to obtain the plasma parameters in the divertor region such as electron temperature, electron density, particle and heat fluxes. More specifically, two identical triple probe arrays have been installed at two ports of different toroidal positions (112.5-deg separated toroidally), which can provide fundamental data to study the toroidal asymmetry of divertor power deposition and related 3-dimension (3D) physics, as induced by resonant magnetic perturbations, lower hybrid wave, and so on. The shape of graphite tip and fixed structure of the probe are designed according to the structure of the upper tungsten divertor. The ceramic support, small graphite tip, and proper connector installed make it possible to be successfully installed in the very narrow interval between the cassette body and tungsten mono-block, i.e., 13.5 mm. It was demonstrated during the 2014 and 2015 commissioning campaigns that the newly upgraded divertor Langmuir probe diagnostic system is successful. Representative experimental data are given and discussed for the DivLP measurements, then proving its availability and reliability.
Banerjee, Amartya S.; Lin, Lin; Hu, Wei; ...
2016-10-21
The Discontinuous Galerkin (DG) electronic structure method employs an adaptive local basis (ALB) set to solve the Kohn-Sham equations of density functional theory in a discontinuous Galerkin framework. The adaptive local basis is generated on-the-fly to capture the local material physics and can systematically attain chemical accuracy with only a few tens of degrees of freedom per atom. A central issue for large-scale calculations, however, is the computation of the electron density (and subsequently, ground state properties) from the discretized Hamiltonian in an efficient and scalable manner. We show in this work how Chebyshev polynomial filtered subspace iteration (CheFSI) can be used to address this issue and push the envelope in large-scale materials simulations in a discontinuous Galerkin framework. We describe how the subspace filtering steps can be performed in an efficient and scalable manner using a two-dimensional parallelization scheme, thanks to the orthogonality of the DG basis set and block-sparse structure of the DG Hamiltonian matrix. The on-the-fly nature of the ALB functions requires additional care in carrying out the subspace iterations. We demonstrate the parallel scalability of the DG-CheFSI approach in calculations of large-scale two-dimensional graphene sheets and bulk three-dimensional lithium-ion electrolyte systems. In conclusion, employing 55 296 computational cores, the time per self-consistent field iteration for a sample of the bulk 3D electrolyte containing 8586 atoms is 90 s, and the time for a graphene sheet containing 11 520 atoms is 75 s.
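A minimal dense sketch of the CheFSI idea: a Chebyshev polynomial in H damps the unwanted part of the spectrum and amplifies the wanted low end, after which the filtered block is orthonormalized and rotated by a Rayleigh-Ritz step. For simplicity the filter bounds are taken from an exact diagonalization, purely for illustration; production codes (including DG-CheFSI) estimate them from Lanczos/Ritz values, and exploit sparsity rather than dense algebra:

```python
import numpy as np

def chebyshev_filter(H, V, degree, a, b):
    """Apply a degree-m Chebyshev polynomial in H to the block V,
    damping eigencomponents in [a, b] and amplifying those below a,
    via the standard three-term recurrence."""
    e = (b - a) / 2.0
    c = (b + a) / 2.0
    Y = (H @ V - c * V) / e
    for _ in range(2, degree + 1):
        Y_new = (2.0 / e) * (H @ Y - c * Y) - V
        V, Y = Y, Y_new
    return Y

def chefsi(H, k, degree=10, n_iter=15, seed=0):
    """Chebyshev-filtered subspace iteration for the lowest k eigenpairs
    of a symmetric H (dense sketch). Spectrum bounds come from an exact
    diagonalization here, an illustrative shortcut only."""
    n = H.shape[0]
    w = np.linalg.eigvalsh(H)              # bounds only, for illustration
    a, b = (w[k - 1] + w[k]) / 2.0, w[-1]  # filter interval = unwanted part
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.standard_normal((n, k)))[0]
    for _ in range(n_iter):
        V = chebyshev_filter(H, V, degree, a, b)
        V, _ = np.linalg.qr(V)             # restore orthonormality
        theta, S = np.linalg.eigh(V.T @ H @ V)  # Rayleigh-Ritz
        V = V @ S
    return theta, V
```

Each filter application needs only matrix-block products with H, which is what makes the approach attractive for the block-sparse DG Hamiltonian described above.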
Shock and vibration response of multistage structure
NASA Technical Reports Server (NTRS)
Lee, S. Y.; Liyeos, J. G.; Tang, S. S.
1968-01-01
Study of the shock and vibration response of a multistage structure employing analytical lumped-mass, continuous-beam, multimode, and matrix-iteration methods. The study was made of the load paths, transmissibility, and attenuation properties along the longitudinal axis of a long, slender structure with increasing degree of complexity.
A Symmetric Positive Definite Formulation for Monolithic Fluid Structure Interaction
2010-08-09
more likely to converge than simply iterating the partitioned approach to convergence in a simple Gauss-Seidel manner. Our approach allows the use of ...conditions in a second step. These approaches can also be iterated within a given time step for increased stability, noting that in the limit, if one ...converges, one obtains a monolithic (albeit expensive) approach. Other approaches construct strongly coupled systems and then solve them in one of several
Experimental and numerical investigation of HyperVapotron heat transfer
NASA Astrophysics Data System (ADS)
Wang, Weihua; Deng, Haifei; Huang, Shenghong; Chu, Delin; Yang, Bin; Mei, Luoqin; Pan, Baoguo
2014-12-01
The divertor first wall and neutral beam injection (NBI) components of tokamak devices require high heat flux removal, up to 20-30 MW m-2, for future fusion reactors. The water-cooled HyperVapotron (HV) structure, which relies on internal grooves or fins and boiling heat transfer to maximize heat transfer capability, is the most promising candidate. HV devices, which are able to transfer large heat fluxes (1-20 MW m-2) efficiently, have therefore been developed specifically for this application. Until recently, there had been few attempts to observe the detailed bubble characteristics and vortex evolution of the coolant flowing inside their various parts, or to understand the complex internal two-phase heat transfer mechanism behind the vapotron effect. This research built the experimental facilities HyperVapotron Loop-I (HVL-I) and Pressure Water HyperVapotron Loop-II (PWHL-II) to carry out subcooled boiling experiments with flow parameters, test-section geometries and surface heat fluxes similar to those of the ITER-like first wall and NBI components (EAST and MAST). The multiphase flow and heat transfer phenomena on the surfaces of grooves and triangular fins as subcooled water flowed through were observed and measured with planar laser-induced fluorescence (PLIF) and high-speed photography (HSP). Particle image velocimetry (PIV) was used to reveal vortex formation, the flow structure that promotes the vapotron effect during subcooled boiling. Coolant flow data contributing to the understanding of the vapotron phenomenon and to the assessment of how design and operating conditions might affect the thermal performance of the devices were collected and analysed. 
The subcooled flow boiling model and the HV heat transfer methods adopted in the computational fluid dynamics (CFD) code under consideration were evaluated by comparing calculated wall temperatures with the experimentally measured values. The bubble and vortex characteristics in the HV were found to depend heavily on the internal geometry, flow conditions and input heat flux. The latent heat of evaporation is the primary heat transfer mechanism of HV flow under high heat flux, and heat transfer through convection is very limited: almost 70% of the wall heat flux goes into vapour production. These relationships between the flow phenomena and the thermal performance of the HV device are essential for studying the mechanisms of flow structure alteration, for design optimization, and for improving the water-cooling structures and plasma-facing components of ITER-like devices for future fusion reactors.
A Mathematical Basis for the Safety Analysis of Conflict Prevention Algorithms
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Butler, Ricky W.; Munoz, Cesar A.; Dowek, Gilles
2009-01-01
In air traffic management systems, a conflict prevention system examines the traffic and provides ranges of guidance maneuvers that avoid conflicts. This guidance takes the form of ranges of track angles, vertical speeds, or ground speeds. These ranges may be assembled into prevention bands: maneuvers that should not be taken. Unlike conflict resolution systems, which presume that the aircraft already has a conflict, conflict prevention systems show conflicts for all maneuvers. Without conflict prevention information, a pilot might perform a maneuver that causes a near-term conflict. Because near-term conflicts can lead to safety concerns, strong verification of correct operation is required. This paper presents a mathematical framework to analyze the correctness of algorithms that produce conflict prevention information. This paper examines multiple mathematical approaches: iterative, vector algebraic, and trigonometric. The correctness theories are structured first to analyze conflict prevention information for all aircraft. Next, these theories are augmented to consider aircraft which will create a conflict within a given lookahead time. Certain key functions for a candidate algorithm that satisfy this mathematical basis are presented; however, the proof that a full algorithm using these functions completely satisfies the definition of safety is not provided.
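A typical building block underlying such algorithms is a closest-point-of-approach conflict test over a lookahead time; a generic sketch (not one of the paper's formally verified functions) is:

```python
def horizontal_conflict(rel_pos, rel_vel, D, T):
    """Closest-point-of-approach test: True if the horizontal separation
    of two aircraft falls below D within the lookahead time T, given
    their relative position and velocity. A generic geometric building
    block of prevention-band algorithms, not the paper's verified code."""
    px, py = rel_pos
    vx, vy = rel_vel
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                          # no relative motion: constant separation
        return px * px + py * py < D * D
    t_cpa = -(px * vx + py * vy) / v2      # unconstrained time of closest approach
    t = max(0.0, min(T, t_cpa))            # clamp into the lookahead window
    dx, dy = px + t * vx, py + t * vy      # relative position at that time
    return dx * dx + dy * dy < D * D
```

A band algorithm would sweep a candidate maneuver (e.g. track angle) through its range and mark the values for which such a test reports a conflict.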
NASA Astrophysics Data System (ADS)
Jenuwine, Natalia M.; Mahesh, Sunny N.; Furst, Jacob D.; Raicu, Daniela S.
2018-02-01
Early detection of lung nodules from CT scans is key to improving lung cancer treatment, but poses a significant challenge for radiologists due to the high throughput required of them. Computer-Aided Detection (CADe) systems aim to automatically detect these nodules with computer algorithms, thus improving diagnosis. These systems typically use a candidate selection step, which identifies all objects that resemble nodules, followed by a machine learning classifier which separates true nodules from false positives. We create a CADe system that uses a 3D convolutional neural network (CNN) to detect nodules in CT scans without a candidate selection step. Using data from the LIDC database, we train a 3D CNN to analyze subvolumes from anywhere within a CT scan and output the probability that each subvolume contains a nodule. Once trained, we apply our CNN to detect nodules from entire scans, by systematically dividing the scan into overlapping subvolumes which we input into the CNN to obtain the corresponding probabilities. By enabling our network to process an entire scan, we expect to streamline the detection process while maintaining its effectiveness. Our results imply that with continued training using an iterative training scheme, the one-step approach has the potential to be highly effective.
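A minimal sketch of the scan-tiling step, assuming a trained classifier is available as a callable (all names, sizes and strides here are hypothetical, not taken from the paper):

```python
import numpy as np

def iter_subvolumes(shape, size, stride):
    """Yield start indices of overlapping cubic subvolumes that tile a
    scan of the given shape (hypothetical tiling parameters)."""
    for z in range(0, shape[0] - size + 1, stride):
        for y in range(0, shape[1] - size + 1, stride):
            for x in range(0, shape[2] - size + 1, stride):
                yield z, y, x

def detect(scan, predict, size=32, stride=16, threshold=0.5):
    """Run a trained classifier over every subvolume and return the
    start coordinates whose nodule probability exceeds the threshold.
    `predict` stands in for the trained 3D CNN."""
    hits = []
    for z, y, x in iter_subvolumes(scan.shape, size, stride):
        p = predict(scan[z:z + size, y:y + size, x:x + size])
        if p > threshold:
            hits.append((z, y, x, p))
    return hits
```

Overlapping strides trade computation for the guarantee that no nodule falls entirely between adjacent subvolumes.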
Breeding novel solutions in the brain: a model of Darwinian neurodynamics.
Szilágyi, András; Zachar, István; Fedor, Anna; de Vladar, Harold P; Szathmáry, Eörs
2016-01-01
Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture by a proof-of-principle model of evolutionary search in the brain, that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain, namely recurrent neural networks (acting as attractors), the action selection loop and implicit working memory, to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
Yan, Yumeng; Wen, Zeyu; Zhang, Di; Huang, Sheng-You
2018-05-18
RNA-RNA interactions play fundamental roles in gene and cell regulation. Therefore, accurate prediction of RNA-RNA interactions is critical for determining their complex structures and understanding the molecular mechanism of the interactions. Here, we have developed a physics-based double-iterative strategy to determine the effective potentials for RNA-RNA interactions based on a training set of 97 diverse RNA-RNA complexes. The double-iterative strategy circumvented the reference state problem in knowledge-based scoring functions by updating the potentials through iteration, and also overcame the decoy-dependent limitation in previous iterative methods by constructing the decoys iteratively. The derived scoring function, referred to as DITScoreRR, was evaluated on an RNA-RNA docking benchmark of 60 test cases and compared with three other scoring functions. It was shown that for bound docking, our scoring function DITScoreRR obtained excellent success rates of 90% and 98.3% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 63.3% and 71.7% for van der Waals interactions, 45.0% and 65.0% for ITScorePP, and 11.7% and 26.7% for ZDOCK 2.1, respectively. For unbound docking, DITScoreRR achieved good success rates of 53.3% and 71.7% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 13.3% and 28.3% for van der Waals interactions, 11.7% and 26.7% for ITScorePP, and 3.3% and 6.7% for ZDOCK 2.1, respectively. DITScoreRR also performed significantly better in ranking decoys and obtained significantly higher score-RMSD correlations than the other three scoring functions. DITScoreRR will be of great value for the prediction and design of RNA structures and RNA-RNA complexes.
ERIC Educational Resources Information Center
Türkkan, Ercan
2017-01-01
The aim of this study is to investigate the cognitive structures of physics teacher candidates about "electric field." Phenomenographic research method, one of the qualitative research patterns, was used in the study. The data of the study was collected from 91 physics teacher candidates who had taken General Physics II course at…
ERIC Educational Resources Information Center
Erdogan, Ahmet
2017-01-01
The purpose of this research is to determine mathematics teacher candidates' conceptual structures about the concept of "measurement" that is the one of the important learning fields of mathematics. Qualitative research method was used in this study. Participants of this study were 58 mathematics teacher candidates studying in one of the…
Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Zwart, Peter H.; Hung, Li-Wei; Read, Randy J.; Adams, Paul D.
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution. PMID:18094468
Nonlinear random response prediction using MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.
1993-01-01
An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstraction Programming (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses of complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.
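The fixed-point character of the equivalent linearization loop can be illustrated on a single-degree-of-freedom toy problem (a textbook sketch, not the MSC/NASTRAN finite-element implementation; the white-noise formula used is the standard SDOF result):

```python
import math

def equivalent_linearization(k, c, eps, S0, tol=1e-12, max_iter=200):
    """Fixed-point iteration of statistical equivalent linearization for a
    single-degree-of-freedom Duffing-type oscillator with restoring force
    k*(x + eps*x**3) under white noise of two-sided PSD S0. Uses the
    textbook linear result E[x^2] = pi*S0 / (c*k_eq) together with the
    equivalent stiffness k_eq = k*(1 + 3*eps*E[x^2])."""
    sigma2 = math.pi * S0 / (c * k)       # linear (eps = 0) starting guess
    k_eq = k
    for _ in range(max_iter):
        k_eq = k * (1.0 + 3.0 * eps * sigma2)   # linearize about current rms
        new = math.pi * S0 / (c * k_eq)         # response of the linear system
        if abs(new - sigma2) < tol:             # self-consistent: done
            break
        sigma2 = new
    return sigma2, k_eq
```

The loop alternates between updating the linearized stiffness and recomputing the mean-square response, exactly the iterate-to-self-consistency pattern the abstract describes.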
ITER structural design criteria and their extension to advanced reactor blankets
NASA Astrophysics Data System (ADS)
Majumdar, S.; Kalinin, G.
2000-12-01
Applications of the recent ITER structural design criteria (ISDC) are illustrated for two components. First, the low-temperature design rules are applied to copper alloys, which are particularly prone to irradiation embrittlement at relatively low fluences at certain temperatures. Allowable stresses are derived, and the impact of the embrittlement on the allowable surface heat flux of a simple first-wall/limiter design is demonstrated. Next, the high-temperature design rules of the ISDC are applied to evaporation of lithium and vapor extraction (EVOLVE), a blanket design concept currently being investigated under the US Advanced Power Extraction (APEX) program. A single tungsten first-wall tube is considered for thermal and stress analyses by the finite-element method.
Iterative refinement of structure-based sequence alignments by Seed Extension
Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook
2009-01-01
Background Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. 
It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithm in many structure alignment programs. PMID:19589133
Simulation Studies of the Dielectric Grating as an Accelerating and Focusing Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soong, Ken; Peralta, E.A.; Byer, R.L.
A grating-based design is a promising candidate for a laser-driven dielectric accelerator. Through simulations, we show the merits of a readily fabricated grating structure as an accelerating component. Additionally, we show that with a small design perturbation, the accelerating component can be converted into a focusing structure. The understanding of these two components is critical in the successful development of any complete accelerator. The concept of accelerating electrons with the tremendous electric fields found in lasers has been proposed for decades. However, until recently the realization of such an accelerator was not technologically feasible. Recent advances in the semiconductor industry, as well as advances in laser technology, have now made laser-driven dielectric accelerators imminent. The grating-based accelerator is one proposed design for a dielectric laser-driven accelerator. This design, which was introduced by Plettner, consists of a pair of opposing transparent binary gratings, illustrated in Fig. 1. The teeth of the gratings serve as a phase mask, ensuring a phase synchronicity between the electromagnetic field and the moving particles. The current grating accelerator design has the drive laser incident perpendicular to the substrate, which poses a laser-structure alignment complication. The next iteration of grating structure fabrication seeks to monolithically create an array of grating structures by etching the grating's vacuum channel into a fused silica wafer. With this method it is possible to have the drive laser confined to the plane of the wafer, thus ensuring alignment of the laser and structure, the two grating halves, and subsequent accelerator components. There has been previous work using 2-dimensional finite difference time domain (2D-FDTD) calculations to evaluate the performance of the grating accelerator structure. 
However, this work approximates the grating as an infinite structure and does not accurately model a realizable structure. In this paper, we will present a 3-dimensional frequency-domain simulation of both the infinite and the finite grating accelerator structure. Additionally, we will present a new scheme for a focusing structure based on a perturbation of the accelerating structure. We will present simulations of this proposed focusing structure and quantify the quality of the focusing fields.
Block Iterative Methods for Elliptic and Parabolic Difference Equations.
1981-09-01
S. V. Parter, M. Steuerwalt; Computer Sciences Department, University of Wisconsin-Madison ...suggests that iterative algorithms that solve for several points at once will converge more rapidly than point algorithms. The Gaussian elimination algorithm is seen in this light to converge in one step. Frankel [14], Young [34], Arms, Gates, and Zondek [1], and Varga [32], using the algebraic structure
2003-06-01
delivery. Data Access (1980s): "What were unit sales in New England last March?" Relational databases (RDBMS), Structured Query Language (SQL) ...macros written in Visual Basic for Applications (VBA). Iteration Two: Class Diagram. [Figure 20: iteration two class diagram, showing the Tech OASIS export script, import filter, data-processing method, MS Excel and VBA macro classes and their contains/executes relationships] The
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
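The preconditioned conjugate gradient family referred to above can be illustrated serially with a Jacobi (diagonal) preconditioner, whose application is trivially parallel and therefore well suited to array architectures (a generic sketch, not the paper's algorithms):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned conjugate gradients for a symmetric positive
    definite matrix A. The preconditioner application z = M^{-1} r is an
    elementwise product, so it parallelizes with no communication."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv_diag * r            # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # optimal step along the search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p # conjugate update of the direction
        rz = rz_new
    return x
```

In a distributed setting only the matrix-vector product and the two dot products require communication, which is what makes this family attractive on parallel machines.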
NASA Astrophysics Data System (ADS)
Barbosa, Gabriel D.; Thibes, Ronaldo
2018-06-01
We consider a second-degree algebraic curve describing a general conic constraint imposed on the motion of a massive spinless particle. The problem is trivial at the classical level but becomes involved and interesting in its quantum counterpart, with subtleties in its symplectic structure and symmetries. We start with a second-class version of the general conic-constrained particle, which encompasses previous versions of circular and elliptical paths discussed in the literature. We pursue the complete constraint analysis in phase space and perform the Faddeev-Jackiw symplectic quantization, following the Barcelos-Wotzasek (FJBW) iteration program, to unravel the essential aspects of the constraint structure. While in the standard Dirac-Bergmann approach there are four second-class constraints, in the FJBW approach they reduce to two. Using the symplectic potential obtained in the last step of the FJBW iteration, we construct a gauge-invariant version of the model from the originally second-class system, exhibiting its BRST symmetry explicitly. We obtain the quantum BRST charge and write the Green function generator for the gauge-invariant version. Our results reproduce and neatly generalize the known BRST symmetry of the rigid rotor, showing clearly that the latter constitutes a particular case of a broader class of theories.
Wójcik-Gargula, A; Tracz, G; Scholz, M
2017-12-13
This work presents the results of calculations performed to predict the neutron-induced activity in structural materials that are being considered for use in the TPR spectrometer, one of the detection systems of the High-Resolution Neutron Spectrometer for ITER. An attempt has been made to estimate the shutdown dose rates in Cuboid #1 and to check whether they satisfy ICRP regulatory requirements for occupational exposure to radiation and ITER nuclear safety regulations for areas with personnel access. The results were obtained by MCNP and FISPACT-II calculations.
Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve systems of equations and is, in general, more efficient than other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its use in parallel computing, because it requires updated neighboring values (i.e., from the current iteration) as soon as they are available. Here we report an efficient and exact (assumption-free) method to parallelize the iterations and to reduce the computational time as a linear or nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to solving linear and nonlinear equations. This approach is implemented in the DelPhi program, a finite difference Poisson-Boltzmann equation solver used to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
Towards plasma cleaning of ITER first mirrors
NASA Astrophysics Data System (ADS)
Moser, L.; Marot, L.; Eren, B.; Steiner, R.; Mathys, D.; Leipold, F.; Reichle, R.; Meyer, E.
2015-06-01
To avoid reflectivity losses in ITER's optical diagnostic systems, on-site cleaning of metallic first mirrors via plasma sputtering is foreseen to remove deposit build-ups migrating from the main wall. In this work, the influence of aluminium and tungsten deposits on the reflectivity of molybdenum mirrors, as well as the possibility of cleaning them by plasma exposure, is investigated. Porous ITER-like deposits are grown to mimic the edge conditions expected in ITER, and a severe degradation of the specular reflectivity is observed as these deposits build up on the mirror surface. In addition, dense oxide films are produced for comparison with the porous films. The composition, morphology and crystal structure of several films were characterized by means of scanning electron microscopy, x-ray photoelectron spectroscopy, x-ray diffraction and secondary ion mass spectrometry. Cleaning of the deposits and restoration of the mirrors' optical properties are possible either with a Kaufman source or with radio frequency power applied directly to the mirror (i.e. a radio-frequency plasma generated directly around the mirror surface). Accelerating the ions of an external plasma source through a direct current applied to the mirror does not remove deposits composed of oxides. A possible implementation of plasma cleaning in ITER is addressed.
Transfer Learning to Accelerate Interface Structure Searches
NASA Astrophysics Data System (ADS)
Oda, Hiromi; Kiyohara, Shin; Tsuda, Koji; Mizoguchi, Teruyasu
2017-12-01
Interfaces have atomic structures that are significantly different from those in the bulk and play crucial roles in material properties. The interface structures that give rise to these properties have been extensively investigated. However, determining even one interface structure requires searching for the stable configuration among many thousands of candidates. Here, a powerful combination of machine learning techniques based on kriging and transfer learning (TL) is proposed as a method for unveiling interface structures. Using the kriging+TL method, thirty-three grain boundaries were systematically determined from 1,650,660 candidates in only 462 calculations, an increase in efficiency over conventional all-candidate calculation methods by a factor of approximately 3,600.
Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E
2018-06-12
We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists of addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (and similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The numbers of descents in the object- and PSF-spaces play the role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which yields an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach gives much better results than an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. 
The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
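As a heavily simplified illustration of the alternating-approximation idea (1-D, non-negativity-constrained projected gradient in place of the paper's +SOR scheme; the problem sizes and iteration counts are arbitrary assumptions):

```python
import numpy as np

def descend(u, kernel, d, steps):
    """A few truncated projected-gradient (Landweber) steps on u with the
    kernel held fixed, clipping to the non-negative orthant after each
    step. The step size 1/||kernel||_1^2 bounds the gradient's Lipschitz
    constant, so each step cannot increase the residual."""
    lr = 1.0 / (np.abs(kernel).sum() ** 2 + 1e-12)
    for _ in range(steps):
        r = np.convolve(u, kernel) - d                       # data misfit
        u = np.maximum(u - lr * np.convolve(r, kernel[::-1], mode='valid'), 0.0)
    return u

def alternate(d, nx, ny, outer=300, steps=5):
    """Alternating non-negative approximation of object x and PSF y from
    blurred data d: a toy 1-D sketch of the alternating fixed-point idea,
    where the number of inner descents acts as a regularization knob."""
    x = np.ones(nx)
    y = np.ones(ny) / ny
    for _ in range(outer):
        x = descend(x, y, d, steps)
        y = descend(y, x, d, steps)   # convolution is symmetric in x and y
    return x, y
```

Holding the inner descent count small regularizes each half-step, mirroring the role the two truncation parameters play in the abstract.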
Engineering aspects of design and integration of ECE diagnostic in ITER
Udintsev, V. S.; Taylor, G.; Pandya, H. K.B.; ...
2015-03-12
The ITER ECE diagnostic [1] needs not only to meet measurement requirements, but also to withstand various loads, such as electromagnetic, mechanical, neutronic and thermal loads, and to be protected from stray ECH radiation at 170 GHz and from other millimetre-wave emission, such as collective Thomson scattering, which is planned to operate at 60 GHz. The same or similar loads will apply to other millimetre-wave diagnostics [2], located both in-vessel and in port plugs. These loads must be taken into account throughout the design phases of the ECE and other microwave diagnostics to ensure their structural integrity and maintainability. The integration of microwave diagnostics with other ITER systems is another challenging activity, currently ongoing through port integration and in-vessel integration work. Port integration also has to address the maintenance and safety aspects of diagnostics. Engineering solutions that are being developed to support and operate the ITER ECE diagnostic, whilst complying with safety and maintenance requirements, are discussed in this paper.
Toward Generalization of Iterative Small Molecule Synthesis
Lehmann, Jonathan W.; Blair, Daniel J.; Burke, Martin D.
2018-01-01
Small molecules have extensive untapped potential to benefit society, but access to this potential is too often restricted by limitations inherent to the customized approach currently used to synthesize this class of chemical matter. In contrast, the “building block approach”, i.e., generalized iterative assembly of interchangeable parts, has now proven to be a highly efficient and flexible way to construct things ranging all the way from skyscrapers to macromolecules to artificial intelligence algorithms. The structural redundancy found in many small molecules suggests that they possess a similar capacity for generalized building block-based construction. It is also encouraging that many customized iterative synthesis methods have been developed that improve access to specific classes of small molecules. There has also been substantial recent progress toward the iterative assembly of many different types of small molecules, including complex natural products, pharmaceuticals, biological probes, and materials, using common building blocks and coupling chemistry. Collectively, these advances suggest that a generalized building block approach for small molecule synthesis may be within reach. PMID:29696152
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
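The iteratively reweighted least-squares idea behind this inversion can be shown in a minimal dense 1-D sketch. The randomized generalized singular value decomposition and the alternating-direction machinery, which only matter at scale, are omitted; `lam`, `eps`, and the iteration count are assumed tuning constants, not values from the paper.

```python
import numpy as np

def tv_irls(A, b, lam=1.0, eps=1e-6, n_iter=30):
    """Approximately minimize ||A x - b||^2 + lam * ||D x||_1, where D is the
    forward-difference operator, by iteratively reweighted least squares:
    the |.|_1 term is replaced by a weighted quadratic whose weights are
    refreshed from the current iterate on every sweep."""
    m, n = A.shape
    D = np.diff(np.eye(n), axis=0)             # (n-1) x n difference matrix
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from unregularized fit
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)  # IRLS weights ~ 1/|Dx|
        x = np.linalg.solve(A.T @ A + lam * D.T @ (w[:, None] * D), A.T @ b)
    return x
```

Because the weights blow up wherever `Dx` is near zero, flat regions are flattened further while genuine jumps (where the weight stays moderate) survive, which is exactly the edge-preserving behaviour the abstract attributes to total variation.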
Varying-energy CT imaging method based on EM-TV
NASA Astrophysics Data System (ADS)
Chen, Ping; Han, Yan
2016-11-01
For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the limit of the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times at a fixed small interval. Next, the fusion of grey consistency and logarithm demodulation is applied to obtain a complete, lower-noise projection with a high dynamic range (HDR). In addition, to address the noise-suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the process of iteration, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide a higher reconstruction quality relative to the fusion reconstruction method.
NASA Astrophysics Data System (ADS)
Benner, Peter; Dolgov, Sergey; Khoromskaia, Venera; Khoromskij, Boris N.
2017-04-01
In this paper, we propose and study two approaches to approximate the solution of the Bethe-Salpeter equation (BSE) by using structured iterative eigenvalue solvers. Both approaches are based on the reduced basis method and low-rank factorizations of the generating matrices. We also propose to represent the static screen interaction part in the BSE matrix by a small active sub-block, with a size balancing the storage for rank-structured representations of other matrix blocks. We demonstrate by various numerical tests that the combination of the diagonal plus low-rank plus reduced-block approximation exhibits higher precision with low numerical cost, providing as well a distinct two-sided error estimate for the smallest eigenvalues of the Bethe-Salpeter operator. The complexity is reduced to O(Nb^2) in the size of the atomic orbital basis set, Nb, instead of the practically intractable O(Nb^6) scaling for the direct diagonalization. In the second approach, we apply the quantized-TT (QTT) tensor representation to both the long eigenvectors and the column vectors in the rank-structured BSE matrix blocks, and combine this with the ALS-type iteration in block QTT format. The QTT rank of the matrix entities possesses almost the same magnitude as the number of occupied orbitals in the molecular systems, No.
Extending substructure based iterative solvers to multiple load and repeated analyses
NASA Technical Reports Server (NTRS)
Farhat, Charbel
1993-01-01
Direct solvers currently dominate commercial finite element structural software, but do not scale well in the fine granularity regime targeted by emerging parallel processors. Substructure based iterative solvers--also often called domain decomposition algorithms--lend themselves better to parallel processing, but must overcome several obstacles before earning their place in general purpose structural analysis programs. One such obstacle is the solution of systems with many or repeated right hand sides. Such systems arise, for example, in multiple load static analyses and in implicit linear dynamics computations. Direct solvers are well-suited for these problems because after the system matrix has been factored, the multiple or repeated solutions can be obtained through relatively inexpensive forward and backward substitutions. On the other hand, iterative solvers in general are ill-suited for these problems because they often must restart from scratch for every different right hand side. In this paper, we present a methodology for extending the range of applications of domain decomposition methods to problems with multiple or repeated right hand sides. Basically, we formulate the overall problem as a series of minimization problems over K-orthogonal and supplementary subspaces, and tailor the preconditioned conjugate gradient algorithm to solve them efficiently. The resulting solution method is scalable, whereas direct factorization schemes and forward and backward substitution algorithms are not. We illustrate the proposed methodology with the solution of static and dynamic structural problems, and highlight its potential to outperform forward and backward substitutions on parallel computers.
As an example, we show that for a linear structural dynamics problem with 11640 degrees of freedom, every time-step beyond time-step 15 is solved in a single iteration and consumes 1.0 second on a 32 processor iPSC-860 system; for the same problem and the same parallel processor, a pair of forward/backward substitutions at each step consumes 15.0 seconds.
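The core trick of reusing K-orthogonal (conjugate) information across right-hand sides can be illustrated with a small dense sketch. This is the generic "seed and project" variant of the idea, written in our own notation, not Farhat's exact domain-decomposition formulation: the search directions recorded while solving the first system give a cheap Galerkin starting guess for every later one.

```python
import numpy as np

def cg_record(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradients on SPD A that also records the mutually
    A-orthogonal search directions p_k and the products A p_k for reuse."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    P, AP = [], []
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        P.append(p.copy())
        AP.append(Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, P, AP

def solve_with_reuse(A, b, P, AP, tol=1e-10):
    """Galerkin-project the new right-hand side onto the stored subspace
    (dot products only, thanks to A-orthogonality of the p_k), then polish
    the remaining residual with ordinary CG."""
    x0 = sum(((p @ b) / (p @ Ap)) * p for p, Ap in zip(P, AP))
    dx, _, _ = cg_record(A, b - A @ x0, tol)
    return x0 + dx
```

When consecutive right-hand sides are similar, as in the time-stepping example quoted above, the projection alone nearly solves the system and the polishing CG needs very few iterations.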
Yasaka, Koichiro; Kamiya, Kouhei; Irie, Ryusuke; Maeda, Eriko; Sato, Jiro; Ohtomo, Kuni
To compare the differences in metal artefact degree and the depiction of structures in helical neck CT, in patients with metallic dental fillings, among adaptive iterative dose reduction three dimensional (AIDR 3D), forward-projected model-based iterative reconstruction solution (FIRST) and AIDR 3D with single-energy metal artefact reduction (SEMAR-A). In this retrospective clinical study, 22 patients (males, 13; females, 9; mean age, 64.6 ± 12.6 years) with metallic dental fillings who underwent contrast-enhanced helical CT involving the oropharyngeal region were included. Neck axial images were reconstructed with AIDR 3D, FIRST and SEMAR-A. Metal artefact degree and depiction of structures (the apex and root of the tongue, parapharyngeal space, superior portion of the internal jugular chain and parotid gland) were evaluated on a four-point scale by two radiologists. Placing regions of interest, standard deviations of the oral cavity and nuchal muscle (at the slice where no metal exists) were measured and metal artefact indices were calculated (the square root of the difference of their squares). In SEMAR-A, metal artefact was significantly reduced and depictions of all structures were significantly improved compared with those in FIRST and AIDR 3D (p ≤ 0.001, sign test). Metal artefact index for the oral cavity in AIDR 3D/FIRST/SEMAR-A was 572.0/477.7/88.4, and significant differences were seen between each reconstruction algorithm (p < 0.0001, Wilcoxon signed-rank test). SEMAR-A could provide images with less metal artefact and better depiction of structures than AIDR 3D and FIRST.
A new iterative triclass thresholding technique in image segmentation.
Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin
2014-03-01
We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on the Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
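The triclass iteration described above is simple to reproduce. Below is an unofficial sketch (a histogram-based Otsu threshold plus the triclass loop); the convergence tolerance `tol`, the bin count, and the handling of the final TBD region are our assumptions, not details from the paper.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Histogram-based Otsu threshold: maximize the between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability
    mu = np.cumsum(p * centers)           # class-0 cumulative mean mass
    mu_t = mu[-1]
    sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega) + 1e-12)
    return centers[np.argmax(sigma_b)]

def triclass_segment(image, tol=1e-3, max_iter=50):
    """Iterative triclass thresholding: Otsu's threshold t and the two class
    means mu0 < mu1 split the current TBD region into background (< mu0),
    foreground (> mu1) and a smaller TBD region [mu0, mu1]; repeat until the
    threshold change between iterations falls below tol."""
    fg = np.zeros(image.shape, dtype=bool)
    bg = np.zeros(image.shape, dtype=bool)
    tbd = np.ones(image.shape, dtype=bool)
    t_prev = None
    for _ in range(max_iter):
        vals = image[tbd]
        if vals.size < 2 or vals.min() == vals.max():
            break
        t = otsu_threshold(vals)
        if t_prev is not None and abs(t - t_prev) < tol:
            break
        t_prev = t
        mu0 = vals[vals <= t].mean()
        mu1 = vals[vals > t].mean()
        bg |= tbd & (image < mu0)
        fg |= tbd & (image > mu1)
        tbd &= (image >= mu0) & (image <= mu1)
    if t_prev is not None:                 # resolve leftover TBD pixels
        fg |= tbd & (image > t_prev)
    return fg
```

On a synthetic image with one strong and one weak object, the second pass picks up the weak object that a single global Otsu cut tends to merge into the background.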
Applications of artificial neural nets in structural mechanics
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Hajela, Prabhat
1990-01-01
A brief introduction to the fundamentals of neural nets is given, followed by two applications in structural optimization. In the first case, the feasibility of simulating with neural nets the many structural analyses performed during optimization iterations was studied. In the second case, the concept of using neural nets to capture design expertise was studied.
Applications of artificial neural nets in structural mechanics
NASA Technical Reports Server (NTRS)
Berke, L.; Hajela, P.
1992-01-01
A brief introduction to the fundamentals of neural nets is given, followed by two applications in structural optimization. In the first case, the feasibility of simulating with neural nets the many structural analyses performed during optimization iterations was studied. In the second case, the concept of using neural nets to capture design expertise was studied.
NASA Astrophysics Data System (ADS)
Zhu, H.; Bozdag, E.; Peter, D. B.; Tromp, J.
2010-12-01
We use spectral-element and adjoint methods to image crustal and upper mantle heterogeneity in Europe. The study area involves the convergent boundaries of the Eurasian, African and Arabian plates and the divergent boundary between the Eurasian and North American plates, making the tectonic structure of this region complex. Our goal is to iteratively fit observed seismograms and improve crustal and upper mantle images by taking advantage of 3D forward and inverse modeling techniques. We use data from 200 earthquakes with magnitudes between 5 and 6 recorded by 262 stations provided by ORFEUS. Crustal model Crust2.0 combined with mantle model S362ANI comprises the initial 3D model. Before the iterative adjoint inversion, we determine earthquake source parameters in the initial 3D model by using 3D Green functions and their Fréchet derivatives with respect to the source parameters (i.e., centroid moment tensor and location). The updated catalog is used in the subsequent structural inversion. Since we concentrate on upper mantle structures which involve anisotropy, transversely isotropic (frequency-dependent) traveltime sensitivity kernels are used in the iterative inversion. Taking advantage of the adjoint method, we use as many measurements as we can obtain based on comparisons between observed and synthetic seismograms. FLEXWIN (Maggi et al., 2009) is used to automatically select measurement windows, which are analyzed based on a multitaper technique. The bandpass ranges from 15 to 150 seconds. Long-period surface waves and short-period body waves are combined in source relocations and structural inversions. A statistical assessment of traveltime anomalies and logarithmic waveform differences is used to characterize the inverted sources and structure.
Triple/quadruple patterning layout decomposition via novel linear programming and iterative rounding
NASA Astrophysics Data System (ADS)
Lin, Yibo; Xu, Xiaoqing; Yu, Bei; Baldick, Ross; Pan, David Z.
2016-03-01
As feature size of the semiconductor technology scales down to 10nm and beyond, multiple patterning lithography (MPL) has become one of the most practical candidates for lithography, along with other emerging technologies such as extreme ultraviolet lithography (EUVL), e-beam lithography (EBL) and directed self assembly (DSA). Due to the delay of EUVL and EBL, triple and even quadruple patterning are considered to be used for lower metal and contact layers with tight pitches. In the process of MPL, layout decomposition is the key design stage, where a layout is split into various parts and each part is manufactured through a separate mask. For metal layers, stitching may be allowed to resolve conflicts, while it is forbidden for contact and via layers. In this paper, we focus on the application of layout decomposition where stitching is not allowed such as for contact and via layers. We propose a linear programming and iterative rounding (LPIR) solving technique to reduce the number of non-integers in the LP relaxation problem. Experimental results show that the proposed algorithms can provide high quality decomposition solutions efficiently while introducing as few conflicts as possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, Jinping P.; Garofalo, Andrea M.; Gong, Xianzu Z.
Recent EAST/DIII-D joint experiments on the high poloidal beta (β_P) regime in DIII-D have extended operation with internal transport barriers (ITBs) and excellent energy confinement (H98y2 ~ 1.6) to higher plasma current, for lower q95 ≤ 7.0, and more balanced neutral beam injection (NBI) (torque injection < 2 Nm), for lower plasma rotation than previous results. Transport analysis and experimental measurements at low toroidal rotation suggest that the E × B shear effect is not key to the ITB formation in these high-β_P discharges. Experiments and TGLF modeling show that the Shafranov shift has a key stabilizing effect on turbulence. Extrapolation of the DIII-D results using a 0D model shows that with the improved confinement, the high bootstrap fraction regime could achieve fusion gain Q = 5 in ITER at β_N ~ 2.9 and q95 ~ 7. With the optimization of q(0), the required improved confinement is achievable when using 1.5D TGLF-SAT1 for transport simulations. Furthermore, results reported in this paper suggest that the DIII-D high-β_P scenario could be a candidate for ITER steady-state operation.
Pre-irradiation testing of actively cooled Be-Cu divertor modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linke, J.; Duwe, R.; Kuehnlein, W.
1995-09-01
A set of neutron irradiation tests is prepared on different plasma-facing material (PFM) candidates and miniaturized components for ITER. Besides beryllium, the irradiation program, which will be performed in the High Flux Reactor (HFR) in Petten, includes different carbon fiber composites (CFCs) and tungsten alloys. The target values for the neutron irradiation will be 0.5 dpa at temperatures of 350°C and 700°C, respectively. The post-irradiation examination (PIE) will cover a wide range of mechanical tests; in addition, the degradation of thermal conductivity will be investigated. To determine the high heat flux (HHF) performance of actively cooled divertor modules, electron beam tests which simulate the expected heat loads during the operation of ITER are scheduled in the hot cell electron beam facility JUDITH. These tests on a selection of different actively cooled beryllium-copper and CFC-copper divertor modules are performed before and after neutron irradiation; the pre-irradiation testing is an essential part of the program to quantify the zero-fluence high heat flux performance and to detect defects in the modules, in particular in the brazed joints.
A 3D Laser Profiling System for Rail Surface Defect Detection
Li, Qingquan; Mao, Qingzhou; Zou, Qin
2017-01-01
Rail surface defects such as the abrasion, scratch and peeling often cause damages to the train wheels and rail bearings. An efficient and accurate detection of rail defects is of vital importance for the safety of railway transportation. In the past few decades, automatic rail defect detection has been studied; however, most developed methods use optic-imaging techniques to collect the rail surface data and are still suffering from a high false recognition rate. In this paper, a novel 3D laser profiling system (3D-LPS) is proposed, which integrates a laser scanner, odometer, inertial measurement unit (IMU) and global position system (GPS) to capture the rail surface profile data. For automatic defect detection, first, the deviation between the measured profile and a standard rail model profile is computed for each laser-imaging profile, and the points with large deviations are marked as candidate defect points. Specifically, an adaptive iterative closest point (AICP) algorithm is proposed to register the point sets of the measured profile with the standard rail model profile, and the registration precision is improved to the sub-millimeter level. Second, all of the measured profiles are combined together to form the rail surface through a high-precision positioning process with the IMU, odometer and GPS data. Third, the candidate defect points are merged into candidate defect regions using the K-means clustering. At last, the candidate defect regions are classified by a decision tree classifier. Experimental results demonstrate the effectiveness of the proposed laser-profiling system in rail surface defect detection and classification. PMID:28777323
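The profile-registration step can be illustrated with a plain rigid 2-D ICP using the closed-form Kabsch rotation update. This is a generic sketch on toy data: the paper's *adaptive* refinements and the sub-millimeter tuning are omitted, and every name and parameter here is our own assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_profile(measured, model, n_iter=20):
    """Rigid 2-D ICP: pair each measured point with its nearest model point,
    solve the best rotation + translation in closed form (SVD / Kabsch),
    apply it, and repeat. The per-point deviations that remain after
    convergence are the defect-candidate signal."""
    src = measured.copy()
    tree = cKDTree(model)
    for _ in range(n_iter):
        _, idx = tree.query(src)               # nearest-neighbor pairing
        tgt = model[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        H = (src - mu_s).T @ (tgt - mu_t)      # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_t
    dev, _ = tree.query(src)                   # residual deviation per point
    return src, dev
```

In the system described above, points whose residual deviation from the standard rail profile remains large after registration would be the ones marked as candidate defect points.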
Correction of spin diffusion during iterative automated NOE assignment
NASA Astrophysics Data System (ADS)
Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael
2004-04-01
Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to write a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin diffusion corrected distances. The method relies on numerical integration, by matrix squaring and sparse matrix techniques, of the coupled differential equations which govern relaxation. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin diffusion corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds which improves the distinction between signal and noise in an automated NOE assignment scheme.
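The matrix-squaring integration and the notion of a correction factor can be sketched as follows. The relaxation-matrix values in the test are invented for illustration, and the factor definition (full-matrix cross-peak intensity over the isolated-pair intensity) is our paraphrase of the calibration idea, not ARIA's exact formula.

```python
import numpy as np

def relaxation_propagator(R, tau, k=20):
    """Integrate dM/dt = -R M over time tau by matrix squaring: one tiny
    Euler step (I - R*tau/2**k), squared k times, yields 2**k steps at the
    cost of only k matrix products. Approximates expm(-R * tau)."""
    step = np.eye(R.shape[0]) - R * (tau / 2.0 ** k)
    for _ in range(k):
        step = step @ step
    return step

def spin_diffusion_factor(R, tau, i, j):
    """Ratio of the full-matrix cross-peak intensity a_ij to the intensity
    the isolated spin pair (i, j) alone would produce; a factor > 1 signals
    intensity gained through indirect (spin-diffusion) pathways."""
    full = relaxation_propagator(R, tau)[i, j]
    pair = relaxation_propagator(R[np.ix_([i, j], [i, j])], tau)[0, 1]
    return full / pair
```

For a three-spin chain with strong 0-1 and 1-2 cross relaxation but only weak direct 0-2 coupling, the 0-2 cross peak is inflated by relay through spin 1, so the factor exceeds one and the 0-2 distance restraint would be tightened accordingly.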
Exploiting parallel computing with limited program changes using a network of microcomputers
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.
1985-01-01
Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.
Sartori, E; Pavei, M; Marcuzzi, D; Zaccaria, P
2014-02-01
The beam formation and acceleration of the ITER neutral beam injector will be studied in the full-scale ion source, Source for Production of Ions of Deuterium Extracted from a RF plasma (SPIDER). It will be able to sustain a 40 A deuterium ion beam during 1 h pulses. The operating conditions of its multi-aperture electrodes will diverge from ideality, as a consequence of inhomogeneous heating and thermally induced deformations in the support structure of the extraction and acceleration grids, which operate at different temperatures. Meeting the requirements on the aperture alignment and distance between the grids with such a large number of apertures (1280) and the huge support structures constitutes a challenge. Examination of the structure thermal deformation in transient and steady conditions has been carried out, evaluating their effect on the beam performance: the paper describes the analyses and the solutions proposed to mitigate detrimental effects.
Present limits and improvements of structural materials for fusion reactors - a review
NASA Astrophysics Data System (ADS)
Tavassoli, A.-A. F.
2002-04-01
Since the transition from ITER or DEMO to a commercial power reactor would involve a significant change in system and materials options, a parallel R&D path has been put in place in Europe to address these issues. This paper assesses the structural materials part of this program along with the latest R&D results from the main programs. It is shown that stainless steels and ferritic/martensitic steels, retained for ITER and DEMO, will also remain the principal contenders for the future FPR, despite uncertainties over irradiation-induced embrittlement at low temperatures and the consequences of a high He/dpa ratio. To date, none of the present advanced high-temperature materials has the structural integrity and reliability needed for application in critical components. This situation is unlikely to change with materials R&D alone and has to be mitigated in close collaboration with blanket system design.
NASA Technical Reports Server (NTRS)
Jandhyala, Vikram (Inventor); Chowdhury, Indranil (Inventor)
2011-01-01
An approach that efficiently solves for a desired parameter of a system or device that can include both electrically large fast multipole method (FMM) elements, and electrically small QR elements. The system or device is setup as an oct-tree structure that can include regions of both the FMM type and the QR type. An iterative solver is then used to determine a first matrix vector product for any electrically large elements, and a second matrix vector product for any electrically small elements that are included in the structure. These matrix vector products for the electrically large elements and the electrically small elements are combined, and a net delta for a combination of the matrix vector products is determined. The iteration continues until a net delta is obtained that is within predefined limits. The matrix vector products that were last obtained are used to solve for the desired parameter.
Conceptual design and structural analysis for an 8.4-m telescope
NASA Astrophysics Data System (ADS)
Mendoza, Manuel; Farah, Alejandro; Ruiz Schneider, Elfego
2004-09-01
This paper describes the conceptual design of the optics support structures of a telescope with a primary mirror of 8.4 m, the same size as the Large Binocular Telescope (LBT) primary mirror. The design goal is to achieve a structure that supports the primary and secondary mirrors and keeps them joined as rigidly as possible. For this purpose, an optimization over several models was done. This iterative design process includes specifications development, concept generation and evaluation. The process included Finite Element Analysis (FEA) as well as other analytical calculations. A Quality Function Deployment (QFD) matrix was used to obtain telescope tube and spider specifications. Eight spider and eleven tube geometric concepts were proposed. They were compared in decision matrices using performance indicators and parameters. Tubes and spiders went through an iterative optimization process. The best tube and spider concepts were assembled together. All assemblies were compared and ranked according to their performance.
Field tests of a participatory ergonomics toolkit for Total Worker Health
Kernan, Laura; Plaku-Alakbarova, Bora; Robertson, Michelle; Warren, Nicholas; Henning, Robert
2018-01-01
Growing interest in Total Worker Health® (TWH) programs to advance worker safety, health and well-being motivated development of a toolkit to guide their implementation. Iterative design of a program toolkit occurred in which participatory ergonomics (PE) served as the primary basis to plan integrated TWH interventions in four diverse organizations. The toolkit provided start-up guides for committee formation and training, and a structured PE process for generating integrated TWH interventions. Process data from program facilitators and participants throughout program implementation were used for iterative toolkit design. Program success depended on organizational commitment to regular design team meetings with a trained facilitator, the availability of subject matter experts on ergonomics and health to support the design process, and retraining whenever committee turnover occurred. A two committee structure (employee Design Team, management Steering Committee) provided advantages over a single, multilevel committee structure, and enhanced the planning, communication, and team-work skills of participants. PMID:28166897
NASA Astrophysics Data System (ADS)
Peng, Heng; Liu, Yinghua; Chen, Haofeng
2018-05-01
In this paper, a novel direct method called the stress compensation method (SCM) is proposed for limit and shakedown analysis of large-scale elastoplastic structures. Without needing to solve a specific mathematical programming problem, the SCM is a two-level iterative procedure based on a sequence of linear elastic finite element solutions where the global stiffness matrix is decomposed only once. In the inner loop, the statically admissible residual stress field for shakedown analysis is constructed. In the outer loop, a series of decreasing load multipliers is updated to approach the shakedown limit multiplier by using an efficient and robust iteration control technique, where the static shakedown theorem is adopted. Three numerical examples up to about 140,000 finite element nodes confirm the applicability and efficiency of this method for two-dimensional and three-dimensional elastoplastic structures, with detailed discussions on the convergence and the accuracy of the proposed algorithm.
Iterative non-sequential protein structural alignment.
Salem, Saeed; Zaki, Mohammed J; Bystroff, Christopher
2009-06-01
Structural similarity between proteins gives us insights into their evolutionary relationships when there is low sequence similarity. In this paper, we present a novel approach called SNAP for non-sequential pair-wise structural alignment. Starting from an initial alignment, our approach iterates over a two-step process consisting of a superposition step and an alignment step, until convergence. We propose a novel greedy algorithm to construct both sequential and non-sequential alignments. The quality of SNAP alignments was assessed by comparing against the manually curated reference alignments in the challenging SISY and RIPC datasets. Moreover, when applied to a dataset of 4410 protein pairs selected from the CATH database, SNAP produced longer alignments with lower rmsd than several state-of-the-art alignment methods. Classification of folds using SNAP alignments was both highly sensitive and highly selective. The SNAP software along with the datasets are available online at http://www.cs.rpi.edu/~zaki/software/SNAP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samala, Ravi K., E-mail: rsamala@umich.edu; Chan, Heang-Ping; Lu, Yao
Purpose: Develop a computer-aided detection (CADe) system for clustered microcalcifications in digital breast tomosynthesis (DBT) volumes enhanced with multiscale bilateral filtering (MSBF) regularization. Methods: With Institutional Review Board approval and written informed consent, two-view DBT of 154 breasts, of which 116 had biopsy-proven microcalcification (MC) clusters and 38 were free of MCs, was imaged with a General Electric GEN2 prototype DBT system. The DBT volumes were reconstructed with MSBF-regularized simultaneous algebraic reconstruction technique (SART) that was designed to enhance MCs and reduce background noise while preserving the quality of other tissue structures. The contrast-to-noise ratio (CNR) of MCs was further improved with enhancement-modulated calcification response (EMCR) preprocessing, which combined multiscale Hessian response to enhance MCs by shape and bandpass filtering to remove the low-frequency structured background. MC candidates were then located in the EMCR volume using iterative thresholding and segmented by adaptive region growing. Two sets of potential MC objects, cluster centroid objects and MC seed objects, were generated and the CNR of each object was calculated. The number of candidates in each set was controlled based on the breast volume. Dynamic clustering around the centroid objects grouped the MC candidates to form clusters. Adaptive criteria were designed to reduce false positive (FP) clusters based on the size, CNR values and the number of MCs in the cluster, cluster shape, and cluster-based maximum intensity projection. Free-response receiver operating characteristic (FROC) and jackknife alternative FROC (JAFROC) analyses were used to assess the performance and compare with that of a previous study. Results: An unpaired two-tailed t-test showed a significant increase (p < 0.0001) in the ratio of CNRs for MCs with and without MSBF regularization compared to similar ratios for FPs.
For view-based detection, a sensitivity of 85% was achieved at an FP rate of 2.16 per DBT volume. For case-based detection, a sensitivity of 85% was achieved at an FP rate of 0.85 per DBT volume. JAFROC analysis showed a significant improvement in the performance of the current CADe system compared to that of our previous system (p = 0.003). Conclusions: MSBF-regularized SART reconstruction enhances MCs. The enhancement in the signals, in combination with properly designed adaptive threshold criteria, effective MC feature analysis, and false positive reduction techniques, leads to a significant improvement in the detection of clustered MCs in DBT.
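The candidate-localization step described above, iteratively lowering a threshold until the number of candidate objects reaches a volume-dependent budget, can be sketched as follows. This is an illustrative reading of the abstract, not the authors' code; the function name and the step count are assumptions.

```python
import numpy as np

def iterative_threshold_candidates(volume, max_candidates, steps=50):
    """Lower a global threshold stepwise until at least max_candidates
    voxels exceed it (a simplified stand-in for the paper's iterative
    thresholding on the EMCR-preprocessed volume)."""
    lo, hi = float(volume.min()), float(volume.max())
    for t in np.linspace(hi, lo, steps):
        n = int((volume > t).sum())
        if n >= max_candidates:
            return t, n
    # threshold reached the minimum without meeting the budget
    return lo, int((volume > lo).sum())
```

In the real system the retained voxels would then be segmented by adaptive region growing and grouped into clusters.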
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatayama, Ariyoshi; Ogasawara, Masatada; Yamauchi, Michinori
1994-08-01
Plasma size and other basic performance parameters for 1000-MW(electric) power production are calculated with the blanket energy multiplication factor, the M value, as a parameter. The calculational model is based on the International Thermonuclear Experimental Reactor (ITER) physics design guidelines and includes overall plant power flow. Plasma size decreases as the M value increases. However, the improvement in the plasma compactness and other basic performance parameters, such as the total plant power efficiency, becomes saturated above the M = 5 to 7 range. Thus, a value in the M = 5 to 7 range is a reasonable choice for 1000-MW(electric) hybrids. Typical plasma parameters for 1000-MW(electric) hybrids with a value of M = 7 are a major radius of R = 5.2 m, minor radius of a = 1.7 m, plasma current of I{sub p} = 15 MA, and toroidal field on the axis of B{sub o} = 5 T. The concept of a thermal fission blanket that uses light water as a coolant is selected as an attractive candidate for electricity-producing hybrids. An optimization study is carried out for this blanket concept. The result shows that a compact, simple structure with a uniform fuel composition for the fissile region is sufficient to obtain optimal conditions for suppressing the thermal power increase caused by fuel burnup. The maximum increase in the thermal power is +3.2%. The M value estimated from the neutronics calculations is {approximately}7.0, which is confirmed to be compatible with the plasma requirement. These studies show that it is possible to use a tokamak fusion core with design requirements similar to those of ITER for a 1000-MW(electric) power reactor that uses existing thermal reactor technology for the blanket. 30 refs., 22 figs., 4 tabs.
Berendonk, Christoph; Schirlo, Christian; Balestra, Gianmarco; Bonvin, Raphael; Feller, Sabine; Huber, Philippe; Jünger, Ernst; Monti, Matteo; Schnabel, Kai; Beyeler, Christine; Guttormsen, Sissel; Huwendiek, Sören
2015-01-01
Objective: Since 2011, the new national final examination in human medicine has been implemented in Switzerland, with a structured clinical-practical part in the OSCE format. From the perspective of the national Working Group, the current article describes the essential steps in the development, implementation and evaluation of the Federal Licensing Examination Clinical Skills (FLE CS) as well as the applied quality assurance measures. Finally, central insights gained from the last years are presented. Methods: Based on the principles of action research, the FLE CS is in a constant state of further development. On the foundation of systematically documented experiences from previous years, in the Working Group, unresolved questions are discussed and resulting solution approaches are substantiated (planning), implemented in the examination (implementation) and subsequently evaluated (reflection). The presented results are the product of this iterative procedure. Results: The FLE CS is created by experts from all faculties and subject areas in a multistage process. The examination is administered in German and French on a decentralised basis and consists of twelve interdisciplinary stations per candidate. As important quality assurance measures, the national Review Board (content validation) and the meetings of the standardised patient trainers (standardisation) have proven worthwhile. The statistical analyses show good measurement reliability and support the construct validity of the examination. Among the central insights of the past years, it has been established that the consistent implementation of the principles of action research contributes to the successful further development of the examination. Conclusion: The centrally coordinated, collaborative-iterative process, incorporating experts from all faculties, makes a fundamental contribution to the quality of the FLE CS. 
The processes and insights presented here can be useful for others planning a similar undertaking. PMID:26483853
Self-prior strategy for organ reconstruction in fluorescence molecular tomography
Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen
2017-01-01
The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and thereby to avoid the high cost and ionizing radiation of the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture for solving the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm extracts the self-prior information from the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information; and lastly an iterative Laplacian regularization algorithm solves the updated inverse problem to obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (the traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy. PMID:29082094
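Both the ITKR baseline and the Laplacian-regularized update solve a penalized least-squares problem of the form argmin_x ||Ax - b||^2 + lam*||Lx||^2, differing in the choice of L. A minimal dense-matrix sketch follows; it is illustrative only, since the paper's solvers are iterative and operate on the FMT forward model.

```python
import numpy as np

def regularized_solve(A, b, lam, L=None):
    """Solve argmin_x ||A x - b||^2 + lam * ||L x||^2 via the normal
    equations; L = I gives Tikhonov regularization, while L = a graph
    Laplacian gives the Laplacian-regularized variant."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
```

In the self-prior strategy, the STFT-derived prior would reweight this cost function between iterations.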
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donatelli, Jeffrey J.; Sethian, James A.; Zwart, Peter H.
2017-06-26
Free-electron lasers now have the ability to collect X-ray diffraction patterns from individual molecules; however, each sample is delivered at unknown orientation and may be in one of several conformational states, each with a different molecular structure. Hit rates are often low, typically around 0.1%, limiting the number of useful images that can be collected. Determining accurate structural information requires classifying and orienting each image, accurately assembling them into a 3D diffraction intensity function, and determining missing phase information. Additionally, single particles typically scatter very few photons, leading to high image noise levels. We develop a multitiered iterative phasing algorithm to reconstruct structural information from single-particle diffraction data by simultaneously determining the states, orientations, intensities, phases, and underlying structure in a single iterative procedure. We leverage real-space constraints on the structure to help guide optimization and reconstruct underlying structure from very few images with excellent global convergence properties. We show that this approach can determine structural resolution beyond what is suggested by standard Shannon sampling arguments for ideal images and is also robust to noise.
NASA Astrophysics Data System (ADS)
Betté, Srinivas; Diaz, Julio C.; Jines, William R.; Steihaug, Trond
1986-11-01
A preconditioned residual-norm-reducing iterative solver is described. Based on a truncated form of the generalized-conjugate-gradient method for nonsymmetric systems of linear equations, the iterative scheme is very effective for linear systems generated in reservoir simulation of thermal oil recovery processes. As a consequence of employing an adaptive implicit finite-difference scheme to solve the model equations, the number of variables per cell-block varies dynamically over the grid. The data structure allows for 5- and 9-point operators in the areal model, 5-point in the cross-sectional model, and 7- and 11-point operators in the three-dimensional model. Block-diagonal-scaling of the linear system, done prior to iteration, is found to have a significant effect on the rate of convergence. Block-incomplete-LU-decomposition (BILU) and block-symmetric-Gauss-Seidel (BSGS) methods, which result in no fill-in, are used as preconditioning procedures. A full factorization is done on the well terms, and the cells are ordered in a manner which minimizes the fill-in in the well-column due to this factorization. The convergence criterion for the linear (inner) iteration is linked to that of the nonlinear (Newton) iteration, thereby enhancing the efficiency of the computation. The algorithm, with both BILU and BSGS preconditioners, is evaluated in the context of a variety of thermal simulation problems. The solver is robust and can be used with little or no user intervention.
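The solver described above is a preconditioned Krylov method. As a small illustration of the idea only (not the paper's truncated generalized-conjugate-gradient scheme for nonsymmetric systems, and with a plain diagonal preconditioner standing in for the BILU/BSGS procedures), here is preconditioned conjugate gradients for a symmetric positive-definite system:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients: at each step, precondition
    the residual with M_inv, then take an A-conjugate search step."""
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv @ r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # optimal step length
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # new A-conjugate direction
        rz = rz_new
    return x
```

As in the paper, a good preconditioner (there, block-incomplete-LU or block-symmetric-Gauss-Seidel after block-diagonal scaling) is what makes the iteration converge quickly.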
MIDAS: a practical Bayesian design for platform trials with molecularly targeted agents.
Yuan, Ying; Guo, Beibei; Munsell, Mark; Lu, Karen; Jazaeri, Amir
2016-09-30
Recent success of immunotherapy and other targeted therapies in cancer treatment has led to an unprecedented surge in the number of novel therapeutic agents that need to be evaluated in clinical trials. Traditional phase II clinical trial designs were developed for evaluating one candidate treatment at a time and thus are not efficient for this task. We propose a Bayesian phase II platform design, the multi-candidate iterative design with adaptive selection (MIDAS), which allows investigators to continuously screen a large number of candidate agents in an efficient and seamless fashion. MIDAS consists of one control arm, which contains a standard therapy as the control, and several experimental arms, which contain the experimental agents. Patients are adaptively randomized to the control and experimental agents based on their estimated efficacy. During the trial, we adaptively drop inefficacious or overly toxic agents and 'graduate' the promising agents from the trial to the next stage of development. Whenever an experimental agent graduates or is dropped, the corresponding arm opens immediately for testing the next available new agent. Simulation studies show that MIDAS substantially outperforms the conventional approach. The proposed design yields a significantly higher probability for identifying the promising agents and dropping the futile agents. In addition, MIDAS requires only one master protocol, which streamlines trial conduct and substantially decreases the overhead burden. Copyright © 2016 John Wiley & Sons, Ltd.
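Response-adaptive randomization of the kind MIDAS uses is commonly implemented with Beta-Binomial posteriors: each arm's randomization probability is taken proportional to its posterior mean response rate. The sketch below is a generic illustration of that idea, not the MIDAS algorithm itself, which additionally handles toxicity monitoring, dropping, and graduation rules.

```python
def randomization_probs(successes, trials, prior=(1.0, 1.0)):
    """For each arm with s responses in n patients and a Beta(a0, b0)
    prior, the posterior mean rate is (a0 + s) / (a0 + b0 + n);
    randomization probabilities are these means, normalized."""
    a0, b0 = prior
    means = [(a0 + s) / (a0 + b0 + n) for s, n in zip(successes, trials)]
    total = sum(means)
    return [m / total for m in means]
```

With a uniform prior, an arm with 3/10 responses is favored over one with 1/10, steering more patients toward the apparently better agent as data accrue.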
Corwin, Lisa A; Runyon, Christopher R; Ghanem, Eman; Sandy, Moriah; Clark, Greg; Palmer, Gregory C; Reichler, Stuart; Rodenbusch, Stacia E; Dolan, Erin L
2018-06-01
Course-based undergraduate research experiences (CUREs) provide a promising avenue to attract a larger and more diverse group of students into research careers. CUREs are thought to be distinctive in offering students opportunities to make discoveries, collaborate, engage in iterative work, and develop a sense of ownership of their lab course work. Yet how these elements affect students' intentions to pursue research-related careers remains unexplored. To address this knowledge gap, we collected data on three design features thought to be distinctive of CUREs (discovery, iteration, collaboration) and on students' levels of ownership and career intentions from ∼800 undergraduates who had completed CURE or inquiry courses, including courses from the Freshman Research Initiative (FRI), which has a demonstrated positive effect on student retention in college and in science, technology, engineering, and mathematics. We used structural equation modeling to test relationships among the design features and student ownership and career intentions. We found that discovery, iteration, and collaboration had small but significant effects on students' intentions; these effects were fully mediated by student ownership. Students in FRI courses reported significantly higher levels of discovery, iteration, and ownership than students in other CUREs. FRI research courses alone had a significant effect on students' career intentions.
Adaptively Tuned Iterative Low Dose CT Image Denoising
Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.
2015-01-01
Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues, a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block-matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
A Declarative Design Approach to Modeling Traditional and Non-Traditional Space Systems
NASA Astrophysics Data System (ADS)
Hoag, Lucy M.
The space system design process is known to be laborious, complex, and computationally demanding. It is highly multi-disciplinary, involving several interdependent subsystems that must be both highly optimized and reliable due to the high cost of launch. Satellites must also be capable of operating in harsh and unpredictable environments, so integrating high-fidelity analysis is important. To address each of these concerns, a holistic design approach is necessary. However, while the sophistication of space systems has evolved significantly in the last 60 years, improvements in the design process have been comparatively stagnant. Space systems continue to be designed using a procedural, subsystem-by-subsystem approach. This method is inadequate since it generally requires extensive iteration and limited or heuristic-based search, which can be slow, labor-intensive, and inaccurate. The use of a declarative design approach can potentially address these inadequacies. In the declarative programming style, the focus of a problem is placed on what the objective is, and not necessarily how it should be achieved. In the context of design, this entails knowledge expressed as a declaration of statements that are true about the desired artifact instead of explicit instructions on how to implement it. A well-known technique is through constraint-based reasoning, where a design problem is represented as a network of rules and constraints that are reasoned across by a solver to dynamically discover the optimal candidate(s). This enables implicit instantiation of the tradespace and allows for automatic generation of all feasible design candidates. As such, this approach also appears to be well-suited to modeling adaptable space systems, which generally have large tradespaces and possess configurations that are not well-known a priori. This research applied a declarative design approach to holistic satellite design and to tradespace exploration for adaptable space systems. 
The approach was tested during the design of USC's Aeneas nanosatellite project, and a case study was performed to assess the advantages of the new approach over past procedural approaches. It was found that use of the declarative approach improved design accuracy through exhaustive tradespace search and provable optimality; decreased design time through improved model generation, faster run time, and reduction in time and number of iteration cycles; and enabled modular and extensible code. Observed weaknesses included non-intuitive model abstraction; increased debugging time; and difficulty of data extrapolation and analysis.
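The core of the declarative, constraint-based approach described above is that a design is stated as domains plus constraints, and the solver enumerates every feasible candidate rather than iterating subsystem-by-subsystem. A toy sketch, with made-up variable names and constraints purely for illustration (real constraint solvers prune the search rather than enumerate exhaustively):

```python
from itertools import product

def feasible_designs(domains, constraints):
    """Declarative tradespace sketch: state what must be true
    (constraints) and yield every candidate in the cross product of
    the domains that satisfies all of them."""
    keys = list(domains)
    for values in product(*(domains[k] for k in keys)):
        candidate = dict(zip(keys, values))
        if all(c(candidate) for c in constraints):
            yield candidate
```

For example, with hypothetical domains {"panels": [2, 4], "battery_wh": [50, 100]} and a power-margin rule like panels * 30 >= battery_wh, the generator implicitly instantiates the tradespace and returns only the feasible designs.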
Progress with new malaria vaccines.
Webster, Daniel; Hill, Adrian V. S.
2003-01-01
Malaria is a parasitic disease of major global health significance that causes an estimated 2.7 million deaths each year. In this review we describe the burden of malaria and discuss the complicated life cycle of Plasmodium falciparum, the parasite responsible for most of the deaths from the disease, before reviewing the evidence that suggests that a malaria vaccine is an attainable goal. Significant advances have recently been made in vaccine science, and we review new vaccine technologies and the evaluation of candidate malaria vaccines in human and animal studies worldwide. Finally, we discuss the prospects for a malaria vaccine and the need for iterative vaccine development as well as potential hurdles to be overcome. PMID:14997243
Three-D Flow Analysis of the Alternate SSME HPOT TAD
NASA Technical Reports Server (NTRS)
Kubinski, Cheryl A.
1993-01-01
This paper describes the results of numerical flow analyses performed in support of design development of the Space Shuttle Main Engine Alternate High Pressure Oxidizer Turbine Turn-around duct (TAD). The flow domain has been modeled using a 3D, Navier-Stokes, general purpose flow solver. The goal of this effort is to achieve an alternate TAD exit flow distribution which closely matches that of the baseline configuration. 3D Navier Stokes CFD analyses were employed to evaluate numerous candidate geometry modifications to the TAD flowpath in order to achieve this goal. The design iterations are summarized, as well as a description of the computational model, numerical results and the conclusions based on these calculations.
NASA Technical Reports Server (NTRS)
Jacobs, Gilda
1990-01-01
A study of space suit structures and materials is under way at NASA Ames Research Center, Moffett Field, CA. The study was initiated by the need for a generation of lightweight space suits to be used in future planetary Exploration Missions. This paper provides a brief description of the Lunar and Mars environments and reviews what has been done in the past in the design and development of fabric, metal, and composite suit components in order to establish criteria for comparison of promising candidate materials and space suit structures. Environmental factors and mission scenarios will present challenging material and structural requirements; thus, a program is planned to outline the methodology used to identify materials and processes for producing candidate space suit structures which meet those requirements.
Multi-scale analysis and characterization of the ITER pre-compression rings
NASA Astrophysics Data System (ADS)
Foussat, A.; Park, B.; Rajainmaki, H.
2014-01-01
The toroidal field (TF) system of the ITER tokamak, composed of 18 D-shaped TF coils, experiences out-of-plane forces during an operating scenario caused by the interaction between the 68 kA operating TF current and the poloidal magnetic fields. In order to keep the induced static and cyclic stress range in the intercoil shear keys between coil cases within the ITER allowable limits [1], centripetal preload is introduced by means of S2 fiber-glass/epoxy composite pre-compression rings (PCRs). These PCRs consist of two sets of three rings, each 5 m in diameter and 337 × 288 mm in cross-section, installed at the top and bottom regions to apply a total resultant preload of 70 MN per TF coil, equivalent to about 400 MPa hoop stress. Recent developments in composites in the aerospace industry have accelerated the use of advanced composites as primary structural materials. The PCRs represent one of the most challenging composite applications: large, highly stressed structures operating at 4 K over a long service life. Efficient design of these pre-compression composite structures requires a detailed understanding of both the failure behavior of the structure and the fracture behavior of the material. Owing to the inherent difficulty of carrying out a real-scale testing campaign, there is a need to develop simulation tools to predict the multiple complex failure mechanisms in pre-compression rings. A framework contract was placed by the ITER Organization with SENER Ingenieria y Sistemas SA to develop multi-scale models representative of the composite structure of the pre-compression rings based on experimental material data. The predictive modeling, based on ABAQUS FEM, provides the opportunity both to understand better how PCR composites behave in operating conditions and to support the supplier's development of materials with enhanced performance to withstand the machine design lifetime of 30,000 cycles.
The multi-scale stress analysis has revealed a complete picture of the stress levels within the fiber and the matrix regarding the static and fatigue performance of the ring structure, including the presence of a delamination defect of critical size. The analysis results for the composite material demonstrate that the rings' performance objectives are met under all loading and strength conditions.
Sparse magnetic resonance imaging reconstruction using the bregman iteration
NASA Astrophysics Data System (ADS)
Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo
2013-01-01
Magnetic resonance imaging (MRI) reconstruction needs many samples that are sequentially acquired by using phase encoding gradients in an MRI system. This is directly connected to the scan time of the MRI system, which can be long. Therefore, many researchers have studied ways to reduce the scan time; in particular, compressed sensing (CS) enables reconstruction of sparse images from fewer samples when the k-space is not fully sampled. Recently, an iterative technique based on the Bregman method was developed for denoising. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we studied sparse sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. The images were obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) with a phantom and an in vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated the root-mean-square error (RMSE) values from error images that were obtained using various numbers of Bregman iterations. Our reconstructed images using the Bregman iteration for sparse sampling showed good results compared with the original images. Moreover, the RMSE values showed that the sparse reconstructed phantom and human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction methods using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.
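The "add back the residual" update that lets Bregman iteration recover fine-scale structure lost to regularization can be shown in a toy 1-D setting. This is a deliberately minimal sketch with an l1 penalty and an identity forward operator, not the paper's TV-regularized reconstruction from undersampled k-space:

```python
import numpy as np

def shrink(v, t):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman_l1_denoise(b, lam=2.0, iters=5):
    """Toy Bregman iteration: repeatedly solve the shrinkage step
    min ||x||_1 + (lam/2)||x - b_k||^2, then add the residual back
    (b_k <- b_k + (b - x)), which restores signal lost to shrinkage."""
    bk = b.copy()
    x = np.zeros_like(b)
    for _ in range(iters):
        x = shrink(bk, 1.0 / lam)
        bk = bk + (b - x)  # the Bregman "add back the residual" step
    return x
```

A single shrinkage step would bias the large entry of b toward zero; the Bregman updates progressively undo that bias, which is the behavior the abstract describes for TV-regularized images.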
NASA Astrophysics Data System (ADS)
Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho
2015-01-01
Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied in many fields, such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on the conventional ICA has been proposed to mitigate these problems. The proposed method extracts more stable source signals in a valid order through an iterative reordering process on the extracted mixing matrix, which reconstructs the finally converged source signals by referring to the magnitudes of the correlation coefficients between the intermediately separated signals and signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to real problems of complex structures, an experiment has been carried out on a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of a conventional ICA technique.
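The reordering step, matching each separated signal to the near-source reference measurement it correlates with most strongly, can be sketched with a greedy assignment over the absolute correlation matrix. This is an illustrative reconstruction of that one step, not the paper's full iterative ICA algorithm:

```python
import numpy as np

def reorder_by_correlation(separated, references):
    """Greedily assign each separated signal (row) to the reference
    signal it has the largest |correlation| with, and return the
    separated signals reordered to match the reference order."""
    k = separated.shape[0]
    # corrcoef stacks both row sets; the [:k, k:] block is the
    # cross-correlation between separated and reference signals
    C = np.abs(np.corrcoef(separated, references)[:k, k:])
    order = [-1] * k
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(C), C.shape)
        order[j] = i
        C[i, :] = -1.0  # mark this separated signal as used
        C[:, j] = -1.0  # mark this reference as matched
    return separated[order]
```

In the proposed method this matching is applied inside the iteration loop, so that the converged source signals come out both stable and validly ordered.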
Invariants, Attractors and Bifurcation in Two Dimensional Maps with Polynomial Interaction
NASA Astrophysics Data System (ADS)
Hacinliyan, Avadis Simon; Aybar, Orhan Ozgur; Aybar, Ilknur Kusbeyzi
This work will present an extended discrete-time analysis of maps and their generalizations, including higher iterates, in order to better understand the resulting enrichment of the bifurcation properties. The standard concepts of stability analysis and bifurcation theory for maps will be used. Both iterated maps and flows are used as models for chaotic behavior. It is well known that when flows are converted to maps by discretization, the equilibrium points remain the same but a richer bifurcation scheme is observed. For example, the logistic map has a very simple behavior as a differential equation, but as a map, fold and period-doubling bifurcations are observed. A way to gain information about the global structure of the state space of a dynamical system is to investigate invariant manifolds of saddle equilibrium points. Studying the intersections of the stable and unstable manifolds is essential for understanding the structure of a dynamical system. It has been known that the Lotka-Volterra map, and systems that can be reduced to it or its generalizations in special cases involving local and polynomial interactions, admit invariant manifolds. Bifurcation analysis of this map and its higher iterates can be done to understand the global structure of the system and the artifacts of the discretization by comparing with the corresponding results from the differential equation on which they are based.
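The period-doubling cascade mentioned for the logistic map is easy to observe numerically: iterate x -> r*x*(1-x), discard a transient, and count the distinct values the orbit settles onto. A minimal sketch (the transient length and rounding tolerance are pragmatic choices, not derived quantities):

```python
def logistic_orbit(r, x0=0.5, transient=500, keep=16):
    """Iterate the logistic map x -> r*x*(1-x); after discarding a
    transient, the distinct retained points reveal the attractor's
    period (1 = fixed point, 2 = period-2 cycle, and so on)."""
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))  # round so converged points coincide
    return sorted(set(orbit))
```

Raising r through roughly 3.0 and 3.449 doubles the period from 1 to 2 to 4, the fold/period-doubling structure absent from the corresponding differential equation.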
The fractal geometry of Hartree-Fock
NASA Astrophysics Data System (ADS)
Theel, Friethjof; Karamatskou, Antonia; Santra, Robin
2017-12-01
The Hartree-Fock method is an important approximation for the ground-state electronic wave function of atoms and molecules, and its use is widespread in computational chemistry and physics. The Hartree-Fock method is an iterative procedure in which the electronic wave functions of the occupied orbitals are determined. The set of functions found in one step builds the basis for the next iteration step. In this work, we interpret the Hartree-Fock method as a dynamical system: an iteration whose steps represent the time development of the system, as encountered in the theory of fractals. The focus is put on the convergence behavior of the dynamical system as a function of a suitable control parameter. In our case, a complex parameter λ controls the strength of the electron-electron interaction. An investigation of the convergence behavior depending on the parameter λ is performed for helium, neon, and argon. We observe fractal structures in the complex λ-plane, which resemble the well-known Mandelbrot set, determine their fractal dimension, and find that with increasing nuclear charge, the fragmentation increases as well.
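A toy analogue of probing convergence in the complex λ-plane is the Mandelbrot membership test for z → z² + λ, which the abstract's fractal structures are said to resemble; the study itself iterates the Hartree-Fock map, not this quadratic map:

```python
def converges(lam, max_iter=200, bound=2.0):
    """Return True if the iteration z -> z**2 + lam stays bounded -- the
    Mandelbrot membership test, used here as a toy analogue of scanning
    a complex control parameter for convergence of an iteration."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + lam
        if abs(z) > bound:
            return False
    return True
```

Evaluating `converges` over a grid of complex λ values yields the familiar Mandelbrot picture; the paper does the analogous scan with the SCF iteration and then measures the fractal dimension of the resulting boundary.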
Ramachandra, Ranjan; de Jonge, Niels
2012-01-01
Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D data sets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution were efficient in reducing the imaging noise. With an increasing number of iterations, the axial resolution increased and most of the structural information was preserved. Additional iterations improved the axial resolution by up to a factor of 4 to 6, depending on the particular data set, to at best 8 nm, but at the cost of a reduction of the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for the highest axial resolution is best suited for applications where one is interested only in the 3D locations of nanoparticles. PMID:22152090
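Iterative deconvolution of this kind can be illustrated with a 1D Richardson-Lucy sketch. Here the PSF is assumed known, whereas the study estimates its PSFs by blind deconvolution; the function and its parameters are illustrative, not the authors' pipeline.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Iterative Richardson-Lucy deconvolution in 1D.

    Each iteration blurs the current estimate, compares it with the
    observation, and multiplies the estimate by the back-projected
    ratio -- more iterations sharpen the result, mirroring the
    resolution-vs-iterations trade-off described above."""
    observed = np.asarray(observed, float)
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

On a noiseless impulse blurred by a short kernel, the iteration progressively re-concentrates the signal at the impulse location, which is why the procedure suits applications that only need particle positions.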
NASA Astrophysics Data System (ADS)
Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; Ng, Esmond G.; Maris, Pieter; Vary, James P.
2018-01-01
We describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of the special structure of the nuclear configuration interaction problem, which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. We also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos-based algorithm for problems of moderate sizes on a Cray XC30 system.
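A minimal stand-in for a block iterative eigensolver is block power iteration with a Rayleigh-Ritz projection; the paper's solver additionally uses a preconditioner and physics-informed starting guesses, both omitted in this sketch.

```python
import numpy as np

def block_subspace_iteration(A, k, iters=200, seed=0):
    """Compute the k largest eigenpairs of a symmetric matrix by block
    power iteration with a Rayleigh-Ritz projection (a minimal sketch
    of a block iterative eigensolver; no preconditioning)."""
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        V, _ = np.linalg.qr(A @ V)      # apply operator, re-orthonormalize
    H = V.T @ A @ V                     # Rayleigh-Ritz: small k x k problem
    w, U = np.linalg.eigh(H)
    return w, V @ U
```

Working with a block of vectors is what exposes the concurrency and arithmetic intensity the abstract refers to: the dominant cost is a matrix-times-block product rather than a sequence of matrix-vector products.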
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Small region of interest (ROI) settings and reverse processing were also applied to improve performance. Both algorithms reduced artifacts, while slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
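The ML-EM update that OS-EM accelerates can be sketched for a tiny linear system. The system matrix, data, and iteration count below are illustrative; OS-EM differs only in cycling the same multiplicative update over ordered subsets of the projection rows.

```python
import numpy as np

def mlem(A, y, iterations=100):
    """Maximum likelihood expectation maximization (ML-EM) for data
    y ~ A @ x with nonnegative A, y: the multiplicative update
    x <- x * A^T(y / Ax) / A^T(1), the base algorithm that OS-EM
    accelerates by using ordered subsets of the rows of A."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity (column sums)
    for _ in range(iterations):
        proj = A @ x                          # forward projection
        ratio = y / np.maximum(proj, 1e-12)   # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

For consistent, noiseless data the iteration converges to the exact nonnegative solution, which is what the assertion below checks.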
NASA Technical Reports Server (NTRS)
Barnes, Bruce W.; Sessions, Alaric M.; Beyon, Jeffrey; Petway, Larry B.
2014-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-performing components with smaller components custom-designed for the power system. The existing power system was analyzed to rank components in terms of inefficiency, power dissipation, footprint, and mass. Design considerations and priorities are compared along with the results of each design iteration. Overall power system improvements are summarized for design implementations.
Reliability of a structured interview for admission to an emergency medicine residency program.
Blouin, Danielle
2010-10-01
Interviews are most important in resident selection. Structured interviews are more reliable than unstructured ones. We sought to measure the interrater reliability of a newly designed structured interview during the selection process to an Emergency Medicine residency program. The critical incident technique was used to extract the desired dimensions of performance. The interview tool consisted of 7 clinical scenarios and 1 global rating. Three trained interviewers marked each candidate on all scenarios without discussing candidates' responses. Interitem consistency and estimates of variance were computed. Twenty-eight candidates were interviewed. The generalizability coefficient was 0.67. Removing the central tendency ratings increased the coefficient to 0.74. Coefficients of interitem consistency ranged from 0.64 to 0.74. The structured interview tool provided good although suboptimal interrater reliability. Increasing the number of scenarios improves reliability as does applying differential weights to the rating scale anchors. The latter would also facilitate the identification of those candidates with extreme ratings.
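Interitem consistency of the kind reported above can be illustrated with Cronbach's alpha on a candidates × items score matrix. This is a sketch of the related consistency statistic only; the paper's generalizability coefficient comes from a variance-components analysis, which is not reproduced here.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (candidates x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals).
    A standard interitem-consistency measure, used here to illustrate
    the range of coefficients reported in the abstract."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

Perfectly parallel items give alpha = 1, and alpha rises with the number of items, which is the mechanism behind the abstract's remark that adding scenarios improves reliability.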
L. Linsen; B.J. Karis; E.G. McPherson; B. Hamann
2005-01-01
In computer graphics, models describing the fractal branching structure of trees typically exploit the modularity of tree structures. The models are based on local production rules, which are applied iteratively and simultaneously to create a complex branching system. The objective is to generate three-dimensional scenes of often many realistic-looking and non-...
On Nonequivalence of Several Procedures of Structural Equation Modeling
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Chan, Wai
2005-01-01
The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure…
Novel Mycosin Protease MycP1 Inhibitors Identified by Virtual Screening and 4D Fingerprints
2015-01-01
The rise of drug-resistant Mycobacterium tuberculosis lends urgency to the need for new drugs for the treatment of tuberculosis (TB). The identification of a serine protease, mycosin protease-1 (MycP1), as the crucial agent in hydrolyzing the virulence factor, ESX-secretion-associated protein B (EspB), potentially opens the door to new tuberculosis treatment options. Using the crystal structure of mycobacterial MycP1 in the apo form, we performed an iterative ligand- and structure-based virtual screening (VS) strategy to identify novel, nonpeptide, small-molecule inhibitors against MycP1 protease. Screening of ∼485 000 ligands from databases at the Genomics Research Institute (GRI) at the University of Cincinnati and the National Cancer Institute (NCI) using our VS approach, which integrated a pharmacophore model and consensus molecular shape patterns of active ligands (4D fingerprints), identified 81 putative inhibitors, and in vitro testing subsequently confirmed two of them as active inhibitors. Thereafter, the lead structures of each VS round were used to generate a new 4D fingerprint that enabled virtual rescreening of the chemical libraries. Finally, the iterative process identified a number of diverse scaffolds as lead compounds that were tested and found to have micromolar IC50 values against the MycP1 target. This study validated the efficiency of the SABRE 4D fingerprints as a means of identifying novel lead compounds in each screening round of the databases. Together, these results underscored the value of using a combination of in silico iterative ligand- and structure-based virtual screening of chemical libraries with experimental validation for the identification of promising structural scaffolds, such as the MycP1 inhibitors. PMID:24628123
PLAN2D - A PROGRAM FOR ELASTO-PLASTIC ANALYSIS OF PLANAR FRAMES
NASA Technical Reports Server (NTRS)
Lawrence, C.
1994-01-01
PLAN2D is a FORTRAN computer program for the plastic analysis of planar rigid frame structures. Given a structure and loading pattern as input, PLAN2D calculates the ultimate load that the structure can sustain before collapse. Element moments and plastic hinge rotations are calculated for the ultimate load. The locations of the hinges required for a collapse mechanism to form are also determined. The program proceeds in an iterative series of linear elastic analyses. After each iteration the resulting elastic moments in each member are compared to the reserve plastic moment capacity of that member. The member or members with moments closest to their reserve capacity determine the minimum load factor and the site where the next hinge is to be inserted. Next, hinges are inserted and the structural stiffness matrix is reformulated. This cycle is repeated until the structure becomes unstable. At this point the ultimate collapse load is calculated by accumulating the minimum load factors from the previous iterations and multiplying them by the original input loads. PLAN2D is based on the program STAN, originally written by Dr. E.L. Wilson at U.C. Berkeley. PLAN2D has several limitations: 1) Although PLAN2D will detect unloading of hinges, it does not contain the capability to remove hinges; 2) PLAN2D does not allow the user to input different positive and negative moment capacities; and 3) PLAN2D does not consider the interaction between axial load and plastic moment capacity. Axial yielding and buckling are ignored, as is the reduction in moment capacity due to axial load. PLAN2D is written in FORTRAN and is machine independent. It has been tested on an IBM PC and a DEC MicroVAX. The program was developed in 1988.
NASA Astrophysics Data System (ADS)
Li, Zhengguang; Lai, Siu-Kai; Wu, Baisheng
2018-07-01
Determining eigenvector derivatives is a challenging task due to the singularity of the coefficient matrices of the governing equations, especially for structural dynamic systems with repeated eigenvalues. An effective strategy is proposed to construct a non-singular coefficient matrix, which can be directly used to obtain the eigenvector derivatives with distinct and repeated eigenvalues. This approach also has the advantage of requiring only the eigenvalues and eigenvectors of interest, without solving for the particular solutions of the eigenvector derivatives. The Symmetric Quasi-Minimal Residual (SQMR) method is then adopted to solve the governing equations; only the existing factored (shifted) stiffness matrix from an iterative eigensolution, such as the subspace iteration method or the Lanczos algorithm, is utilized. The present method can deal with both cases of simple and repeated eigenvalues in a unified manner. Three numerical examples are given to illustrate the accuracy and validity of the proposed algorithm. Highly accurate approximations to the eigenvector derivatives are obtained within a few iteration steps, making a significant reduction of the computational effort. This method can be incorporated into a coupled eigensolver/derivative software module. In particular, it is applicable to finite element models with large sparse matrices.
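For a simple eigenvalue of a symmetric matrix, the eigenvalue derivative has a closed form needing only the eigenpair of interest; it is the eigenvector derivative that requires the extra machinery the paper constructs. A sketch of the eigenvalue case, verified against a finite difference (matrices below are illustrative):

```python
import numpy as np

def eigenvalue_derivative(K, dK, x):
    """Derivative of a simple eigenvalue of a symmetric matrix K(p)
    with respect to the parameter p: lambda'(p) = x^T (dK/dp) x for a
    unit eigenvector x.  (Eigenvector derivatives face the singular
    coefficient matrix the paper's method is built to avoid.)"""
    x = x / np.linalg.norm(x)
    return x @ dK @ x
```

The formula follows from differentiating K x = λ x and left-multiplying by xᵀ, which cancels the (singular) eigenvector-derivative term.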
Hultenmo, Maria; Caisander, Håkan; Mack, Karsten; Thilander-Klang, Anne
2016-06-01
The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms, adaptive statistical IR (ASiR™) and model-based IR (Veo™), was compared. Axial and coronal images were reconstructed with 70 % ASiR with the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. The Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, the Veo reconstruction resulted in significantly improved ratings of the diagnostic use of the image quality compared with the ASiR reconstruction. This was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Iterative algorithms for computing the feedback Nash equilibrium point for positive systems
NASA Astrophysics Data System (ADS)
Ivanov, I.; Imsland, Lars; Bogdanova, B.
2017-03-01
The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.
Estimation of carbon fibre composites as ITER divertor armour
NASA Astrophysics Data System (ADS)
Pestchanyi, S.; Safronov, V.; Landman, I.
2004-08-01
Exposure of the carbon fibre composites (CFC) NB31 and NS31 to multiple plasma pulses has been performed at the plasma guns MK-200UG and QSPA. Numerical simulation of the same CFCs under the heat load typical of ITER type I ELMs has been carried out using the code PEGASUS-3D. Comparative analysis of the numerical and experimental results allowed the erosion mechanism of CFC to be understood on the basis of the simulation results. A modification of the CFC structure has been proposed in order to decrease the armour erosion rate.
Implementation of a nonlinear concrete cracking algorithm in NASTRAN
NASA Technical Reports Server (NTRS)
Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.
1976-01-01
A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct access file system was used to save results at each load step to restart within the solution module for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.
Spectral Analysis for Weighted Iterated Triangulations of Graphs
NASA Astrophysics Data System (ADS)
Chen, Yufei; Dai, Meifeng; Wang, Xiaoqian; Sun, Yu; Su, Weiyi
Much information about the structural properties and dynamical aspects of a network is measured by the eigenvalues of its normalized Laplacian matrix. In this paper, we aim to present a first study on the spectra of the normalized Laplacian of weighted iterated triangulations of graphs. We analytically obtain all the eigenvalues, as well as their multiplicities from two successive generations. As an example of application of these results, we then derive closed-form expressions for their multiplicative Kirchhoff index, Kemeny’s constant and number of weighted spanning trees.
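An unweighted sketch of one triangulation step and of computing the normalized Laplacian spectrum numerically (the paper treats the weighted case and obtains the eigenvalues analytically):

```python
import numpy as np

def triangulate(adj):
    """One triangulation step: for every edge (u, v) add a new vertex
    adjacent to both u and v (unweighted sketch of the iterated
    triangulation construction)."""
    n = adj.shape[0]
    edges = [(u, v) for u in range(n) for v in range(u + 1, n) if adj[u, v]]
    new = np.zeros((n + len(edges), n + len(edges)), dtype=int)
    new[:n, :n] = adj
    for w, (u, v) in enumerate(edges, start=n):
        new[u, w] = new[w, u] = 1
        new[v, w] = new[w, v] = 1
    return new

def normalized_laplacian_spectrum(adj):
    """Eigenvalues of L = I - D^{-1/2} A D^{-1/2}, in ascending order."""
    d = adj.sum(axis=1).astype(float)
    dinv = 1.0 / np.sqrt(d)
    L = np.eye(len(d)) - dinv[:, None] * adj * dinv[None, :]
    return np.linalg.eigvalsh(L)
```

The spectrum always lies in [0, 2] with a zero eigenvalue per connected component, and its trace equals the number of vertices; these invariants make a convenient sanity check across triangulation generations.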
Network structures between strategies in iterated prisoners' dilemma games
NASA Astrophysics Data System (ADS)
Kim, Young Jin; Roh, Myungkyoon; Son, Seung-Woo
2014-02-01
We use replicator dynamics to study an iterated prisoners' dilemma game with memory. In this study, we investigate the characteristics of all 32 possible strategies with a single-step memory by observing the results when each strategy encounters another one. Based on these results, we define similarity measures between the 32 strategies and perform a network analysis of the relationship between the strategies by constructing a strategies network. Interestingly, we find that a win-lose circulation, like rock-paper-scissors, exists between strategies and that the circulation results from one unusual strategy.
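The 32 single-step-memory strategies and their pairwise encounters can be enumerated directly; a minimal sketch (the payoff values T=5, R=3, P=1, S=0 are the conventional choice, not necessarily the paper's):

```python
import itertools

# Row player's payoffs for the prisoner's dilemma, keyed (my_move, their_move)
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
OUTCOMES = [("C", "C"), ("C", "D"), ("D", "C"), ("D", "D")]

def play(s1, s2, rounds=50):
    """Average payoffs of two single-step-memory strategies.
    A strategy is (first_move, {(my_last, their_last): next_move}),
    giving 2**5 = 32 strategies in all."""
    m1, m2 = s1[0], s2[0]
    total1 = total2 = 0
    for _ in range(rounds):
        total1 += PAYOFF[(m1, m2)]
        total2 += PAYOFF[(m2, m1)]
        m1, m2 = s1[1][(m1, m2)], s2[1][(m2, m1)]
    return total1 / rounds, total2 / rounds

def all_strategies():
    """Enumerate all 32 single-step-memory strategies."""
    for first in "CD":
        for resp in itertools.product("CD", repeat=4):
            yield (first, dict(zip(OUTCOMES, resp)))

TIT_FOR_TAT = ("C", {o: o[1] for o in OUTCOMES})       # copy opponent's last move
ALWAYS_DEFECT = ("D", {o: "D" for o in OUTCOMES})
```

Running `play` over all 32 × 32 pairings yields the payoff table from which the paper's similarity measures and strategy network are built.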
Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity
NASA Technical Reports Server (NTRS)
Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan
1992-01-01
The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
Manufacture and Quality Control of Insert Coil with Real ITER TF Conductor
Ozeki, H.; Isono, T.; Uno, Y.; ...
2016-03-02
JAEA successfully completed the manufacture of the toroidal field (TF) insert coil (TFIC) for a performance test of the ITER TF conductor in the final design, in cooperation with Hitachi, Ltd. The TFIC is a single-layer, 8.875-turn solenoid coil with a 1.44-m diameter. It will be tested with a 68-kA applied current in a 13-T external magnetic field. The TFIC was manufactured in the following order: winding of the TF conductor, lead bending, fabrication of the electrical termination, heat treatment, turn insulation, installation of the coil into the support mandrel structure, vacuum pressure impregnation (VPI), structure assembly, and instrumentation. In this presentation, the manufacturing process and the quality-control status of the TFIC are reported.
NASA Astrophysics Data System (ADS)
Zhang, Langwen; Xie, Wei; Wang, Jingcheng
2017-11-01
In this work, synthesis of robust distributed model predictive control (MPC) is presented for a class of linear systems subject to structured time-varying uncertainties. By decomposing a global system into smaller dimensional subsystems, a set of distributed MPC controllers, instead of a centralised controller, are designed. To ensure the robust stability of the closed-loop system with respect to model uncertainties, distributed state feedback laws are obtained by solving a min-max optimisation problem. The design of robust distributed MPC is then transformed into solving a minimisation optimisation problem with linear matrix inequality constraints. An iterative online algorithm with adjustable maximum iteration is proposed to coordinate the distributed controllers to achieve a global performance. The simulation results show the effectiveness of the proposed robust distributed MPC algorithm.
Kwon, Ji-Wook; Kim, Jin Hyo; Seo, Jiwon
2015-01-01
This paper proposes a Multiple Leader Candidate (MLC) structure and a Competitive Position Allocation (CPA) algorithm which can be applied to various applications including environmental sensing. Unlike previous formation structures, such as virtual-leader and actual-leader structures with position allocation by rigid allocation or optimization-based allocation, a formation employing the proposed MLC structure and CPA algorithm is robust against the fault (or disappearance) of member robots and reduces the overall cost. In the MLC structure, a leader of the entire system is chosen among leader candidate robots. The CPA algorithm is a decentralized position allocation algorithm that assigns the robots to the vertices of the formation via competition among adjacent robots. Numerical simulations and experimental results are included to show the feasibility and performance of a multiple robot system employing the proposed MLC structure and CPA algorithm. PMID:25954956
A relational learning approach to Structure-Activity Relationships in drug design toxicity studies.
Camacho, Rui; Pereira, Max; Costa, Vítor Santos; Fonseca, Nuno A; Adriano, Carlos; Simões, Carlos J V; Brito, Rui M M
2011-09-16
It has been recognized that the development of new therapeutic drugs is a complex and expensive process. A large number of factors affect the activity in vivo of putative candidate molecules, and the propensity for causing adverse and toxic effects is recognized as one of the major hurdles behind the current "target-rich, lead-poor" scenario. Structure-Activity Relationship (SAR) studies, using relational Machine Learning (ML) algorithms, have already been shown to be very useful in the complex process of rational drug design. Despite the ML successes, human expertise is still of the utmost importance in the drug development process. An iterative process and tight integration between the models developed by ML algorithms and the know-how of medicinal chemistry experts would be a very useful symbiotic approach. In this paper we describe a software tool that achieves that goal: iLogCHEM. The tool allows the use of Relational Learners in the task of identifying molecules or molecular fragments with potential to produce toxic effects, and thus helps in streamlining drug design in silico. It also allows the expert to guide the search for useful molecules without the need to know the details of the algorithms used. The models produced by the algorithms may be visualized using a graphical interface that is in common use amongst researchers in structural biology and medicinal chemistry. The graphical interface enables the expert to provide feedback to the learning system. The developed tool also has facilities to handle the similarity bias typical of large chemical databases. For that purpose the user can filter out similar compounds when assembling a data set. Additionally, we propose ways of providing background knowledge for Relational Learners using the results of Graph Mining algorithms. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.
NASA Astrophysics Data System (ADS)
Kobayashi, K.; Isobe, K.; Iwai, Y.; Hayashi, T.; Shu, W.; Nakamura, H.; Kawamura, Y.; Yamada, M.; Suzuki, T.; Miura, H.; Uzawa, M.; Nishikawa, M.; Yamanishi, T.
2007-12-01
Confinement and removal of tritium are key subjects for the safety of ITER. The ITER buildings are confinement barriers for tritium. In a hot cell, tritium is often released as vapour and is in contact with the inner walls. The inner walls of the ITER tritium plant building will also be exposed to tritium in an accident. The tritium released in the buildings is removed by the atmosphere detritiation systems (ADS), where the tritium is oxidized by catalysts and removed as water. A special gas, SF6, is used in ITER and is expected to be released in an accident such as a fire. Although SF6 gas is a potential catalyst poison, the performance of the ADS in the presence of SF6 has not yet been confirmed. Tritiated water is produced in the regeneration process of the ADS and is subsequently processed by the ITER water detritiation system (WDS). One of the key components of the WDS is an electrolysis cell. To address the issues in global tritium confinement, a series of experimental studies has been carried out as an ITER R&D task: (1) tritium behaviour in concrete; (2) the effect of SF6 on the performance of the ADS; and (3) the tritium durability of the electrolysis cell of the ITER-WDS. (1) The tritiated water vapour penetrated up to 50 mm into the concrete from the surface in six months' exposure; the penetration rate of tritium in the concrete was thus appreciable. The isotope exchange capacity of the cement paste plays an important role in tritium trapping and penetration into concrete materials when concrete is exposed to tritiated water vapour. The effect of coating on the penetration rate needs to be evaluated quantitatively from the actual tritium tests. (2) SF6 gas decreased the detritiation factor of the ADS. Since the effect of SF6 depends closely on its concentration, the amount of SF6 released into the tritium handling area in an accident should be reduced through careful arrangement of components in the buildings.
(3) It was expected that the electrolysis cell of the ITER-WDS could endure 3 years' operation under the ITER design conditions. Measuring the concentration of the fluorine ions could be a promising technique for monitoring the damage to the electrolysis cell.
NASA Astrophysics Data System (ADS)
Lee, Dong Won; Shin, Kyu In; Kim, Suk Kwon; Jin, Hyung Gon; Lee, Eo Hwak; Yoon, Jae Sung; Choi, Bo Guen; Moon, Se Youn; Hong, Bong Guen
2014-10-01
Tungsten (W) and ferritic-martensitic steel (FMS), as armor and structural materials, respectively, are the major candidates for plasma-facing components (PFCs), such as the blanket first wall (BFW) and the divertor, in a fusion reactor. In the present study, three W/FMS mockups were successfully fabricated using hot isostatic pressing (HIP, 900 °C, 100 MPa, 1.5 h) followed by a post-HIP heat treatment (PHHT; tempering, 750 °C, 70 MPa, 2 h), and the W/FMS joining method was developed based on the ITER BFW and test blanket module (TBM) development project from 2004 to the present. Using a 10-MHz flat-type probe to ultrasonically test the joints, we found no defects in the fabricated mockups. To confirm the joint integrity, a high-heat-flux test will be performed up to the thermal lifetime of the mockup under the proper test conditions. These conditions were determined through a preliminary analysis with conventional codes such as ANSYS-CFX for the thermal-hydraulic conditions, considering the test facility, the Korea heat load test facility with an electron beam (KoHLT-EB), and its water coolant system at the Korea Atomic Energy Research Institute (KAERI).
Planarity constrained multi-view depth map reconstruction for urban scenes
NASA Astrophysics Data System (ADS)
Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie
2018-05-01
Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, challenges arise when this technique is applied to urban scenes, where man-made regular shapes are prevalent. To address this, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes is first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms the popular multi-view depth map reconstruction methods, with twice the accuracy on the aerial datasets, and achieves an outcome comparable to the state of the art for ground images. As expected, PMVD is able to preserve the planarity of piecewise flat structures in urban scenes and restore the edges in depth-discontinuous areas.
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography yield reconstructions with highly anisotropic resolution, so special algorithms have been developed. State-of-the-art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography, the Total Variation Iterative Constraint (TVIC) method, which extends the applicability of TV regularization to non-piecewise-constant samples, like biological cells. The approach consists of two parts. First, TV minimization is used as a strong regularizer to create a sharp-edged image, which is converted to a 3D binary mask that is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic the basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (the SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition, including optical diffraction tomography solvers. The obtained reconstructions exhibit the resolution uniformity and general shape accuracy expected from TV-regularization-based solvers, while keeping the smooth internal structures of the object at the same time. Comparison between three different patterns of object illumination arrangement shows a very small impact of the projection acquisition geometry on the image quality.
An Iterative Method for Problems with Multiscale Conductivity
Kim, Hyea Hyun; Minhas, Atul S.; Woo, Eung Je
2012-01-01
A model whose conductivity varies sharply across a very thin layer is considered. It is related to a stable phantom model, devised to generate a certain apparent conductivity inside a region surrounded by a thin cylinder with holes. The thin cylinder is an insulator, and both the inside and the outside of the thin cylinder are filled with the same saline. The injected current can enter only through the holes in the thin cylinder. The model has a high-contrast conductivity discontinuity across the thin cylinder, and the thickness of the layer and the size of the holes are very small compared to the domain of the model problem. Numerical methods for such a model require a very fine mesh near the thin layer to resolve the conductivity discontinuity. In this work, an efficient numerical method for such a model problem is proposed by employing a uniform mesh, which need not resolve the conductivity discontinuity. The discrete problem is then solved by an iterative method, where the solution is improved by solving a simple discrete problem with a uniform conductivity. At each iteration, the right-hand side is updated by integrating the previous iterate over the thin cylinder. This process results in a certain smoothing effect on microscopic structures, and our discrete model can provide a more practical tool for simulating the apparent conductivity. The convergence of the iterative method is analyzed with respect to the contrast in the conductivity and the relative thickness of the layer. In numerical experiments, solutions of our method are compared to reference solutions obtained from COMSOL, where very fine meshes are used to resolve the conductivity discontinuity in the model. Errors of the voltage in the L2 norm follow O(h) asymptotically, and the current density matches quite well that of the reference solution for a sufficiently small mesh size h.
The experimental results present a promising feature of our approach for simulating the apparent conductivity related to changes in microscopic cellular structures. PMID:23304238
Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.
Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan
2018-04-01
The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectively has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from the unlabeled data. Our algorithm degenerates into a special case of the previous algorithm when the extended candidate set is reduced to a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we develop a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility.
However, our algorithm and other algorithms have similar levels of performance in the remaining aspects.
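The distance-based indicator behind the extended candidate set can be sketched in a few lines. This is a simplification of the paper's actual indicator: here unlabeled points are simply ranked by their distance to the decision hyperplane w·x + b = 0, and the k closest (the points whose tentative labels are most likely to be reversed) form the candidate set.

```python
import numpy as np

def extended_candidate_set(w, b, X_unlabeled, k):
    """Return indices of the k unlabeled points closest to the
    hyperplane w.x + b = 0, as a simplified extended candidate set.
    Points near the boundary are the ones whose tentative labels
    are most plausibly wrong and worth reconsidering."""
    dist = np.abs(X_unlabeled @ w + b) / np.linalg.norm(w)
    return np.argsort(dist)[:k]
```

When k = 1 this reduces to reconsidering a single point per iteration, mirroring the special case mentioned in the abstract.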
Two-dimensional over-all neutronics analysis of the ITER device
NASA Astrophysics Data System (ADS)
Zimin, S.; Takatsu, Hideyuki; Mori, Seiji; Seki, Yasushi; Satoh, Satoshi; Tada, Eisuke; Maki, Koichi
1993-07-01
The present work attempts to carry out a comprehensive neutronics analysis of the International Thermonuclear Experimental Reactor (ITER) developed during the Conceptual Design Activities (CDA). Two-dimensional cylindrical over-all calculational models of the ITER CDA device, including the first wall, blanket, shield, vacuum vessel, magnets, cryostat and support structures, were developed for this purpose with the help of the DOGII code. The two-dimensional DOT 3.5 code with the FUSION-40 nuclear data library was employed for transport calculations of neutron and gamma-ray fluxes, tritium breeding ratio (TBR), and nuclear heating in reactor components. The induced-activity calculation code CINAC was employed for calculations of the exposure dose rate around the ITER CDA device after reactor shutdown. The two-dimensional over-all calculational model includes design specifics such as the pebble-bed Li2O/Be layered blanket, the thin double-wall vacuum vessel, the concrete cryostat integrated with the over-all ITER design, the top maintenance shield plug, and the additional ring biological shield placed under the top cryostat lid around the above-mentioned top maintenance shield plug. Some alternative design options, such as a water-rich shielding blanket instead of the lithium-bearing one and an additional biological shield plug at the top zone between the poloidal field (PF) coil No. 5 and the maintenance shield plug, were calculated as well. Much effort was focused on the analysis of the obtained results, with the aim of deriving recommendations for improving the ITER CDA design.
Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S
2015-01-01
The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized 2.5 mm multiplanar reformations with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ using a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p < 0.01). The median depiction score for MBIR was 3, whereas the median value for ASiR was 2 (p < 0.01). SNR and CNR were significantly higher with MBIR than with ASiR (p < 0.01). MBIR showed significant improvement of IQ parameters compared to ASiR. As CCT is a frequently required examination, the use of MBIR may allow for a substantial reduction of the radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR-reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.
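The SNR and CNR figures above are computed from ROI attenuation values; the abstract does not give the exact formulas, so the sketch below uses the common definitions (SNR as ROI mean over ROI standard deviation, CNR as the absolute mean difference between two tissues over the image noise), which should be read as an assumption.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean attenuation
    divided by its standard deviation (a common definition; the
    paper's exact formula is not stated in the abstract)."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b, noise_sd):
    """Contrast-to-noise ratio between two tissues (e.g. caudate
    nucleus vs. frontal white matter) relative to image noise."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / noise_sd
```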
Emerging Techniques for Dose Optimization in Abdominal CT
Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit
2014-01-01
Recent advances in computed tomographic (CT) scanning technique such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277
Khakinejad, Mahdiar; Ghassabi Kondalaji, Samaneh; Tafreshian, Amirmahdi; Valentine, Stephen J
2017-05-01
Gas-phase hydrogen/deuterium exchange (HDX) using D2O reagent and collision cross-section (CCS) measurements are utilized to monitor the ion conformers of the model peptide acetyl-PAAAAKAAAAKAAAAKAAAAK. The measurements are carried out on a home-built ion mobility instrument coupled to a linear ion trap mass spectrometer with electron transfer dissociation (ETD) capabilities. ETD is utilized to obtain per-residue deuterium uptake data for select ion conformers, and a new algorithm is presented for interpreting the HDX data. Using molecular dynamics (MD) production data and a hydrogen accessibility scoring (HAS)-number of effective collisions (NEC) model, hypothetical HDX behavior is attributed to various in-silico candidate (CCS match) structures. The HAS-NEC model is applied to all candidate structures, and non-negative linear regression is employed to determine the structure contributions resulting in the best match to deuterium uptake. The accuracy of the HAS-NEC model is tested by comparing predicted and experimental isotopic envelopes for several of the observed c-ions. It is proposed that gas-phase HDX can be utilized effectively as a second criterion (after CCS matching) for filtering suitable MD candidate structures. In the second step of structure elucidation, 13 nominal structures were selected from a pool of 300 candidate structures, each with a proposed population contribution for these ions.
An outer approximation method for the road network design problem.
Asadi Bagloee, Saeed; Sarvi, Majid
2018-01-01
Best investment in the road infrastructure or the network design is perceived as a fundamental and benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as a bilevel Discrete Network Design Problem (DNDP) of NP-hard computational complexity. We tackle this complexity with a hybrid exact-heuristic methodology based on a two-stage relaxation as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint. This results in a mixed-integer nonlinear programming (MINLP) problem, which is then solved using the Outer Approximation (OA) algorithm; (ii) we further relax the multi-commodity UE-TAP to a single-commodity MILP problem, that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method is proven to be highly efficient in solving the DNDP for the large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (as a termination criterion), global optimum solutions are quickly reached in most of the cases; otherwise, good solutions (close to the global optimum) are found in early iterations. Comparative analysis of the networks of Gao and Sioux-Falls shows that for such a non-exact method the global optimum solutions are found in fewer iterations than in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP as well. PMID:29590111
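The combinatorial core of the DNDP, choosing a budget-feasible subset of candidate projects, can be sketched as below. This is not the paper's OA method: the real problem scores each subset by solving a user-equilibrium traffic assignment (which is what makes it bilevel and hard), whereas this sketch uses a hypothetical additive benefit per project and exhaustive enumeration, feasible only for tiny instances.

```python
from itertools import combinations

def best_projects(costs, benefits, budget):
    """Exhaustively pick the benefit-maximising subset of candidate
    road projects within budget. The additive 'benefits' dict is a
    placeholder for the network-performance evaluation (UE-TAP) that
    the actual DNDP requires for every subset."""
    projects = list(costs)
    best, best_val = (), 0.0
    for r in range(len(projects) + 1):
        for subset in combinations(projects, r):
            if sum(costs[p] for p in subset) <= budget:
                val = sum(benefits[p] for p in subset)
                if val > best_val:
                    best, best_val = subset, val
    return set(best), best_val
```

The exponential subset enumeration here is exactly what the OA relaxation avoids on networks the size of Winnipeg.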
GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.
Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua
2018-06-19
Multiple-marker analysis of genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra-high dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to a large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method, named iterative nonlocal prior based selection for GWAS (GWASinlps), that combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on some association learning and the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of a 'structured screen-and-select' strategy that considers hierarchical screening, based not only on response-predictor associations but also on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide an empirical power analysis useful for study design. Finally, a real GWAS data application was considered with human height as the phenotype. An R package implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.
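The generic screen-and-select skeleton that GWASinlps builds on can be sketched as follows. This is only the skeleton: at each step the remaining SNPs are ranked by their association with the current residual, the best one is added, and the residual is refit. GWASinlps replaces the naive "take the top SNP" selection with a nonlocal-prior Bayesian model search and a hierarchical (structured) screen, which this sketch omits.

```python
import numpy as np

def iterative_screen_select(X, y, n_select):
    """Bare-bones iterative screen-and-select: rank remaining
    predictors by |correlation with the current residual|, add the
    best one, refit by least squares, update the residual, repeat."""
    selected, resid = [], y.copy()
    for _ in range(n_select):
        scores = np.abs(X.T @ resid)
        scores[selected] = -np.inf          # never reselect a SNP
        selected.append(int(np.argmax(scores)))
        beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        resid = y - X[:, selected] @ beta
    return selected
```

Refitting against the residual (rather than the raw phenotype) is what lets later iterations find signals masked by already-selected SNPs.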
Chen, Lei; Chu, Chen; Lu, Jing; Kong, Xiangyin; Huang, Tao; Cai, Yu-Dong
2015-09-01
Cancer is one of the leading causes of human death. Based on current knowledge, one of the causes of cancer is exposure to toxic chemical compounds, including radioactive compounds, dioxin, and arsenic. The identification of new carcinogenic chemicals may warn us of potential danger and help to identify new ways to prevent cancer. In this study, a computational method was proposed to identify potential carcinogenic chemicals, as well as non-carcinogenic chemicals. According to the current validated carcinogenic and non-carcinogenic chemicals from the CPDB (Carcinogenic Potency Database), the candidate chemicals were searched in a weighted chemical network constructed according to chemical-chemical interactions. Then, the obtained candidate chemicals were further selected by a randomization test and information on chemical interactions and structures. The analyses identified several candidate carcinogenic chemicals, while those candidates identified as non-carcinogenic were supported by a literature search. In addition, several candidate carcinogenic/non-carcinogenic chemicals exhibit structural dissimilarity with validated carcinogenic/non-carcinogenic chemicals.
2014-01-01
Background Entry into specialty training was determined by a National Assessment Centre (NAC) approach using a combination of a behavioural Multiple Mini-Interview (MMI) and a written Situational Judgement Test (SJT). We wanted to know whether interviewers could make reliable and valid decisions about the non-cognitive characteristics of candidates using the MMI, with the purpose of selecting them into general practice specialty training. Second, we explored the concurrent validity of the MMI with the SJT. Methods A variance components analysis estimated the reliability and sources of measurement error. Further modelling estimated the optimal configurations for future MMI iterations. We calculated the relationship of the MMI with the SJT. Results Data were available from 1382 candidates, 254 interviewers, six MMI questions, five alternate forms of a 50-item SJT, and 11 assessment centres. For a single MMI question and one assessor, 28% of the variance between scores was due to candidate-to-candidate variation. Interviewer subjectivity, in particular the varying views that interviewers had of particular candidates, accounted for 40% of the variance in scores. The generalisability coefficient for a six-question MMI was 0.7; to achieve 0.8 would require ten questions. A disattenuated correlation with the SJT (r = 0.35), and in particular a raw score correlation with the subdomain related to clinical knowledge (r = 0.25), demonstrated evidence for construct and concurrent validity. Less than two per cent of candidates would have failed the MMI. Conclusion The MMI is a moderately reliable method of assessment in the context of a National Assessment Centre approach. The largest source of error relates to aspects of interviewer subjectivity, suggesting that enhanced interviewer training would be beneficial. MMIs need to be sufficiently long for precise comparison for ranking purposes.
To justify long-term sustainable use of the MMI in a postgraduate assessment centre approach, more theoretical work is required to understand how written and performance-based tests of non-cognitive attributes can be combined in a way that achieves acceptable generalisability and validity. PMID:25123968
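The reported numbers are internally consistent under the standard Spearman-Brown-style generalisability formula G = Vc / (Vc + Ve/n): with 28% candidate variance and the remaining 72% treated as error that averages over n questions (an assumption, but one that reproduces the abstract's figures), six questions give G = 0.7 and ten give G ≈ 0.8.

```python
def g_coefficient(var_candidate, var_error, n_questions):
    """Generalisability coefficient when error variance averages over
    n questions (Spearman-Brown form): G = Vc / (Vc + Ve / n)."""
    return var_candidate / (var_candidate + var_error / n_questions)
```

For example, g_coefficient(0.28, 0.72, 6) gives 0.70 and g_coefficient(0.28, 0.72, 10) gives about 0.80, matching the abstract's claim that ten questions would be needed to reach 0.8.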
Design of a -1 MV dc UHV power supply for ITER NBI
NASA Astrophysics Data System (ADS)
Watanabe, K.; Yamamoto, M.; Takemoto, J.; Yamashita, Y.; Dairaku, M.; Kashiwagi, M.; Taniguchi, M.; Tobari, H.; Umeda, N.; Sakamoto, K.; Inoue, T.
2009-05-01
Procurement of a dc -1 MV power supply system for the ITER neutral beam injector (NBI) is shared by Japan and the EU. The Japan Atomic Energy Agency, as the Japan Domestic Agency (JADA) for ITER, contributes to the procurement of dc -1 MV ultra-high voltage (UHV) components such as a dc -1 MV generator, a transmission line and a -1 MV insulating transformer for the ITER NBI power supply. An inverter frequency of 150 Hz in the -1 MV power supply and the major circuit parameters have been proposed and adopted in the ITER NBI. The dc UHV insulation has been carefully designed, since long-pulse dc insulation is quite different from conventional ac insulation or short-pulse dc systems. A multi-layer insulation structure of the transformer for long pulses of up to 3600 s has been designed with electric field simulation. Based on the simulation, the overall dimensions of the dc UHV components have been finalized. A surge energy suppression system is also essential to protect the accelerator from electric breakdowns. The JADA will provide an effective surge suppression system composed of core snubbers and resistors. The input energy into the accelerator from the power supply can be reduced to about 20 J, which satisfies the design criterion of 50 J in total in the case of breakdown at -1 MV.
ERIC Educational Resources Information Center
Ge, Xun; Law, Victor; Huang, Kun
2016-01-01
One of the goals for problem-based learning (PBL) is to promote self-regulation. Although self-regulation has been studied extensively, its interrelationships with ill-structured problem solving have been unclear. In order to clarify the interrelationships, this article proposes a conceptual framework illustrating the iterative processes among…
Zhao, Qilin; Chen, Li; Shao, Guojian
2014-01-01
The axial compressive strength of unidirectional FRP made by pultrusion is generally much lower than its axial tensile strength. This fact diminishes the advantages of FRP as a main load-bearing member in engineering structures. A theoretical iterative calculation approach was previously suggested to predict the ultimate axial compressive stress of the combined structure and to analyze the influences of geometrical parameters on it. In this paper, the experimental and theoretical research on the CFRP-sheet-confined GFRP short pole is extended to the CFRP-sheet-confined GFRP short pipe, namely a hollow-section pole. Experiments show that the bearing capacity of the GFRP short pipe can also be increased considerably by confinement with CFRP sheet. The theoretical iterative calculation approach of the previous paper is amended to predict the ultimate axial compressive stress of the CFRP-sheet-confined GFRP short pipe, and its results agree with the experiment. Lastly, the influences of geometrical parameters on the new combined structure are analyzed. PMID:24672288
Hu, Meng; Müller, Erik; Schymanski, Emma L; Ruttkies, Christoph; Schulze, Tobias; Brack, Werner; Krauss, Martin
2018-03-01
In nontarget screening, structure elucidation of small molecules from high-resolution mass spectrometry (HRMS) data is challenging, particularly the selection of the most likely candidate structure among the many retrieved from compound databases. Several fragmentation and retention prediction methods have been developed to improve this candidate selection. In order to evaluate their performance, we compared two in silico fragmenters (MetFrag and CFM-ID) and two retention time prediction models (based on the chromatographic hydrophobicity index (CHI) and on log D). A set of 78 known organic micropollutants was analyzed by liquid chromatography coupled to an LTQ Orbitrap HRMS with electrospray ionization (ESI) in positive and negative mode, using two fragmentation techniques with different collision energies. Both fragmenters (MetFrag and CFM-ID) performed well for most compounds, on average ranking the correct candidate structure within the top 25% and the top 22-37% of candidates for ESI+ and ESI- mode, respectively. The rank of the correct candidate structure slightly improved when MetFrag and CFM-ID were combined. For unknown compounds detected in both ESI+ and ESI-, positive mode mass spectra were generally better for further structure elucidation. Both retention prediction models performed reasonably well for more hydrophobic compounds but not for early-eluting hydrophilic substances. The log D prediction showed a better accuracy than the CHI model. Although the two fragmentation prediction methods are more diagnostic and sensitive for candidate selection, the inclusion of retention prediction, by calculating a consensus score with optimized weighting, can improve the ranking of correct candidates compared to the individual methods. Graphical abstract: Consensus workflow for combining fragmentation and retention prediction in LC-HRMS-based micropollutant identification.
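A weighted consensus score of the kind described above can be sketched as follows. The normalization (min-max across candidates) and the example weights are assumptions for illustration; the paper optimizes the weighting, and its exact scoring formula is not given in the abstract.

```python
import numpy as np

def consensus_rank(score_lists, weights):
    """Combine candidate scores from several methods (e.g. an in
    silico fragmenter and a retention-time model) into one consensus
    score: min-max normalise each method's scores across candidates,
    then take a weighted sum. Returns candidate indices, best first."""
    consensus = np.zeros(len(score_lists[0]))
    for scores, w in zip(score_lists, weights):
        s = np.asarray(scores, dtype=float)
        span = s.max() - s.min()
        norm = (s - s.min()) / span if span > 0 else np.zeros_like(s)
        consensus += w * norm
    return np.argsort(-consensus)
```

Because each method's scores are rescaled before weighting, a method with a large raw-score range cannot dominate the consensus by scale alone.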
Zheng, Wenjun; Brooks, Bernard R
2006-06-15
Recently we developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial-state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and the amplitude of protein conformational changes. The new protocol implements a multistep search in conformational space that is driven by iteratively minimizing the error in fitting the given distance constraints while simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reoriented to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial-state structures. The method achieves near-optimal performance in almost all cases, and in many cases the final structural models lie within a root mean square deviation of 1-2 angstroms from the native end-state structures.
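The incremental step, a displacement built as a damped least-squares combination of a few mode vectors, can be sketched as below. This strips away the method's specifics (distance constraints, elastic-energy restraint, mode re-derivation each step) and keeps only the iterative mode-combination core, fitting a target displacement directly.

```python
import numpy as np

def iterative_mode_fit(modes, target, n_steps=100, damping=0.5):
    """Iteratively accumulate a displacement from damped least-squares
    combinations of mode vectors (columns of `modes`). Converges to
    the projection of `target` onto the span of the modes, i.e. the
    best displacement the low-frequency modes can express."""
    disp = np.zeros_like(target)
    for _ in range(n_steps):
        coef, *_ = np.linalg.lstsq(modes, target - disp, rcond=None)
        disp = disp + damping * modes @ coef
    return disp
```

The damping factor mirrors the small incremental steps of the protocol: each iteration closes only part of the remaining gap, which is what allows the real method to re-derive modes between steps.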
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC) that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
NASA Astrophysics Data System (ADS)
Shanmugavadivu, P.; Eliahim Jeevaraj, P. S.
2014-06-01
The Adaptive Iterated Function Systems (AIFS) filter presented in this paper has outstanding potential to attenuate fixed-value impulse noise in images. This filter has two distinct phases, namely noise detection and noise correction, which use measures of statistics and Iterated Function Systems (IFS), respectively. The performance of the AIFS filter is assessed by three metrics: Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity Index (MSSIM) and Human Visual Perception (HVP). The quantitative measures PSNR and MSSIM endorse the merit of this filter in terms of the degree of noise suppression and details/edge preservation, respectively, in comparison with the high-performing filters reported in the recent literature. The qualitative measure HVP confirms the noise suppression ability of the devised filter. This computationally simple noise filter broadly finds application wherever images are highly degraded by fixed-value impulse noise.
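The detect-then-correct structure of such a filter can be sketched as below. The detection step (flagging pixels at the fixed impulse values) follows the abstract; the correction step here substitutes a median of uncorrupted neighbours as a simple stand-in for the paper's IFS-based reconstruction, so it illustrates the two-phase structure rather than the AIFS correction itself.

```python
import numpy as np

def remove_fixed_value_impulses(img, low=0, high=255):
    """Two-phase impulse filter sketch: (1) detect fixed-value impulse
    pixels (exactly `low` or `high`); (2) replace each with the median
    of its non-noisy 3x3 neighbours. The AIFS filter replaces this
    median step with an Iterated Function System interpolation."""
    noisy = (img == low) | (img == high)
    out = img.astype(float).copy()
    h, w = img.shape
    for i, j in zip(*np.nonzero(noisy)):
        i0, i1 = max(i - 1, 0), min(i + 2, h)
        j0, j1 = max(j - 1, 0), min(j + 2, w)
        patch = img[i0:i1, j0:j1]
        good = patch[(patch != low) & (patch != high)]
        if good.size:
            out[i, j] = np.median(good)
    return out
```

Restricting correction to detected pixels is what lets this family of filters preserve uncorrupted detail, which is what the MSSIM results above measure.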
Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.
Junker, André; Brenner, Karl-Heinz
2018-03-01
The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, allows solving large-size problems approximation-free as well. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N³) to O(N log N), enabling the simulation of structures such as certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.
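The cost reduction comes from replacing an O(N³) eigendecomposition with an iteration whose per-step cost is an FFT-based operator application, O(N log N). The sketch below illustrates that trade on a stand-in problem: a damped Richardson iteration solving a circulant system with the operator applied via FFT. It is not FRIM's electromagnetic operator, only the same numerical pattern; the truncated iteration is also where runtime trades against accuracy.

```python
import numpy as np

def solve_circulant_iteratively(kernel, b, omega, n_iter=200):
    """Solve A x = b for a circulant A (applied via FFT in O(N log N)
    per step) with a damped Richardson iteration, avoiding any
    eigen-decomposition of A."""
    kf = np.fft.fft(kernel)              # diagonalises A implicitly

    def apply_A(x):                      # O(N log N) operator apply
        return np.real(np.fft.ifft(kf * np.fft.fft(x)))

    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + omega * (b - apply_A(x))  # Richardson update
    return x
```

Stopping the loop early yields a cheaper, less accurate solution, which is the runtime-versus-accuracy knob the abstract mentions.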
Quantum-Inspired Multidirectional Associative Memory With a Self-Convergent Iterative Learning.
Masuyama, Naoki; Loo, Chu Kiong; Seera, Manjeevan; Kubota, Naoyuki
2018-04-01
Quantum-inspired computing is an emerging research area, which has significantly improved the capabilities of conventional algorithms. In general, quantum-inspired Hopfield associative memory (QHAM) has demonstrated quantum information processing in neural structures. This has resulted in an exponential increase in storage capacity while explaining the extensive memory, and it has the potential to illustrate the dynamics of neurons in the human brain when viewed from a quantum mechanics perspective, although the application of QHAM is limited to autoassociation. In this paper we introduce a quantum-inspired multidirectional associative memory (QMAM) with a one-shot learning model, and QMAM with a self-convergent iterative learning model (IQMAM), based on QHAM. The self-convergent iterative learning enables the network to progressively develop a resonance state, from inputs to outputs. Simulation experiments demonstrate the advantages of QMAM and IQMAM, especially the stability to recall reliability.
NASA Astrophysics Data System (ADS)
Whiteley, J. P.
2017-10-01
Large, incompressible elastic deformations are governed by a system of nonlinear partial differential equations. The finite element discretisation of these partial differential equations yields a system of nonlinear algebraic equations that are usually solved using Newton's method. On each iteration of Newton's method, a linear system must be solved. We exploit the structure of the Jacobian matrix to propose a preconditioner comprising two steps. The first step is the solution of a relatively small, symmetric, positive definite linear system using the preconditioned conjugate gradient method. This is followed by a small number of multigrid V-cycles for a larger linear system. Through the use of exemplar elastic deformations, the preconditioner is demonstrated to facilitate the iterative solution of the arising linear systems. The number of GMRES iterations required has only a very weak dependence on the number of degrees of freedom of the linear systems.
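The two-step structure described above can be sketched generically with SciPy: a GMRES solve for a saddle-point system, preconditioned by a PCG solve on the small SPD block plus a cheap solve for the remaining block. The matrices, sizes, and the second-block solve (standing in for the paper's multigrid V-cycles) are illustrative assumptions, not the paper's actual finite element systems:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative saddle-point system [[A, B^T], [B, -C]] of the kind that
# arises from mixed finite elements for incompressible elasticity.
n, m = 200, 50
rng = np.random.default_rng(1)
P = sp.random(n, n, density=0.02, random_state=1) * 0.05
A = (sp.diags(3.0 + rng.random(n)) + P + P.T).tocsr()   # SPD displacement block
B = 0.01 * sp.random(m, n, density=0.05, random_state=2)
C = 0.1 * sp.identity(m)
K = sp.bmat([[A, B.T], [B, -C]]).tocsc()
b = rng.standard_normal(n + m)

def precond(r):
    """Two-step preconditioner: (1) a PCG solve with the small SPD block A,
    (2) a cheap exact solve for the second block, standing in for the
    paper's multigrid V-cycles (not reproduced here)."""
    z = np.zeros_like(r)
    z[:n], _ = spla.cg(A, r[:n])
    z[n:] = -r[n:] / 0.1        # exact inverse of -C in this toy setup
    return z

M = spla.LinearOperator((n + m, n + m), matvec=precond)
x, info = spla.gmres(K, b, M=M)
res = np.linalg.norm(K @ x - b) / np.linalg.norm(b)
```

With the block preconditioner the preconditioned operator is close to the identity, so GMRES converges in few iterations regardless of problem size, mirroring the weak dependence reported in the abstract.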
Field tests of a participatory ergonomics toolkit for Total Worker Health.
Nobrega, Suzanne; Kernan, Laura; Plaku-Alakbarova, Bora; Robertson, Michelle; Warren, Nicholas; Henning, Robert
2017-04-01
Growing interest in Total Worker Health® (TWH) programs to advance worker safety, health and well-being motivated the development of a toolkit to guide their implementation. The program toolkit was designed iteratively, with participatory ergonomics (PE) serving as the primary basis for planning integrated TWH interventions in four diverse organizations. The toolkit provided start-up guides for committee formation and training, and a structured PE process for generating integrated TWH interventions. Process data from program facilitators and participants throughout program implementation were used for iterative toolkit design. Program success depended on organizational commitment to regular design team meetings with a trained facilitator, the availability of subject matter experts on ergonomics and health to support the design process, and retraining whenever committee turnover occurred. A two-committee structure (employee Design Team, management Steering Committee) provided advantages over a single, multilevel committee structure, and enhanced the planning, communication, and teamwork skills of participants.
HITEMP Material and Structural Optimization Technology Transfer
NASA Technical Reports Server (NTRS)
Collier, Craig S.; Arnold, Steve (Technical Monitor)
2001-01-01
The feasibility of adding viscoelasticity and the Generalized Method of Cells (GMC) for micromechanical viscoelastic behavior into the commercial HyperSizer structural analysis and optimization code was investigated. The viscoelasticity methodology was developed in four steps. First, a simplified algorithm was devised to test the iterative time stepping method for simple one-dimensional multiple ply structures. Second, the GMC code was made into a callable subroutine and incorporated into the one-dimensional code to test the accuracy and usability of the code. Third, the viscoelastic time-stepping and iterative scheme was incorporated into HyperSizer for homogeneous, isotropic viscoelastic materials. Finally, the GMC was included in a version of HyperSizer. MS Windows executable files implementing each of these steps are delivered with this report, along with source code. The findings of this research are that both viscoelasticity and GMC are feasible and valuable additions to HyperSizer and that the door is open for more advanced nonlinear capability, such as viscoplasticity.
NASA Astrophysics Data System (ADS)
Masuzaki, S.; Tokitani, M.; Otsuka, T.; Oya, Y.; Hatano, Y.; Miyamoto, M.; Sakamoto, R.; Ashikawa, N.; Sakurada, S.; Uemura, Y.; Azuma, K.; Yumizuru, K.; Oyaizu, M.; Suzuki, T.; Kurotaki, H.; Hamaguchi, D.; Isobe, K.; Asakura, N.; Widdowson, A.; Heinola, K.; Jachmich, S.; Rubel, M.; JET contributors
2017-12-01
Results of the comprehensive surface analyses of divertor tiles and dust retrieved from JET after the first ITER-like wall campaign (2011-2012) are presented. Samples cored from the divertor tiles were analyzed. Numerous nano-size bubble-like structures were observed in the deposition layer on the apron of the inner divertor tile, and beryllium dust with the same structures was found in the matter collected from the inner divertor after the campaign. This suggests that the nano-size bubble-like structures can make the deposition layer brittle and may lead to cracking followed by dust generation. X-ray photoelectron spectroscopy analyses of the chemical states of species in the deposition layers identified the formation of beryllium-tungsten intermetallic compounds on an inner vertical tile. Different tritium retention profiles along the divertor tiles were observed at the top surfaces and at deeper regions of the tiles using the imaging plate technique.
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on a so-called geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model that ignores the dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model, step by step, until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose the proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique largely reduces the unexpected increase of the objective function value that is inevitably caused by constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is given and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-neck, prostate, lung and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes.
To some extent, it is a more efficient technique for choosing constraints than the dose sorting method. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique yields an improved algorithm for solving the fluence map optimization problem with dose-volume constraints.
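The iterative constraint-adding loop can be sketched as follows. This is a toy stand-in: the dose-influence matrix is invented, SLSQP replaces the interior point solver, and the traditional dose-sorting rule (worst overdose first) replaces the paper's geometric distance ranking:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_vox, n_beam = 40, 8
D = rng.random((n_vox, n_beam))              # toy dose-influence matrix
target = np.ones(n_vox)                      # prescribed dose (arbitrary units)
d_max, frac_allowed, tol = 1.2, 0.25, 1e-4   # DVH rule: <=25% of voxels above d_max

def solve(active):
    """Least-squares fluence objective with hard dose caps on the active
    voxel set; SLSQP stands in for the paper's interior point solver."""
    cons = [{"type": "ineq", "fun": lambda x, i=i: d_max - D[i] @ x} for i in active]
    res = minimize(lambda x: np.sum((D @ x - target) ** 2), np.ones(n_beam),
                   bounds=[(0.0, None)] * n_beam, constraints=cons, method="SLSQP")
    return res.x

active, x = set(), solve(set())
for _ in range(n_vox):                       # add constraints until the DVH rule holds
    dose = D @ x
    violating = np.flatnonzero(dose > d_max + tol)
    if len(violating) <= frac_allowed * n_vox:
        break
    active.add(int(violating[np.argmax(dose[violating])]))
    x = solve(active)

n_over = int(np.sum(D @ x > d_max + tol))
```

Each round adds one hard constraint and re-solves, exactly the "step by step until satisfied" framework; the geometric distance criterion would only change which voxel is added first.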
Impurity seeding for tokamak power exhaust: from present devices via ITER to DEMO
NASA Astrophysics Data System (ADS)
Kallenbach, A.; Bernert, M.; Dux, R.; Casali, L.; Eich, T.; Giannone, L.; Herrmann, A.; McDermott, R.; Mlynek, A.; Müller, H. W.; Reimold, F.; Schweinzer, J.; Sertoli, M.; Tardini, G.; Treutterer, W.; Viezzer, E.; Wenninger, R.; Wischmeier, M.; the ASDEX Upgrade Team
2013-12-01
A future fusion reactor is expected to have all-metal plasma facing materials (PFMs) to ensure low erosion rates, low tritium retention and stability against high neutron fluences. As a consequence, intrinsic radiation losses in the plasma edge and divertor are low in comparison to devices with carbon PFMs. To avoid localized overheating in the divertor, low-Z and medium-Z seed impurities have to be introduced into the plasma to convert a major part of the power flux into radiation and to facilitate partial divertor detachment. For burning plasma conditions in ITER, which operates not far above the L-H threshold power, a high divertor radiation level will be mandatory to avoid thermal overload of divertor components. Moreover, in a prototype reactor, DEMO, a high main plasma radiation level will additionally be required for dissipation of the much higher alpha heating power. For divertor plasma conditions in present day tokamaks and in ITER, nitrogen appears most suitable regarding its radiative characteristics. If elevated main chamber radiation is desired as well, argon is the best candidate for the simultaneous enhancement of core and divertor radiation, provided sufficient divertor compression can be obtained. The parameter Psep/R, the power flux through the separatrix normalized by the major radius, is suggested as a suitable scaling (for a given electron density) for the extrapolation of present day divertor conditions to larger devices. The scaling for main chamber radiation from small to large devices has a higher, more favourable dependence of about Prad,main/R2. Krypton provides the smallest fuel dilution for DEMO conditions, but has a more centrally peaked radiation profile compared to argon. For investigation of the different effects of main chamber and divertor radiation and for optimization of their distribution, a double radiative feedback system has been implemented in ASDEX Upgrade (AUG).
About half the ITER/DEMO values of Psep/R have been achieved so far, and close to DEMO values of Prad,main/R2, albeit at lower Psep/R. Further increase of this parameter may be achieved by increasing the neutral pressure or improving the divertor geometry.
NASA Astrophysics Data System (ADS)
Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán
2016-09-01
Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.
Determination of structure parameters in strong-field tunneling ionization theory of molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Songfeng; Jin, Cheng (College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou, Gansu 730070)
2010-03-15
In the strong-field molecular tunneling ionization theory of Tong et al. [Phys. Rev. A 66, 033402 (2002)], the ionization rate depends on the asymptotic wave function of the molecular orbital from which the electron is removed. The orbital wave functions obtained from standard quantum chemistry packages are in general not accurate enough in the asymptotic region. Here we construct a one-electron model potential for several linear molecules using density functional theory. We show that the asymptotic wave function can be improved with an iteration method, and that after one iteration accurate asymptotic wave functions and structure parameters are determined. With the new parameters we examine the alignment-dependent tunneling ionization probabilities for several molecules and compare with other calculations and with recent measurements, including ionization from inner molecular orbitals.
NASA Astrophysics Data System (ADS)
Klein, Andreas; Gerlach, Gerald
1998-09-01
This paper deals with the simulation of fluid-structure interaction phenomena in micropumps. The proposed solution approach is based on the external coupling of two different solvers, which are treated here as 'black boxes': no intervention into the program code is necessary, and the solvers can be exchanged arbitrarily. For the realization of the external iteration loop, two algorithms are considered: the relaxation-based Gauss-Seidel method and the computationally more expensive Newton method. It is demonstrated, using a simplified test case, that for rather weak coupling the Gauss-Seidel method is sufficient. However, by simply changing the considered fluid from air to water, the two physical domains become strongly coupled, and the Gauss-Seidel method fails to converge. The Newton iteration scheme must be used instead.
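The difference between the two coupling algorithms can be reproduced on a scalar toy model: a fixed-point (Gauss-Seidel) sweep between two black-box "solvers" converges when the composed map is a contraction and diverges otherwise, while Newton on the coupling residual still converges. The model functions and the coupling parameter alpha below are invented for illustration, not the paper's micropump model:

```python
import numpy as np

# Toy black-box solvers: pressure from deflection, deflection from pressure.
# alpha mimics coupling strength (air: small, water: large).
def fluid(u, alpha):      return alpha * (1.0 - u)
def structure(p):         return 0.9 * p
def coupled_residual(u, alpha):
    return structure(fluid(u, alpha)) - u

def gauss_seidel(alpha, tol=1e-12, max_iter=200):
    """Fixed-point sweep u <- S(F(u)); converges only if a contraction."""
    u = 0.0
    for k in range(1, max_iter + 1):
        u_new = structure(fluid(u, alpha))
        if abs(u_new - u) < tol:
            return u_new, k
        u = u_new
    return u, max_iter

def newton(alpha, tol=1e-12, max_iter=50, h=1e-7):
    """Newton on the coupling residual with a finite-difference derivative,
    needed when the fixed-point map is not a contraction."""
    u = 0.0
    for k in range(1, max_iter + 1):
        r = coupled_residual(u, alpha)
        if abs(r) < tol:
            return u, k
        dr = (coupled_residual(u + h, alpha) - r) / h
        u -= r / dr
    return u, max_iter

u_weak, _ = gauss_seidel(0.3)      # weak coupling: converges
_, it_strong = gauss_seidel(2.0)   # strong coupling: hits max_iter, diverges
u_newton, _ = newton(2.0)          # Newton still converges
```

The contraction factor of the sweep is 0.9*alpha, so alpha = 0.3 converges while alpha = 2.0 oscillates divergently, mirroring the air-to-water switch in the paper.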
NASA's Platform for Cross-Disciplinary Microchannel Research
NASA Technical Reports Server (NTRS)
Son, Sang Young; Spearing, Scott; Allen, Jeffrey; Monaco, Lisa A.
2003-01-01
A team from the Structural Biology group located at the NASA Marshall Space Flight Center in Huntsville, Alabama is developing a platform suitable for cross-disciplinary microchannel research. The original objective of this engineering development effort was to deliver a multi-user flight-certified facility for iterative investigations of protein crystal growth; that is, Iterative Biological Crystallization (IBC). However, the unique capabilities of this facility are not limited to the low-gravity structural biology research community. Microchannel-based research in a number of other areas may be greatly accelerated through use of this facility. In particular, the potential for gas-liquid flow investigations and cellular biological research utilizing the exceptional pressure control and simplified coupling to macroscale diagnostics inherent in the IBC facility will be discussed. In conclusion, the opportunities for research-specific modifications to the microchannel configuration, control, and diagnostics will be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michling, R.; Braun, A.; Cristescu, I.
2015-03-15
Highly tritiated water (HTW) may be generated at ITER by various processes and, due to its excessive radiotoxicity, self-radiolysis and exceedingly corrosive nature, a potential hazard is associated with its storage and processing. Therefore, the capture and exchange method for HTW utilizing Molecular Sieve Beds (MSB) was investigated with regard to adsorption capacity, isotopic exchange performance and process parameters. For the MSB, different types of zeolite were selected; all zeolite materials were additionally coated with platinum. The work comprised the selection of the most efficient zeolite candidate based on detailed parametric studies during H2/D2O laboratory-scale exchange experiments (about 25 g zeolite per bed) at the Tritium Laboratory Karlsruhe (TLK). For zeolite characterization, analytical techniques such as infrared spectroscopy, thermogravimetry and online mass spectrometry were employed. For further investigation of the selected zeolite catalyst under full technical-scale operation, an MSB (about 22 kg zeolite) was processed with hydrogen flow rates up to 60 mol·h^-1 and deuterated water loads up to 1.6 kg, in view of later ITER processing of arising HTW.
NASA Astrophysics Data System (ADS)
Jeong, Junho; Kim, Seungkeun; Suk, Jinyoung
2017-12-01
In order to overcome the limited range of GPS-based techniques, vision-based relative navigation methods have recently emerged as alternative approaches for high Earth orbit (HEO) or deep-space missions, and various vision-based relative navigation systems are used for proximity operations between two spacecraft. In the implementation of these systems, a sensor placement problem can arise on the exterior of the spacecraft due to its limited space. To deal with sensor placement, this paper proposes a novel methodology for vision-based relative navigation based on multiple position sensitive diode (PSD) sensors and multiple infrared beacon modules. The proposed method uses an iterated parametric study based on farthest point optimization (FPO) and a constrained extended Kalman filter (CEKF): the former sets the locations of the sensors, and the latter estimates relative positions and attitudes for each combination of PSDs and beacons. Scores for the sensor placement are then calculated with respect to the number of PSDs, the number of beacons, and the accuracy of the relative estimates, and the best-scoring candidate is determined. Moreover, the results of the iterated estimation show that the accuracy improves dramatically as the number of PSDs increases from one to three.
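A common reading of a farthest-point rule for placement problems is the greedy max-min selection sketched below; the paper's full iterated parametric study with CEKF scoring is not reproduced, and the candidate points are synthetic:

```python
import numpy as np

def farthest_point_selection(points, k, start=0):
    """Greedy farthest-point rule: repeatedly pick the candidate whose
    distance to the already-chosen set is largest, spreading sensors out."""
    chosen = [start]
    d = np.linalg.norm(points - points[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))               # farthest from current set
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

rng = np.random.default_rng(3)
spots = rng.random((100, 2))    # hypothetical candidate mounting points on a face
picked = farthest_point_selection(spots, 4)
spread = min(np.linalg.norm(spots[a] - spots[b])
             for a in picked for b in picked if a != b)
```

In the paper's setting, each such candidate placement would then be scored by running the estimation filter; only the geometric selection step is sketched here.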
Low-rank structure learning via nonconvex heuristic recovery.
Deng, Yue; Dai, Qionghai; Liu, Risheng; Zhang, Zengke; Hu, Sanqing
2013-03-01
In this paper, we propose a nonconvex framework to learn the essential low-rank structure from corrupted data. Different from traditional approaches, which directly utilize convex norms to measure sparseness, our method introduces more reasonable nonconvex measurements to enhance sparsity in both the intrinsic low-rank structure and the sparse corruptions. We introduce, respectively, how to combine the widely used ℓp norm (0 < p < 1) and the log-sum term into the framework of low-rank structure learning. Although the proposed optimization is no longer convex, it can still be effectively solved by a majorization-minimization (MM)-type algorithm, in which the nonconvex objective function is iteratively replaced by its convex surrogate, so that the nonconvex problem finally falls into the general framework of reweighted approaches. We prove that the MM-type algorithm converges to a stationary point after successive iterations. The proposed model is applied to solve two typical problems: robust principal component analysis and low-rank representation. Experimental results on low-rank structure learning demonstrate that our nonconvex heuristic methods, especially the log-sum heuristic recovery algorithm, generally perform much better than the convex-norm-based methods, for both data of higher rank and data with denser corruptions.
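The MM idea (the log-sum term majorized by a weighted convex norm at each outer step) can be sketched on the simpler sparse-vector analogue of the same penalty. This is not the paper's low-rank solver, just an illustration of the reweighting mechanism; problem sizes and parameters are arbitrary:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding (prox of a weighted l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def logsum_mm(A, b, lam=0.05, eps=0.1, n_outer=25, n_inner=200):
    """MM for min_x 0.5*||Ax - b||^2 + lam * sum_i log(|x_i| + eps):
    each outer step majorizes the log-sum by a weighted l1 norm with
    weights 1/(|x_k| + eps); the convex surrogate is solved by ISTA."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data term
    for _ in range(n_outer):
        w = 1.0 / (np.abs(x) + eps)        # reweighting from the MM surrogate
        for _ in range(n_inner):
            x = soft(x - A.T @ (A @ x - b) / L, lam * w / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]
x_hat = logsum_mm(A, A @ x_true)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Large entries receive small weights and so are barely penalized in later outer steps, which is why the log-sum heuristic reduces the bias of a plain convex-norm fit.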
A Multi-Scale Settlement Matching Algorithm Based on ARG
NASA Astrophysics Data System (ADS)
Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia
2016-06-01
Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on Attributed Relational Graphs (ARGs) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets by merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented; the results indicate that the proposed algorithm is capable of handling sophisticated cases.
Three-dimensional analysis of tokamaks and stellarators
Garabedian, Paul R.
2008-01-01
The NSTAB equilibrium and stability code and the TRAN Monte Carlo transport code furnish a simple but effective numerical simulation of essential features of present tokamak and stellarator experiments. When the mesh size is comparable to the island width, an accurate radial difference scheme in conservation form captures magnetic islands successfully despite a nested surface hypothesis imposed by the mathematics. Three-dimensional asymmetries in bifurcated numerical solutions of the axially symmetric tokamak problem are relevant to the observation of unstable neoclassical tearing modes and edge localized modes in experiments. Islands in compact stellarators with quasiaxial symmetry are easier to control, so these configurations will become good candidates for magnetic fusion if difficulties with safety and stability are encountered in the International Thermonuclear Experimental Reactor (ITER) project. PMID:18768807
Network immunization under limited budget using graph spectra
NASA Astrophysics Data System (ADS)
Zahedi, R.; Khansari, M.
2016-03-01
In this paper, we propose a new algorithm that minimizes the worst expected growth of an epidemic by reducing the size of the largest connected component (LCC) of the underlying contact network. The proposed algorithm is applicable to any level of available resources and, in contrast to the greedy approaches of most immunization strategies, selects nodes simultaneously. In each iteration, the proposed method partitions the LCC into the two groups that are the best candidates for communities in that component and that the available resources are sufficient to separate. Using Laplacian spectral partitioning, the proposed method performs community detection with a time complexity that rivals that of the best previous methods. Experiments show that our method outperforms targeted immunization approaches in both real and synthetic networks.
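A single round of the spectral step can be sketched as follows: split the component with the Fiedler vector, then spend the budget on the nodes carrying the most cut-crossing edges. The toy network and the one-shot simplification (the paper iterates on the shrinking LCC) are assumptions of this sketch:

```python
import numpy as np

def fiedler_sides(adj):
    """Bipartition a connected graph by the sign of the Fiedler vector
    (eigenvector of the second-smallest Laplacian eigenvalue)."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)
    return vecs[:, 1] >= 0

def pick_immunization_targets(adj, budget):
    """One-shot version of the strategy: partition the component
    spectrally, then remove the nodes with the most edges across the cut."""
    side = fiedler_sides(adj)
    crossing = adj * np.not_equal.outer(side, side)   # keep cut edges only
    return np.argsort(-crossing.sum(axis=1))[:budget]

# Toy contact network: two dense 14-node clusters linked by two hub nodes.
rng = np.random.default_rng(2)
n = 30
adj = np.zeros((n, n))
for grp in (range(0, 14), range(16, 30)):
    for i in grp:
        for j in grp:
            if i < j and rng.random() < 0.6:
                adj[i, j] = adj[j, i] = 1.0
for hub in (14, 15):                    # hubs touch both clusters
    for j in (0, 1, 2, 3, 16, 17, 18, 19):
        adj[hub, j] = adj[j, hub] = 1.0
targets = pick_immunization_targets(adj, budget=2)
```

Removing the two selected hubs disconnects the clusters, roughly halving the LCC with a budget of only two nodes.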
Developing Expertise: Using Video to Hone Teacher Candidates' Classroom Observation Skills
ERIC Educational Resources Information Center
Cuthrell, Kristen; Steadman, Sharilyn C.; Stapleton, Joy; Hodge, Elizabeth
2016-01-01
This article explores the impact of a video observation model developed for teacher candidates in an early experiences course. Video Grand Rounds (VGR) combines a structured observation protocol, videos, and directed debriefing to enhance teacher candidates' observations skills within nonstructured and field-based observations. A comparative…
Performance issues for iterative solvers in device simulation
NASA Technical Reports Server (NTRS)
Fan, Qing; Forsyth, P. A.; Mcmacken, J. R. F.; Tang, Wei-Pai
1994-01-01
Due to memory limitations, iterative methods have become the method of choice for large-scale semiconductor device simulation. However, it is well known that these methods still suffer from reliability problems. The linear systems which appear in numerical simulation of semiconductor devices are notoriously ill-conditioned. In order to produce robust algorithms for practical problems, careful attention must be given to many implementation issues. This paper concentrates on strategies for developing robust preconditioners. In addition, effective data structures and convergence-check issues are also discussed. These algorithms are compared with a standard direct sparse matrix solver on a variety of problems.
A new catalog of H i supershell candidates in the outer part of the Galaxy
NASA Astrophysics Data System (ADS)
Suad, L. A.; Caiafa, C. F.; Arnal, E. M.; Cichowolski, S.
2014-04-01
Aims: The main goal of this work is to have a new neutral hydrogen (H i) supershell candidate catalog, to analyze the spatial distribution of the candidates in the Galaxy, and to carry out a statistical study of their main properties. Methods: The catalog was compiled making use of the Leiden-Argentine-Bonn (LAB) survey. The supershell candidates were identified using a combination of two techniques: visual inspection plus an automatic searching algorithm. Our automatic algorithm is able to detect both closed and open structures. Results: A total of 566 supershell candidates were identified. Most of them (347) are located in the second Galactic quadrant, while 219 were found in the third. About 98% of a subset of 190 structures (used to derive the statistical properties of the supershell candidates) are elliptical, with a mean weighted eccentricity of 0.8 ± 0.1, and ~70% have their major axes parallel to the Galactic plane. The weighted mean effective radius of the structures is ~160 pc. Owing to the ability of our automatic algorithm to detect open structures, we have also identified some "galactic chimney" candidates. We find an asymmetry between the second and third Galactic quadrants, in the sense that in the second quadrant we detect structures as far away as 32 kpc, while in the third the farthest structure is detected at 17 kpc. The supershell surface density in the solar neighborhood is ~8 kpc-2, and it decreases with distance from the Galactic center. We have also compared our catalog with those of other authors. The full table is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A116
A DATA-DRIVEN MODEL FOR SPECTRA: FINDING DOUBLE REDSHIFTS IN THE SLOAN DIGITAL SKY SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsalmantza, P.; Hogg, David W.
2012-07-10
We present a data-driven method, heteroscedastic matrix factorization (a kind of probabilistic factor analysis), for modeling or performing dimensionality reduction on observed spectra or other high-dimensional data with known but non-uniform observational uncertainties. The method uses an iterative inverse-variance-weighted least-squares minimization procedure to generate a best set of basis functions. The method is similar to principal components analysis (PCA), but with the substantial advantage that it uses measurement uncertainties in a responsible way and accounts naturally for poorly measured and missing data; it models the variance in the noise-deconvolved data space. A regularization can be applied, in the form of a smoothness prior (inspired by Gaussian processes) or a non-negative constraint, without making the method prohibitively slow. Because the method optimizes a justified scalar (related to the likelihood), the basis provides a better fit to the data in a probabilistic sense than any PCA basis. We test the method on Sloan Digital Sky Survey (SDSS) spectra, concentrating on spectra known to contain two redshift components: these are spectra of gravitational lens candidates and massive black hole binaries. We apply a hypothesis test to compare one-redshift and two-redshift models for these spectra, utilizing the data-driven model trained on a random subset of all SDSS spectra. This test confirms 129 of the 131 lens candidates in our sample and all of the known binary candidates, and turns up very few false positives.
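The iterative inverse-variance-weighted least-squares procedure corresponds to an alternating weighted least-squares factorization, sketched below. The data, ranks, and the small ridge term added for numerical safety are illustrative, and the smoothness/non-negativity regularizations mentioned in the abstract are omitted:

```python
import numpy as np

def hmf(X, W, k, n_iter=50, seed=0):
    """Heteroscedastic matrix factorization sketch: alternate
    inverse-variance-weighted least squares for coefficients A and basis G,
    minimizing sum_ij W_ij * (X_ij - (A @ G)_ij)**2."""
    n, m = X.shape
    rng = np.random.default_rng(seed)
    A, G = rng.standard_normal((n, k)), rng.standard_normal((k, m))
    for _ in range(n_iter):
        for i in range(n):                 # update each spectrum's coefficients
            Gw = G * W[i]                  # basis weighted by this row's W
            A[i] = np.linalg.solve(Gw @ G.T + 1e-9 * np.eye(k), Gw @ X[i])
        for j in range(m):                 # update each basis pixel
            Aw = A * W[:, j:j + 1]
            G[:, j] = np.linalg.solve(Aw.T @ A + 1e-9 * np.eye(k), Aw.T @ X[:, j])
    return A, G

rng = np.random.default_rng(1)
A0, G0 = rng.standard_normal((80, 3)), rng.standard_normal((3, 50))
sigma = 0.01 + 0.1 * rng.random((80, 50))  # known, non-uniform uncertainties
X = A0 @ G0 + sigma * rng.standard_normal((80, 50))
A, G = hmf(X, 1.0 / sigma**2, k=3)
chi2 = np.sum((X - A @ G) ** 2 / sigma**2) / X.size
```

Because every residual is weighted by its inverse variance, noisy or missing pixels (weight zero) simply drop out of the normal equations, which is the advantage over plain PCA that the abstract highlights.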
Shao, Meiyue; Aktulga, H. Metin; Yang, Chao; ...
2017-09-14
In this paper, we describe a number of recently developed techniques for improving the performance of large-scale nuclear configuration interaction calculations on high-performance parallel computers. We show the benefit of using a preconditioned block iterative method to replace the Lanczos algorithm that has traditionally been used to perform this type of computation. The rapid convergence of the block iterative method is achieved by a proper choice of starting guesses of the eigenvectors and the construction of an effective preconditioner. These acceleration techniques take advantage of special structure of the nuclear configuration interaction problem which we discuss in detail. The use of a block method also allows us to improve the concurrency of the computation, and take advantage of the memory hierarchy of modern microprocessors to increase the arithmetic intensity of the computation relative to data movement. Finally, we also discuss the implementation details that are critical to achieving high performance on massively parallel multi-core supercomputers, and demonstrate that the new block iterative solver is two to three times faster than the Lanczos-based algorithm for problems of moderate size on a Cray XC30 system.
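The two ingredients credited for the rapid convergence, good starting guesses and an effective preconditioner, can be illustrated with SciPy's LOBPCG block eigensolver on a synthetic sparse symmetric matrix. The matrix, the diagonal preconditioner and its damping are invented stand-ins for the CI Hamiltonian structure, not the paper's solver:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

# Sparse symmetric "CI-like" matrix: dominant diagonal plus weak couplings.
n, k = 2000, 4
rng = np.random.default_rng(0)
diag = np.sort(rng.random(n)) * 100
H = sp.diags(diag) + sp.random(n, n, density=5e-4, random_state=0) * 0.1
H = ((H + H.T) / 2).tocsr()

# Diagonal preconditioner (damped to avoid near-singular scaling) and
# starting guesses at the smallest diagonal entries: the two ingredients
# the abstract credits for rapid convergence.
shift = diag[k] + 1.0
M = sp.diags(1.0 / np.maximum(np.abs(diag - shift), 0.5))
X = np.eye(n, k)                           # unit vectors at smallest diagonals
vals, vecs = lobpcg(H, X, M=M, largest=False, tol=1e-6, maxiter=400)
res = np.linalg.norm(H @ vecs - vecs * vals)
```

The block method converges on all k eigenpairs at once, so the matrix-block products can be batched, which is the concurrency and arithmetic-intensity advantage the abstract describes.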
Evaluating the iterative development of VR/AR human factors tools for manual work.
Liston, Paul M; Kay, Alison; Cromie, Sam; Leva, Chiara; D'Cruz, Mirabelle; Patel, Harshada; Langley, Alyson; Sharples, Sarah; Aromaa, Susanna
2012-01-01
This paper outlines the approach taken to iteratively evaluate a set of VR/AR (virtual reality / augmented reality) applications for five different manual-work applications (terrestrial spacecraft assembly, assembly-line design, remote maintenance of trains, maintenance of nuclear reactors, and large-machine assembly process design), and examines the evaluation data for evidence of the effectiveness of the evaluation framework as well as the benefits to the development process of feedback from iterative evaluation. ManuVAR is an EU-funded research project that is working to develop an innovative technology platform and a framework to support high-value, high-knowledge manual work throughout the product lifecycle. The results of this study demonstrate the iterative improvements achieved throughout the design cycles, observable in the trends of the quantitative results from three successive trials of the applications and in the qualitative interview findings. The paper discusses the limitations of evaluation in complex, multi-disciplinary development projects and finds evidence for the effectiveness of the particular set of complementary evaluation methods used, which incorporated a common inquiry structure, particularly in facilitating triangulation of the data.
Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki
2017-01-01
Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1).
Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan
2017-12-15
Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1).
Overview of the US Fusion Materials Sciences Program
NASA Astrophysics Data System (ADS)
Zinkle, Steven
2004-11-01
The challenging fusion reactor environment (radiation, heat flux, chemical compatibility, thermo-mechanical stresses) requires utilization of advanced materials to fulfill the promise of fusion to provide safe, economical, and environmentally acceptable energy. This presentation reviews recent experimental and modeling highlights on structural materials for fusion energy. The materials requirements for fusion will be compared with other demanding technologies, including high temperature turbine components, proposed Generation IV fission reactors, and the current NASA space fission reactor project to explore the icy moons of Jupiter. A series of high-performance structural materials have been developed by fusion scientists over the past ten years with significantly improved properties compared to earlier materials. Recent advances in the development of high-performance ferritic/martensitic and bainitic steels, nanocomposited oxide dispersion strengthened ferritic steels, high-strength V alloys, improved-ductility Mo alloys, and radiation-resistant SiC composites will be reviewed. Multiscale modeling is providing important insight on radiation damage and plastic deformation mechanisms and fracture mechanics behavior. Electron microscope in-situ straining experiments are uncovering fundamental physical processes controlling deformation in irradiated metals. Fundamental modeling and experimental studies are determining the behavior of transmutant helium in metals, enabling design of materials with improved resistance to void swelling and helium embrittlement. Recent chemical compatibility tests have identified promising new candidates for magnetohydrodynamic insulators in lithium-cooled systems, and have established the basic compatibility of SiC with Pb-Li up to high temperature. Research on advanced joining techniques such as friction stir welding will be described. ITER materials research will be briefly summarized.
NASA Astrophysics Data System (ADS)
Jahandari, H.; Farquharson, C. G.
2017-11-01
Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinement than rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model, and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step, and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This approach alleviates the need to explicitly form the Hessian or Jacobian matrices, which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach, and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration, which greatly reduces the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model, while the second represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions adequately recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
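The matrix-free idea described above, namely solving the Gauss-Newton normal equations by conjugate gradients using only sensitivity matrix-vector products, can be sketched as follows. This is an illustrative toy, not the authors' code: a fixed linear operator stands in for the MT pseudo-forward problems, and all names and parameters are our own.

```python
import numpy as np

# Sketch: one Gauss-Newton step in which the normal system
# (J^T J + mu I) dm = J^T r is solved by conjugate gradients using only
# Jacobian-vector products, so J is never formed or factorized.
# In the MT setting each jvp/jtvp call would be a pseudo-forward problem.

def gauss_newton_step(jvp, jtvp, residual, n, mu=1e-8, cg_iters=50):
    """Solve (J^T J + mu I) dm = J^T residual by CG, matrix-free."""
    b = jtvp(residual)
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(cg_iters):
        Ap = jtvp(jvp(p)) + mu * p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-10:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy linear forward model d = J m (stands in for the MT operator).
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 5))
m_true = rng.standard_normal(5)
d_obs = J @ m_true

m = np.zeros(5)
for _ in range(3):                       # a few Gauss-Newton iterations
    r = d_obs - J @ m
    m = m + gauss_newton_step(lambda v: J @ v, lambda w: J.T @ w, r, n=5)
```

Because the model here is linear, the iteration converges to the true model almost immediately; in the nonlinear MT problem each step would also re-solve the forward problem.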
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franck, J. R.; McGaugh, S. S.
2016-12-10
The Candidate Cluster and Protocluster Catalog (CCPC) is a list of objects at redshifts z > 2 composed of galaxies with spectroscopically confirmed redshifts that are coincident on the sky and in redshift. These protoclusters are identified by searching for groups in volumes corresponding to the expected size of the most massive protoclusters at these redshifts. In CCPC1 we identified 43 candidate protoclusters among 14,000 galaxies between 2.74 < z < 3.71. Here we expand our search to more than 40,000 galaxies with spectroscopic redshifts z > 2.00, resulting in an additional 173 candidate structures. The most significant of these are 36 protoclusters with overdensities δ_gal > 7. We also identify three large proto-supercluster candidates containing multiple protoclusters at z = 2.3, 3.5 and z = 6.56. Eight candidates with N ≥ 10 galaxies are found at redshifts z > 4.0. The last system in the catalog is the most distant spectroscopic protocluster candidate known to date at z = 6.56.
Chevron beam dump for ITER edge Thomson scattering system.
Yatsuka, E; Hatae, T; Vayakis, G; Bassan, M; Itami, K
2013-10-01
This paper contains the design of the beam dump for the ITER edge Thomson scattering system and mainly concerns its lifetime under the harsh thermal and electromagnetic loads as well as tight space allocation. The lifetime was estimated from the multi-pulse laser-induced damage threshold. In order to extend its lifetime, the structure of the beam dump was optimized. A number of bent sheets aligned parallel in the beam dump form a shape called a chevron which enables it to avoid the concentration of the incident laser pulse energy. The chevron beam dump is expected to withstand thermal loads due to nuclear heating, radiation from the plasma, and numerous incident laser pulses throughout the entire ITER project with a reasonable margin for the peak factor of the beam profile. Structural analysis was also carried out in case of electromagnetic loads during a disruption. Moreover, detailed issues for more accurate assessments of the beam dump's lifetime are clarified. Variation of the bi-directional reflection distribution function (BRDF) due to erosion by or contamination of neutral particles derived from the plasma is one of the most critical issues that needs to be resolved. In this paper, the BRDF was assumed, and the total amount of stray light and the absorbed laser energy profile on the beam dump were evaluated.
Decentralized Control of Sound Radiation from an Aircraft-Style Panel Using Iterative Loop Recovery
NASA Technical Reports Server (NTRS)
Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.
2008-01-01
A decentralized LQG-based control strategy is designed to reduce low-frequency sound transmission through periodically stiffened panels. While modern control strategies have been used to reduce sound radiation from relatively simple structural acoustic systems, significant implementation issues have to be addressed before these control strategies can be extended to large systems such as the fuselage of an aircraft. For instance, centralized approaches typically require a high level of connectivity and are computationally intensive, while decentralized strategies face stability problems caused by the unmodeled interaction between neighboring control units. Since accurate uncertainty bounds are not known a priori, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is validated using real-time control experiments performed on a built-up aluminum test structure representative of the fuselage of an aircraft. Experiments demonstrate that the iterative approach is capable of achieving 12 dB peak reductions and a 3.6 dB integrated reduction in radiated sound power from the stiffened panel.
Real-Time Nonlocal Means-Based Despeckling.
Breivik, Lars Hofsoy; Snare, Sten Roar; Steen, Erik Normann; Solberg, Anne H Schistad
2017-06-01
In this paper, we propose a multiscale nonlocal means-based despeckling method for medical ultrasound. The multiscale approach leads to large computational savings and improves despeckling results over single-scale iterative approaches. We present two variants of the method. The first, denoted multiscale nonlocal means (MNLM), yields uniform robust filtering of speckle both in structured and homogeneous regions. The second, denoted unnormalized MNLM (UMNLM), is more conservative in regions of structure assuring minimal disruption of salient image details. Due to the popularity of anisotropic diffusion-based methods in the despeckling literature, we review the connection between anisotropic diffusion and iterative variants of NLM. These iterative variants in turn relate to our multiscale variant. As part of our evaluation, we conduct a simulation study making use of ground truth phantoms generated from clinical B-mode ultrasound images. We evaluate our method against a set of popular methods from the despeckling literature on both fine and coarse speckle noise. In terms of computational efficiency, our method outperforms the other considered methods. Quantitatively on simulations and on a tissue-mimicking phantom, our method is found to be competitive with the state-of-the-art. On clinical B-mode images, our method is found to effectively smooth speckle while preserving low-contrast and highly localized salient image detail.
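A minimal single-scale nonlocal means filter conveys the core idea of the method above. This is our simplified sketch, not the paper's multiscale MNLM/UMNLM algorithm; the patch size, search radius and smoothing parameter h are illustrative choices.

```python
import numpy as np

# Minimal single-scale NLM sketch: each pixel is replaced by a weighted
# average of nearby pixels, with weights given by the similarity of their
# patch neighbourhoods (similar patches -> large weight).

def nlm(img, patch=1, search=3, h=0.3):
    pad = patch + search
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = p[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    nb = p[ni - patch:ni + patch + 1,
                           nj - patch:nj + patch + 1]
                    w = np.exp(-np.mean((ref - nb) ** 2) / h ** 2)
                    wsum += w
                    acc += w * p[ni, nj]
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(3)
clean = np.ones((12, 12)); clean[:, 6:] = 2.0        # two flat regions
noisy = clean * (1 + 0.2 * rng.standard_normal(clean.shape))  # speckle-like
den = nlm(noisy, h=0.5)
```

In homogeneous regions many patches match, so the average strongly suppresses the multiplicative noise; the multiscale variants in the paper obtain the same effect at much lower cost.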
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dul, F.A.; Arczewski, K.
1994-03-01
Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability to solve extremely large eigenproblems, N = 216,000, for example, and to find a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10^3), as well as for large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A − σB)x = By are solved by various iterative methods; in particular, the conjugate gradient method can be used without danger of breaking down due to a property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
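Classical subspace iteration with a Rayleigh-Ritz projection, the first ingredient of the two-phase method, can be sketched as follows. This is our illustration, not the paper's implementation; in particular, the direct solve below is exactly the step the paper replaces with factorization-free conjugate-gradient solves.

```python
import numpy as np

# Sketch: subspace iteration with Rayleigh-Ritz projection for the
# leftmost eigenpairs of Ax = lambda*Bx (A, B symmetric, B positive
# definite).  Each sweep applies an inverse step and then solves the
# small projected eigenproblem exactly.

def subspace_iteration(A, B, p, iters=50):
    n = A.shape[0]
    rng = np.random.default_rng(1)
    X = rng.standard_normal((n, p))
    for _ in range(iters):
        Y = np.linalg.solve(A, B @ X)       # inverse step toward leftmost pairs
        Ar, Br = Y.T @ A @ Y, Y.T @ B @ Y   # Rayleigh-Ritz projection
        L = np.linalg.cholesky(Br)
        M = np.linalg.solve(L, np.linalg.solve(L, Ar).T)  # L^-1 Ar L^-T
        w, S = np.linalg.eigh(M)            # Ritz values, ascending
        X = Y @ np.linalg.solve(L.T, S)     # B-orthonormal Ritz vectors
    return w, X

A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
B = np.eye(6)
w, X = subspace_iteration(A, B, p=2)
```

On this diagonal test problem the two Ritz values converge to the two leftmost eigenvalues, and the Ritz vectors come out B-orthonormal by construction.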
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced by the undersampling in k-space. This paper introduces an improved iterative algorithm based on a p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover a fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
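A compact sketch of ISTA with a generalized p-thresholding shrinkage step follows. This is our reconstruction under stated assumptions: a random Gaussian matrix stands in for the undersampled Fourier operator of CS-MRI, and the particular p-threshold formula is one common choice, not necessarily the paper's exact function (it reduces to the usual soft threshold at p = 1).

```python
import numpy as np

# Sketch: ISTA for sparse recovery, with a generalized p-shrinkage step.
# For p = 1 the shrink rule is the standard soft threshold; p < 1 gives a
# non-convex penalty with less bias on large coefficients.

def p_threshold(x, lam, p):
    mag = np.abs(x)
    shrink = np.maximum(mag - lam * np.power(mag + 1e-12, p - 1.0), 0.0)
    return np.sign(x) * shrink

def ista(A, y, lam=0.05, p=1.0, iters=300):
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
        x = p_threshold(x - grad / L, lam / L, p)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100)) / np.sqrt(60)   # stands in for the MRI operator
x_true = np.zeros(100)
x_true[[5, 17, 42, 63, 88]] = [2.0, -1.5, 1.0, 2.5, -2.0]
y = A @ x_true                                     # under-sampled measurements
x_hat = ista(A, y, lam=0.02, p=1.0, iters=500)
```

With 60 measurements of a 5-sparse signal, the iteration drives the data residual down while the shrinkage step keeps the estimate sparse.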
DEVELOPMENT OF INTERATOMIC POTENTIALS IN TUNGSTEN-RHENIUM SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setyawan, Wahyu; Nandipati, Giridhar; Kurtz, Richard J.
2016-09-01
Reference data are generated using the ab initio method to fit interatomic potentials for the W-Re system. The reference data include single phases of W and Re, strained structures, slabs, systems containing several concentrations of vacancies, systems containing various types of interstitial defects, melt structures, structures in the σ and χ phases, and structures containing several concentrations of solid solutions of Re in bcc W and W in hcp Re. Future work will start the fitting iterations.
Analysis of Craniocardiac Malformations in Xenopus using Optical Coherence Tomography
Deniz, Engin; Jonas, Stephan; Hooper, Michael; N. Griffin, John; Choma, Michael A.; Khokha, Mustafa K.
2017-01-01
Birth defects affect 3% of children in the United States. Among the birth defects, congenital heart disease and craniofacial malformations are major causes of mortality and morbidity. Unfortunately, the genetic mechanisms underlying craniocardiac malformations remain largely uncharacterized. To address this, human genomic studies are identifying sequence variations in patients, resulting in numerous candidate genes. However, the molecular mechanisms of pathogenesis for most candidate genes are unknown. Therefore, there is a need for functional analyses in rapid and efficient animal models of human disease. Here, we coupled the frog Xenopus tropicalis with Optical Coherence Tomography (OCT) to create a fast and efficient system for testing craniocardiac candidate genes. OCT can image cross-sections of microscopic structures in vivo at resolutions approaching histology. Here, we identify optimal OCT imaging planes to visualize and quantitate Xenopus heart and facial structures establishing normative data. Next we evaluate known human congenital heart diseases: cardiomyopathy and heterotaxy. Finally, we examine craniofacial defects by a known human teratogen, cyclopamine. We recapitulate human phenotypes readily and quantify the functional and structural defects. Using this approach, we can quickly test human craniocardiac candidate genes for phenocopy as a critical first step towards understanding disease mechanisms of the candidate genes. PMID:28195132
Using Learning Trajectories for Teacher Learning to Structure Professional Development
ERIC Educational Resources Information Center
Bargagliotti, Anna E.; Anderson, Celia Rousseau
2017-01-01
As a result of the increased focus on data literacy and data science across the world, there has been a large demand for professional development in statistics. However, exactly how these professional development opportunities should be structured remains an open question. The purpose of this paper is to describe the first iteration of a design…
Complex Adaptive Systems and the Origins of Adaptive Structure: What Experiments Can Tell Us
ERIC Educational Resources Information Center
Cornish, Hannah; Tamariz, Monica; Kirby, Simon
2009-01-01
Language is a product of both biological and cultural evolution. Clues to the origins of key structural properties of language can be found in the process of cultural transmission between learners. Recent experiments have shown that iterated learning by human participants in the laboratory transforms an initially unstructured artificial language…
Lee, Hansang; Hong, Helen; Kim, Junmo
2014-12-01
We propose a graph-cut-based segmentation method for the anterior cruciate ligament (ACL) in knee MRI with a novel shape prior and label refinement. As the initial seeds for graph cuts, candidates for the ACL and the background are extracted from knee MRI roughly by means of adaptive thresholding with Gaussian mixture model fitting. The extracted ACL candidate is segmented iteratively by graph cuts with patient-specific shape constraints. Two shape constraints termed fence and neighbor costs are suggested such that the graph cuts prevent any leakage into adjacent regions with similar intensity. The segmented ACL label is refined by means of superpixel classification. Superpixel classification makes the segmented label propagate into missing inhomogeneous regions inside the ACL. In the experiments, the proposed method segmented the ACL with Dice similarity coefficient of 66.47±7.97%, average surface distance of 2.247±0.869, and root mean squared error of 3.538±1.633, which increased the accuracy by 14.8%, 40.3%, and 37.6% from the Boykov model, respectively.
Towards functional antibody-based vaccines to prevent pre-erythrocytic malaria infection.
Sack, Brandon; Kappe, Stefan H I; Sather, D Noah
2017-05-01
An effective malaria vaccine would be considered a milestone of modern medicine, yet has so far eluded research and development efforts. This can be attributed to the extreme complexity of the malaria parasites, presenting with a multi-stage life cycle, high genome complexity and the parasite's sophisticated immune evasion measures, particularly antigenic variation during pathogenic blood stage infection. However, the pre-erythrocytic (PE) early infection forms of the parasite exhibit relatively invariant proteomes, and are attractive vaccine targets as they offer multiple points of immune system attack. Areas covered: We cover the current state of and roadblocks to the development of an effective, antibody-based PE vaccine, including current vaccine candidates, limited biological knowledge, genetic heterogeneity, parasite complexity, and suboptimal preclinical models as well as the power of early stage clinical models. Expert commentary: PE vaccines will need to elicit broad and durable immunity to prevent infection. This could be achievable if recent innovations in studying the parasites' infection biology, rational vaccine selection and design as well as adjuvant formulation are combined in a synergistic and multipronged approach. Improved preclinical assays as well as the iterative testing of vaccine candidates in controlled human malaria infection trials will further accelerate this effort.
Development and validation of a new survey: Perceptions of Teaching as a Profession (PTaP)
NASA Astrophysics Data System (ADS)
Adams, Wendy
2017-01-01
To better understand the impact of efforts to train more science teachers such as the PhysTEC Project and to help with early identification of future teachers, we are developing the survey of Perceptions of Teaching as a Profession (PTaP) to measure students' views of teaching as a career, their interest in teaching and the perceived climate of physics departments towards teaching as a profession. The instrument consists of a series of statements which require a response using a 5-point Likert-scale and can be easily administered online. The survey items were drafted by a team of researchers and physics teacher candidates and then reviewed by an advisory committee of 20 physics teacher educators and practicing teachers. We conducted 27 interviews with both teacher candidates and non-teaching STEM majors. The survey was refined through an iterative process of student interviews and item clarification until all items were interpreted consistently and answered for consistent reasons. In this presentation the preliminary results from the student interviews as well as the results of item analysis and a factor analysis on 900 student responses will be shared.
ERIC Educational Resources Information Center
Kurtulus, Aytaç; Ada, Aytaç
2017-01-01
In this study, teacher candidates who had learnt to find the algebraic equation corresponding to the geometric structure of the ellipse in analytic geometry classes were asked to find the algebraic representations corresponding to structures that contained ellipses in different positions. Thus, it would be possible to determine higher order…
K-Partite RNA Secondary Structures
NASA Astrophysics Data System (ADS)
Jiang, Minghui; Tejada, Pedro J.; Lasisi, Ramoni O.; Cheng, Shanhong; Fechser, D. Scott
RNA secondary structure prediction is a fundamental problem in structural bioinformatics. The prediction problem is difficult because RNA secondary structures may contain pseudoknots formed by crossing base pairs. We introduce k-partite secondary structures as a simple classification of RNA secondary structures with pseudoknots. An RNA secondary structure is k-partite if it is the union of k pseudoknot-free sub-structures. Most known RNA secondary structures are either bipartite or tripartite. We show that there exists a constant number k such that any secondary structure can be modified into a k-partite secondary structure with approximately the same free energy. This offers a partial explanation of the prevalence of k-partite secondary structures with small k. We give a complete characterization of the computational complexities of recognizing k-partite secondary structures for all k ≥ 2, and show that this recognition problem is essentially the same as the k-colorability problem on circle graphs. We present two simple heuristics, iterated peeling and first-fit packing, for finding k-partite RNA secondary structures. For maximizing the number of base pair stackings, our iterated peeling heuristic achieves a constant approximation ratio of at most k for 2 ≤ k ≤ 5, and at most 6/(1 − (1 − 6/k)^k) ≤ 6/(1 − e^(−6)) < 6.01491 for k ≥ 6. Experiments on sequences from PseudoBase show that our first-fit packing heuristic outperforms the leading method HotKnots in predicting RNA secondary structures with pseudoknots. Source code, data set, and experimental results are available at
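The first-fit packing heuristic has a particularly simple form when base pairs are treated as intervals that must be split into crossing-free (pseudoknot-free) layers. This is our sketch of that core step only; the paper's heuristic additionally scores base pair stackings.

```python
# Sketch: first-fit packing of base pairs into pseudoknot-free layers.
# Two pairs (i, j) and (k, l) cross when i < k < j < l; a layer is
# pseudoknot-free iff it contains no crossing pair, so the number of
# layers used is an upper bound on the k in "k-partite".

def crosses(p, q):
    (i, j), (k, l) = sorted([p, q])
    return i < k < j < l

def first_fit_layers(pairs):
    layers = []
    for pair in sorted(pairs):
        for layer in layers:
            if not any(crosses(pair, q) for q in layer):
                layer.append(pair)   # first layer with no crossing wins
                break
        else:
            layers.append([pair])    # open a new layer
    return layers

# An H-type pseudoknot: (1, 10) and (5, 15) cross, so two layers are needed.
layers = first_fit_layers([(1, 10), (2, 9), (5, 15), (6, 14)])
```

Here the nested pairs (1, 10)/(2, 9) share a layer, and the crossing pairs (5, 15)/(6, 14) form a second layer, i.e. the structure is bipartite.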
NASA Technical Reports Server (NTRS)
Ogletree, G.; Coccoli, J.; Mckern, R.; Smith, M.; White, R.
1972-01-01
The results of analytical and simulation studies of the stellar-inertial measurement system (SIMS) for an earth observation satellite are presented. Subsystem design analyses and sensor design trades are reported. Three candidate systems are considered: (1) structure-mounted gyros with structure-mounted star mapper, (2) structure-mounted gyros with gimbaled star tracker, and (3) gimbaled gyros with structure-mounted star mapper. The purpose of the study is to facilitate the decisions pertaining to gimbaled versus structure-mounted gyros and star sensors, and combinations of systems suitable for the EOS satellite.
Big Data Challenges in Global Seismic 'Adjoint Tomography' (Invited)
NASA Astrophysics Data System (ADS)
Tromp, J.; Bozdag, E.; Krischer, L.; Lefebvre, M.; Lei, W.; Smith, J.
2013-12-01
The challenge of imaging Earth's interior on a global scale is closely linked to the challenge of handling large data sets. The related iterative workflow involves five distinct phases, namely, 1) data gathering and culling, 2) synthetic seismogram calculations, 3) pre-processing (time-series analysis and time-window selection), 4) data assimilation and adjoint calculations, 5) post-processing (pre-conditioning, regularization, model update). In order to implement this workflow on modern high-performance computing systems, a new seismic data format is being developed. The Adaptable Seismic Data Format (ASDF) is designed to replace currently used data formats with a more flexible format that allows for fast parallel I/O. The metadata is divided into abstract categories, such as "source" and "receiver", along with provenance information for complete reproducibility. The structure of ASDF is designed keeping in mind three distinct applications: earthquake seismology, seismic interferometry, and exploration seismology. Existing time-series analysis tool kits, such as SAC and ObsPy, can be easily interfaced with ASDF so that seismologists can use robust, previously developed software packages. ASDF accommodates an automated, efficient workflow for global adjoint tomography. Manually managing the large number of simulations associated with the workflow can rapidly become a burden, especially with increasing numbers of earthquakes and stations. Therefore, it is of importance to investigate the possibility of automating the entire workflow. Scientific Workflow Management Software (SWfMS) allows users to execute workflows almost routinely. SWfMS provides additional advantages. In particular, it is possible to group independent simulations in a single job to fit the available computational resources. They also give a basic level of fault resilience as the workflow can be resumed at the correct state preceding a failure. 
Some of the best candidates for our particular workflow are Kepler and Swift, and the latter appears to be the most serious candidate for a large-scale workflow on a single supercomputer, remaining sufficiently simple to accommodate further modifications and improvements.
Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M
2014-01-01
Digital breast tomosynthesis (DBT) has strong promise to improve sensitivity for detecting breast cancer. DBT reconstruction estimates the breast tissue attenuation using projection views (PVs) acquired in a limited angular range. Because of the limited field of view (FOV) of the detector, the PVs may not completely cover the breast in the x-ray source motion direction at large projection angles. The voxels in the imaged volume cannot be updated when they are outside the FOV, thus causing a discontinuity in intensity across the FOV boundaries in the reconstructed slices, which we refer to as the truncated projection artifact (TPA). Most existing TPA reduction methods were developed for the filtered backprojection method in the context of computed tomography. In this study, we developed a new diffusion-based method to reduce TPAs during DBT reconstruction using the simultaneous algebraic reconstruction technique (SART). Our TPA reduction method compensates for the discontinuity in background intensity outside the FOV of the current PV after each PV updating in SART. The difference in voxel values across the FOV boundary is smoothly diffused to the region beyond the FOV of the current PV. Diffusion-based background intensity estimation is performed iteratively to avoid structured artifacts. The method is applicable to TPA in both the forward and backward directions of the PVs and for any number of iterations during reconstruction. The effectiveness of the new method was evaluated by comparing the visual quality of the reconstructed slices and the measured discontinuities across the TPA with and without artifact correction at various iterations. The results demonstrated that the diffusion-based intensity compensation method reduced the TPA while preserving the detailed tissue structures. The visibility of breast lesions obscured by the TPA was improved after artifact reduction. PMID:23318346
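A one-dimensional caricature of the idea, estimating the intensity jump at the FOV boundary and diffusing it smoothly into the truncated region, might look like the following. This is our simplified construction, not the authors' algorithm; the exponential decay profile and its width are assumptions standing in for the paper's iterative diffusion.

```python
import numpy as np

# Sketch: after a PV update, voxels inside the FOV have been updated while
# those beyond it have not, leaving a step at the FOV edge (the truncated
# projection artifact).  The step is estimated at the boundary and blended
# smoothly into the region outside the FOV instead of being left as a
# hard discontinuity.

def diffuse_step(row, fov_end, n_smooth=20):
    """Blend the intensity jump at index fov_end into the truncated region."""
    out = row.copy()
    jump = row[fov_end - 1] - row[fov_end]
    # Exponentially decaying compensation beyond the FOV boundary
    decay = np.exp(-np.arange(len(row) - fov_end) / (n_smooth / 3.0))
    out[fov_end:] += jump * decay
    return out

row = np.concatenate([np.full(50, 1.0), np.full(50, 0.4)])  # step artifact
fixed = diffuse_step(row, fov_end=50)
```

The compensated profile is continuous at the boundary and relaxes back to the original background far from it, which is the qualitative behavior the paper's diffusion-based estimate achieves in 3D.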
NASA Astrophysics Data System (ADS)
Suzuki, S.; Enoeda, M.; Hatano, T.; Hirose, T.; Hayashi, K.; Tanigawa, H.; Ochiai, K.; Nishitani, T.; Tobita, K.; Akiba, M.
2006-02-01
This paper presents the significant progress made in the research and development (R&D) of key technologies on the water-cooled solid breeder blanket for the ITER test blanket modules in JAERI. Development of module fabrication technology, bonding technology of armours, measurement of thermo-mechanical properties of pebble beds, neutronics studies on a blanket module mockup and tritium release behaviour from a Li2TiO3 pebble bed under neutron-pulsed operation conditions are summarized. With the improvement of the heat treatment process for blanket module fabrication, a fine-grained microstructure of F82H can be obtained by homogenizing it at 1150 °C followed by normalizing it at 930 °C after the hot isostatic pressing process. Moreover, a promising bonding process for a tungsten armour and an F82H structural material was developed using a solid-state bonding method based on uniaxial hot compression without any artificial compliant layer. As a result of high heat flux tests of F82H first wall mockups, it has been confirmed that a fatigue lifetime correlation, which was developed for the ITER divertor, can be made applicable for the F82H first wall mockup. As for R&D on the breeder material, Li2TiO3, the effect of compression loads on effective thermal conductivity of pebble beds has been clarified for the Li2TiO3 pebble bed. The tritium breeding ratio of a simulated multi-layer blanket structure has successfully been measured using 14 MeV neutrons with an accuracy of 10%. The tritium release rate from the Li2TiO3 pebble has also been successfully measured with pulsed neutron irradiation, which simulates ITER operation.
A new solution procedure for a nonlinear infinite beam equation of motion
NASA Astrophysics Data System (ADS)
Jang, T. S.
2016-10-01
The goal of this paper is a purely theoretical question, one that is nevertheless fundamental in computational partial differential equations: can a linear solution-structure for the equation of motion of an infinite nonlinear beam be directly manipulated to construct its nonlinear solution? Here, the equation of motion is modeled mathematically as a fourth-order nonlinear partial differential equation. To answer the question, a pseudo-parameter is first introduced to modify the equation of motion. An integral formalism for the modified equation is then derived, which is taken as the linear solution-structure. It enables us to formulate a nonlinear integral equation of the second kind, equivalent to the original equation of motion. The fixed point approach, applied to the integral equation, yields a new iterative solution procedure for constructing the nonlinear solution of the original beam equation of motion; the iterative process requires only simple, regular numerical integration, so the method is straightforward to apply. A mathematical analysis of both the convergence and the uniqueness of the iterative procedure is carried out by proving the contractive character of a nonlinear operator. It follows, therefore, that the method is a useful nonlinear strategy for integrating the equation of motion of a nonlinear infinite beam, whereby the preceding question may be answered. In addition, it is worth noting that the pseudo-parameter introduced here plays a double role: first, it connects the original beam equation of motion with the integral equation; second, it is related to the convergence of the iterative method proposed here.
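The same fixed-point structure can be demonstrated on a simpler nonlinear Fredholm integral equation of the second kind. This is our toy analogue of the paper's beam formulation; the kernel, the sine nonlinearity and all parameters are illustrative, chosen so that the Banach contraction condition holds.

```python
import numpy as np

# Toy analogue: solve u(x) = f(x) + lam * int_0^1 K(x, s) sin(u(s)) ds
# by Banach fixed-point iteration with plain trapezoidal quadrature --
# the same "regular numerical integration only" structure the paper
# exploits.  Contraction holds because lam * max_x int_0^1 |K(x, s)| ds < 1.

def solve_fixed_point(f, K, lam, n=201, iters=100, tol=1e-12):
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    w = np.full(n, h)                    # trapezoidal quadrature weights
    w[0] = w[-1] = h / 2.0
    Kmat = K(x[:, None], x[None, :])
    u = f(x)
    for _ in range(iters):
        u_new = f(x) + lam * (Kmat * np.sin(u)[None, :]) @ w
        if np.max(np.abs(u_new - u)) < tol:
            return x, u_new
        u = u_new
    return x, u

x, u = solve_fixed_point(lambda t: t, lambda a, b: a * b, lam=0.5)
```

Each sweep costs only one quadrature per grid point, and the contraction constant bounds the error reduction per iteration, which is the convergence mechanism the paper proves for the beam operator.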
On-orbit damage detection and health monitoring of large space trusses: Status and critical issues
NASA Technical Reports Server (NTRS)
Kashangaki, Thomas A. L.
1991-01-01
The long lifetimes, delicate nature and stringent pointing requirements of large space structures such as Space Station Freedom and geostationary Earth sciences platforms might require that these spacecraft be monitored periodically for possible damage to the load carrying structures. A review of the literature in damage detection and health monitoring of such structures is presented, along with a candidate structure to be used as a testbed for future work in this field. A unified notation and terminology is also proposed to facilitate comparisons between candidate methods.
2D and 3D registration methods for dual-energy contrast-enhanced digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lau, Kristen C.; Roth, Susan; Maidment, Andrew D. A.
2014-03-01
Contrast-enhanced digital breast tomosynthesis (CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania is conducting a CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). A hybrid subtraction scheme is proposed. First, dual-energy (DE) images are obtained by a weighted logarithmic subtraction of the high-energy and low-energy image pairs. Then, post-contrast DE images are subtracted from the pre-contrast DE image. This hybrid temporal subtraction of DE images is performed to analyze iodine uptake, but suffers from motion artifacts. Employing image registration further helps to correct for motion, enhancing the evaluation of vascular kinetics. Registration using ANTS (Advanced Normalization Tools) is performed in an iterative manner. Mutual information optimization first corrects large-scale motions. Normalized cross-correlation optimization then iteratively corrects fine-scale misalignment. Two methods have been evaluated: a 2D method using a slice-by-slice approach, and a 3D method using a volumetric approach to account for out-of-plane breast motion. Our results demonstrate that iterative registration qualitatively improves with each iteration (five iterations total). Motion artifacts near the edge of the breast are corrected effectively and structures within the breast (e.g. blood vessels, surgical clip) are better visualized. Statistical and clinical evaluations of registration accuracy in the CE-DBT images are ongoing.
Seismic Design of ITER Component Cooling Water System-1 Piping
NASA Astrophysics Data System (ADS)
Singh, Aditya P.; Jadhav, Mahesh; Sharma, Lalit K.; Gupta, Dinesh K.; Patel, Nirav; Ranjan, Rakesh; Gohil, Guman; Patel, Hiren; Dangi, Jinendra; Kumar, Mohit; Kumar, A. G. A.
2017-04-01
The successful performance of the ITER machine depends greatly on the effective removal of heat from the in-vessel components and other auxiliary systems during Tokamak operation. This objective will be accomplished by the design of an effective Cooling Water System (CWS). An optimized piping layout is an important element of CWS design, and is one of the major design challenges owing to large thermal expansions and seismic accelerations, considered together with safety, accessibility and maintainability aspects. An important sub-system of the ITER CWS, the Component Cooling Water System-1 (CCWS-1), has very large pipe diameters, up to DN1600, with many intersections to fulfill the process flow requirements of clients for heat removal. Pipe intersections are the weakest links in the layout due to their high stress intensification factors. CCWS-1 piping up to the secondary confinement isolation valves, as well as in between these isolation valves, needs to survive a Seismic Level-2 (SL-2) earthquake during the Tokamak operation period to ensure structural stability of the system in a Safe Shutdown Earthquake (SSE) event. This paper presents the design, qualification and optimization of the layout of the ITER CCWS-1 loop to withstand an SSE event combined with sustained and thermal loads, as per the load combinations defined by ITER and the allowable limits of ASME B31.3. This paper also highlights the modal and response spectrum analyses performed to determine the natural frequencies and system behavior during the seismic event.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.
2005-08-01
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the ''once-through'' time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the ''once-through'' time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase ''inner loop'' iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Tong, Qing-zhen; Huang, Sheng; Wang, Yong
2013-11-01
An effective hierarchical reliable belief propagation (HRBP) decoding algorithm is proposed according to the structural characteristics of systematically constructed Gallager low-density parity-check (SCG-LDPC) codes. The novel decoding algorithm combines layered iteration with a reliability judgment, and can greatly reduce the number of variable nodes involved in subsequent iterations and accelerate the convergence rate. Simulation results for the SCG-LDPC(3969,3720) code show that the HRBP decoding algorithm greatly reduces the computational load while maintaining performance compared with the traditional belief propagation (BP) algorithm. The bit error rate (BER) of the HRBP algorithm is comparable at a threshold value of 15, and in the subsequent iteration process the number of variable nodes for the HRBP algorithm can be reduced by about 70% at high signal-to-noise ratio (SNR) compared with the BP algorithm. When the threshold value is increased further, the HRBP algorithm gradually degenerates into the layered BP algorithm, but at a BER of 10^-7 and a maximal iteration number of 30, the net coding gain (NCG) of the HRBP algorithm is 0.2 dB more than that of the BP algorithm, and the average number of iterations can be reduced by about 40% at high SNR. The novel HRBP decoding algorithm is therefore well suited to optical communication systems.
Dynamic/Jitter Assessment of Multiple Potential HabEx Structural Designs
NASA Technical Reports Server (NTRS)
Knight, J. Brent; Stahl, H. Philip; Singleton, Andrew William; Hunt, Ronald A.; Therrell, Melissa F.; Caldwell, Mary Kathryn; Garcia, Jay Clarke
2017-01-01
The 2020 Decadal Survey in Astronomy and Astrophysics will assess candidate large missions to follow the James Webb Space Telescope (JWST) and the Wide Field Infrared Survey Telescope (WFIRST). One candidate mission is the Habitable ExoPlanet Imaging Mission (HabEx). This presentation describes two HabEx structural designs and results from structural dynamic analyses performed to predict Primary Mirror (PM) to Secondary Mirror (SM) Line of Sight (LOS) stability (jitter) due to Reaction Wheel Assembly (RWA) vibrations.
5.0 Aerodynamic and Propulsive Decelerator Systems
NASA Technical Reports Server (NTRS)
Cruz, Juan R.; Powell, Richard; Masciarelli, James; Brown, Glenn; Witkowski, Al; Guernsey, Carl
2005-01-01
Contents include the following: Introduction. Capability Breakdown Structure. Decelerator Functions. Candidate Solutions. Performance and Technology. Capability State-of-the-Art. Performance Needs. Candidate Configurations. Possible Technology Roadmaps. Capability Roadmaps.
Jamison, Christopher R; Badillo, Joseph J; Lipshultz, Jeffrey M; Comito, Robert J; MacMillan, David W C
2017-12-01
In nature, many organisms generate large families of natural product metabolites that have related molecular structures as a means to increase functional diversity and gain an evolutionary advantage against competing systems within the same environment. One pathway commonly employed by living systems to generate these large classes of structurally related families is oligomerization, wherein a series of enzymatically catalysed reactions is employed to generate secondary metabolites by iteratively appending monomers to a growing serial oligomer chain. The polypyrroloindolines are an interesting class of oligomeric natural products that consist of multiple cyclotryptamine subunits. Herein we describe an iterative application of asymmetric copper catalysis towards the synthesis of six distinct oligomeric polypyrroloindoline natural products: hodgkinsine, hodgkinsine B, idiospermuline, quadrigemine H and isopsychotridine B and C. Given the customizable nature of the small-molecule catalysts employed, we demonstrate that this strategy is further amenable to the construction of quadrigemine H-type alkaloids not isolated previously from natural sources.
Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, B also being positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied to determine each root individually along with its vector. The algorithm fully exploits the banded form of the relevant matrices, and the associated program, written in FORTRAN V for the JPL UNIVAC 1108 computer, proves to be significantly more economical than similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
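The shifted inverse iteration at the heart of this method can be sketched in a few lines of Python. This is an illustrative 2-by-2 toy with B equal to the identity, not the banded FORTRAN V program described above:

```python
def solve2x2(M, b):
    """Solve a 2x2 linear system M x = b via Cramer's rule."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(b[0]*M[1][1] - M[0][1]*b[1]) / det,
            (M[0][0]*b[1] - M[1][0]*b[0]) / det]

def inverse_iteration(A, B, shift, iters=50):
    """Shifted inverse iteration for A q = lambda B q.

    Each step solves (A - shift*B) x = B q; the iterate converges to
    the eigenvector whose eigenvalue lies closest to the shift."""
    M = [[A[i][j] - shift*B[i][j] for j in range(2)] for i in range(2)]
    q = [1.0, 0.0]
    for _ in range(iters):
        rhs = [sum(B[i][j]*q[j] for j in range(2)) for i in range(2)]
        x = solve2x2(M, rhs)
        norm = max(abs(v) for v in x)
        q = [v / norm for v in x]
    # Rayleigh quotient gives the eigenvalue estimate
    Aq = [sum(A[i][j]*q[j] for j in range(2)) for i in range(2)]
    Bq = [sum(B[i][j]*q[j] for j in range(2)) for i in range(2)]
    lam = (sum(q[i]*Aq[i] for i in range(2))
           / sum(q[i]*Bq[i] for i in range(2)))
    return lam, q

# A has eigenvalues 1 and 3 (B = identity); a shift of 0.9 isolates lambda = 1,
# the root that a Sturm sequence count would bracket near the shift.
A = [[2.0, 1.0], [1.0, 2.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
lam, q = inverse_iteration(A, B, 0.9)
```

In the full algorithm the Sturm sequence count supplies the bracketing interval that determines the shift, and the linear solves exploit the band structure rather than Cramer's rule.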
Huang, Zhihao; Zhao, Junfei; Wang, Zimu; Meng, Fanying; Ding, Kunshan; Pan, Xiangqiang; Zhou, Nianchen; Li, Xiaopeng; Zhang, Zhengbiao; Zhu, Xiulin
2017-10-23
Orthogonal maleimide and thiol deprotections were combined with thiol-maleimide coupling to synthesize discrete oligomers/macromolecules on a gram scale with molecular weights up to 27.4 kDa (128mer, 7.9 g) using an iterative exponential growth strategy with a degree of polymerization (DP) of 2^n - 1. Using the same chemistry, a "readable" sequence-defined oligomer and a discrete cyclic topology were also created. Furthermore, uniform dendrons were fabricated using sequential growth (DP = 2^n - 1) or double exponential dendrimer growth approaches (DP = 2^(2^n) - 1) with significantly accelerated growth rates. A versatile, efficient, and metal-free method for construction of discrete oligomers with tailored structures and a high growth rate would greatly facilitate research into the structure-property relationships of sophisticated polymeric materials. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
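The quoted growth laws follow from simple recurrences. The sketch below is our own arithmetic check in Python, not the authors' chemistry: joining two identical chains through one linker doubles DP plus one, while dendrimer-style double exponential growth couples a chain to itself DP + 1 times per stage.

```python
def iterative_exponential_growth(steps):
    """Each coupling joins two identical chains through one linker:
    DP -> 2*DP + 1, giving 1, 3, 7, 15, ... = 2^n - 1."""
    dp = 1
    seq = [dp]
    for _ in range(steps):
        dp = 2 * dp + 1
        seq.append(dp)
    return seq

def double_exponential_growth(steps):
    """Double exponential dendrimer growth:
    DP -> (DP + 1)**2 - 1, giving 1, 3, 15, 255, ... = 2^(2^n) - 1."""
    dp = 1
    seq = [dp]
    for _ in range(steps):
        dp = (dp + 1) ** 2 - 1
        seq.append(dp)
    return seq
```

Three double exponential stages already reach DP = 255, illustrating why that route accelerates growth so sharply compared with simple doubling.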
On nonlinear finite element analysis in single-, multi- and parallel-processors
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R.; Islam, M.; Salama, M.
1982-01-01
Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, thereby establishing the feasibility of the finite element method for nonlinear analysis. Organization and flow of data for various types of digital computers, such as single-processor/single-level-memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel processors, with and without substructuring (i.e. partitioning), are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparable-size substructures to parallel processors is exploited. Under Cholesky type factorization schemes, the efficiency of parallel processing is shown to decrease due to occasionally shared data, just as it does due to shared facilities.
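The correspondence between each Newton-Raphson step and a linear problem can be illustrated with a minimal one-degree-of-freedom sketch (a hypothetical hardening spring, not taken from the paper):

```python
def newton_raphson(residual, tangent, u0, tol=1e-10, max_iter=50):
    """Solve R(u) = 0 by Newton-Raphson iteration.

    Each step solves the linearized problem K_t * du = -R(u),
    i.e. one linear analysis with the current tangent stiffness."""
    u = u0
    for _ in range(max_iter):
        r = residual(u)
        if abs(r) < tol:
            return u
        u -= r / tangent(u)
    raise RuntimeError("Newton-Raphson did not converge")

# Hardening spring: internal force k*u + a*u**3 balancing external load f
k, a, f = 100.0, 10.0, 150.0
R = lambda u: k*u + a*u**3 - f   # residual (out-of-balance force)
Kt = lambda u: k + 3*a*u**2      # tangent stiffness dR/du
u = newton_raphson(R, Kt, u0=0.0)
```

In a finite element setting u, R and K_t become the displacement vector, residual vector and tangent stiffness matrix, and the scalar division becomes the linear solve whose factorization dominates the costs discussed above.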
NASA Astrophysics Data System (ADS)
Litnovsky, A.; Philipps, V.; Wienhold, P.; Krieger, K.; Kirschner, A.; Borodin, D.; Sergienko, G.; Schmitz, O.; Kreter, A.; Samm, U.; Richter, S.; Breuer, U.; Textor Team
2009-04-01
Castellation is foreseen for the first wall and divertor area in ITER. Concern about fuel accumulation and impurity deposition in the gaps of castellated structures calls for dedicated studies. Recently, a tungsten castellated limiter with rectangular and roof-like shaped cells was exposed to SOL plasmas in TEXTOR. After exposure, roughly two times less fuel was found in the gaps between the shaped cells, whereas the difference in carbon deposition was less pronounced. Up to 70 at.% of tungsten was found intermixed in the deposited layers in the gaps. The metal fraction in the deposit decreases rapidly with depth into the gap. Modeling of carbon deposition in poloidal gaps provided qualitative agreement with experiment. A significant anisotropy of the C and D distributions in the toroidal gaps was measured.
NASA Astrophysics Data System (ADS)
Karaoǧlu, Haydar; Romanowicz, Barbara
2018-06-01
We present a global upper-mantle shear wave attenuation model that is built through a hybrid full-waveform inversion algorithm applied to long-period waveforms, using the spectral element method for wavefield computations. Our inversion strategy is based on an iterative approach that involves the inversion for successive updates in the attenuation parameter (δ Q^{-1}_μ) and elastic parameters (isotropic velocity VS, and radial anisotropy parameter ξ) through a Gauss-Newton-type optimization scheme that employs envelope- and waveform-type misfit functionals for the two steps, respectively. We also include source and receiver terms in the inversion steps for attenuation structure. We conducted a total of eight iterations (six for attenuation and two for elastic structure), and one inversion for updates to source parameters. The starting model included the elastic part of the relatively high-resolution 3-D whole mantle seismic velocity model, SEMUCB-WM1, which served to account for elastic focusing effects. The data set is a subset of the three-component surface waveform data set, filtered between 400 and 60 s, that contributed to the construction of the whole-mantle tomographic model SEMUCB-WM1. We applied strict selection criteria to this data set for the attenuation iteration steps, and investigated the effect of attenuation crustal structure on the retrieved mantle attenuation structure. While a constant 1-D Qμ model with a constant value of 165 throughout the upper mantle was used as starting model for attenuation inversion, we were able to recover, in depth extent and strength, the high-attenuation zone present in the depth range 80-200 km. The final 3-D model, SEMUCB-UMQ, shows strong correlation with tectonic features down to 200-250 km depth, with low attenuation beneath the cratons, stable parts of continents and regions of old oceanic crust, and high attenuation along mid-ocean ridges and backarcs. 
Below 250 km, we observe strong attenuation in the southwestern Pacific and eastern Africa, while low attenuation zones fade beneath most of the cratons. The strong negative correlation of Q^{-1}_μ and VS anomalies at shallow upper-mantle depths points to a common dominant origin for the two, likely due to variations in thermal structure. A comparison with two other global upper-mantle attenuation models shows promising consistency. As we updated the elastic 3-D model in alternate iterations, we found that the VS part of the model was stable, while the ξ structure evolution was more pronounced, indicating that it may be important to include 3-D attenuation effects when inverting for ξ, possibly due to the influence of dispersion corrections on this less well-constrained parameter.
Schenk, Emily R; Nau, Frederic; Fernandez-Lima, Francisco
2015-06-01
The ability to correlate experimental ion mobility data with candidate structures from theoretical modeling provides a powerful analytical and structural tool for the characterization of biomolecules. In the present paper, a theoretical workflow is described to generate and assign candidate structures for experimental trapped ion mobility and H/D exchange (HDX-TIMS-MS) data following molecular dynamics simulations and statistical filtering. The applicability of the theoretical predictor is illustrated for a peptide and protein example with multiple conformations and kinetic intermediates. The described methodology yields a low computational cost and a simple workflow by incorporating statistical filtering and molecular dynamics simulations. The workflow can be adapted to different IMS scenarios and CCS calculators for a more accurate description of the IMS experimental conditions. For the case of the HDX-TIMS-MS experiments, molecular dynamics in the "TIMS box" accounts for a better sampling of the molecular intermediates and local energy minima.
Design, analysis and test verification of advanced encapsulation systems
NASA Technical Reports Server (NTRS)
Garcia, A., III; Kallis, J. M.; Trucker, D. C.
1983-01-01
Analytical models were developed to perform optical, thermal, electrical and structural analyses on candidate encapsulation systems. From these analyses several candidate encapsulation systems were selected for qualification testing.
NASA Astrophysics Data System (ADS)
Flores, A. N.; Pathak, C. S.; Senarath, S. U.; Bras, R. L.
2009-12-01
Robust hydrologic monitoring networks represent a critical element of decision support systems for effective water resource planning and management. Moreover, process representation within hydrologic simulation models is steadily improving, while at the same time computational costs are decreasing due to, for instance, readily available high performance computing resources. The ability to leverage these increasingly complex models together with the data from these monitoring networks to provide accurate and timely estimates of relevant hydrologic variables within a multiple-use, managed water resources system would substantially enhance the information available to resource decision makers. Numerical data assimilation techniques provide mathematical frameworks through which uncertain model predictions can be constrained to observational data to compensate for uncertainties in the model forcings and parameters. In ensemble-based data assimilation techniques such as the ensemble Kalman Filter (EnKF), information in observed variables such as canal, marsh and groundwater stages are propagated back to the model states in a manner related to: (1) the degree of certainty in the model state estimates and observations, and (2) the cross-correlation between the model states and the observable outputs of the model. However, the ultimate degree to which hydrologic conditions can be accurately predicted in an area of interest is controlled, in part, by the configuration of the monitoring network itself. In this proof-of-concept study we developed an approach by which the design of an existing hydrologic monitoring network is adapted to iteratively improve the predictions of hydrologic conditions within an area of the South Florida Water Management District (SFWMD). 
The objective of the network design is to minimize prediction errors of key hydrologic states and fluxes produced by the spatially distributed Regional Simulation Model (RSM), developed specifically to simulate the hydrologic conditions in several intensively managed and hydrologically complex watersheds within the SFWMD system. In a series of synthetic experiments RSM is used to generate the notionally true hydrologic state and the relevant observational data. The EnKF is then used as the mechanism to fuse RSM hydrologic estimates with data from the candidate network. The performance of the candidate network is measured by the prediction errors of the EnKF estimates of hydrologic states, relative to the notionally true scenario. The candidate network is then adapted by relocating existing observational sites to unobserved areas where predictions of local hydrologic conditions are most uncertain and the EnKF procedure repeated. Iteration of the monitoring network continues until further improvements in EnKF-based predictions of hydrologic conditions are negligible.
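The EnKF analysis step that fuses model estimates with observations can be sketched for a single scalar state (a toy illustration with made-up stage values, not the RSM/SFWMD implementation):

```python
import random

def enkf_update_scalar(ensemble, obs, obs_var, h=lambda x: x, rng=None):
    """Scalar ensemble Kalman filter analysis step.

    The gain K = cov(x, h(x)) / (var(h(x)) + obs_var) weights the
    innovation by the relative certainty of the model states and the
    observation, as described in the text."""
    rng = rng or random.Random(0)
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    xm = sum(ensemble) / n
    hm = sum(hx) / n
    cov_xh = sum((x - xm) * (y - hm) for x, y in zip(ensemble, hx)) / (n - 1)
    var_h = sum((y - hm) ** 2 for y in hx) / (n - 1)
    gain = cov_xh / (var_h + obs_var)
    # Perturbed-observation EnKF: each member assimilates a noisy copy
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - y)
            for x, y in zip(ensemble, hx)]

prior = [9.0, 10.0, 11.0, 10.5, 9.5]   # ensemble of stage forecasts (toy)
posterior = enkf_update_scalar(prior, obs=12.0, obs_var=0.25)
```

The update pulls the ensemble toward the observation in proportion to the gain; in the network design loop above, the spread of such posterior ensembles at unobserved locations is what flags where monitoring sites should be relocated.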
NASA Astrophysics Data System (ADS)
Sendek, Austin D.; Yang, Qian; Cubuk, Ekin D.; Duerloo, Karel-Alexander N.; Cui, Yi; Reed, Evan J.
We present a new type of large-scale computational screening approach for identifying promising candidate materials for solid state electrolytes for lithium ion batteries that is capable of screening all known lithium containing solids. To predict the likelihood of a candidate material exhibiting high lithium ion conductivity, we leverage machine learning techniques to train an ionic conductivity classification model using logistic regression based on experimental measurements reported in the literature. This model, which is built on easily calculable atomistic descriptors, provides new insight into the structure-property relationship for superionic behavior in solids and is approximately one million times faster to evaluate than DFT-based approaches to calculating diffusion coefficients or migration barriers. We couple this model with several other technologically motivated heuristics to reduce the list of candidate materials from the more than 12,000 known lithium containing solids to 21 structures that show promise as electrolytes, few of which have been examined experimentally. Our screening utilizes structures and electronic information contained in the Materials Project database. This work is supported by an Office of Technology Licensing Fellowship through the Stanford Graduate Fellowship Program and a seed grant from the TomKat Center for Sustainable Energy at Stanford.
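The classification step can be sketched as a plain logistic regression trained by gradient descent. The descriptors and data below are hypothetical placeholders, not the atomistic descriptors or measurements used in the study:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Gradient-descent logistic regression: learns weights w, b so that
    sigmoid(w.x + b) approximates P(superionic | descriptors)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi                     # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical descriptors per material (invented for illustration):
# (Li-Li distance proxy, anion packing proxy)
X = [(0.2, 0.9), (0.3, 0.8), (0.8, 0.2), (0.9, 0.1)]
y = [1, 1, 0, 0]   # 1 = reported fast ion conductor, 0 = poor conductor
w, b = train_logistic(X, y)
```

Once trained, the model scores a new candidate in microseconds, which is what makes screening all known lithium-containing solids tractable compared with a DFT migration-barrier calculation per structure.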
Hrabosky, Joshua I.; White, Marney A.; Masheb, Robin M.; Rothschild, Bruce S.; Burke-Martindale, Carolyn H.; Grilo, Carlos M.
2013-01-01
Objective Despite increasing use of the Eating Disorder Examination-Questionnaire (EDE-Q) in bariatric surgery patients, little is known about the utility and psychometric performance of this self-report measure in this clinical group. The primary purpose of the current study was to evaluate the factor structure and construct validity of the EDE-Q in a large series of bariatric surgery candidates. Methods and Procedures Participants were 337 obese bariatric surgery candidates. Participants completed the EDE-Q and a battery of behavioral and psychological measures. Results Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) produced a 12-item, 4-factor structure of the EDE-Q. The four factors, interpreted as Dietary Restraint, Eating Disturbance, Appearance Concerns, and Shape/Weight Overvaluation, were found to be internally consistent and converged with other relevant measures of psychopathology. Discussion Factor analysis of the EDE-Q in bariatric surgery candidates did not replicate the original subscales but revealed an alternative factor structure. Future research must further evaluate the psychometric properties, including the factor structure, of the EDE-Q in this and other diverse populations and consider means of improving this measure's ability to best assess eating-related pathology in bariatric surgery patients. PMID:18379561
Kocabaş, Tuğbey; Çakır, Deniz; Gülseren, Oğuz; Ay, Feridun; Kosku Perkgöz, Nihan; Sevik, Cem
2018-04-26
The investigation of the thermal transport properties of novel two-dimensional materials is crucially important in order to assess their potential for future technological applications, such as thermoelectric power generation. In this respect, the lattice thermal transport properties of the monolayer structures of group VA elements (P, As, Sb, Bi, PAs, PSb, PBi, AsSb, AsBi, SbBi, P3As1, P3Sb1, P1As3, and As3Sb1) with a black-phosphorus-like puckered structure were systematically investigated by first-principles calculations and an iterative solution of the phonon Boltzmann transport equation. Phosphorene was found to have the highest lattice thermal conductivity, κ, due to its low average atomic mass and strong interatomic bonding character. As expected, anisotropic κ was obtained for all the considered materials, owing to anisotropy in the frequency values and phonon group velocities calculated for these structures. However, the linear correlation determined between the anisotropy in the κ values of P, As, and Sb is significant. The results for the studied compound structures clearly point out that the thermal (electronic) conductivity of pristine monolayers might be suppressed (improved) by alloying them with elements of the same group. For instance, the room-temperature κ of PBi along the armchair direction was predicted to be as low as 1.5 W m^-1 K^-1, whereas that of P was predicted to be 21 W m^-1 K^-1. In spite of the apparent differences in structural and vibrational properties, we revealed an intriguing correlation between the κ values of all the considered materials, κ = c1 + c2/m^2, in particular along the zigzag direction. Furthermore, our calculations on compound structures clearly showed that the thermoelectric potential of these materials can be improved by suppressing their lattice thermal conductivity.
The presence of ultra-low κ values and high electrical conductivity (especially along the armchair direction) makes this class of monolayers promising candidates for thermoelectric applications.
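Because κ = c1 + c2/m^2 is linear in the basis (1, 1/m^2), the constants can be recovered by an ordinary least-squares fit. The sketch below uses synthetic data, not the paper's computed values:

```python
def fit_mass_scaling(masses, kappas):
    """Least-squares fit of kappa = c1 + c2 / m^2, which is linear in
    the basis (1, 1/m^2); solve the 2x2 normal equations directly."""
    xs = [1.0 / m**2 for m in masses]
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sy = sum(kappas)
    sxy = sum(x * y for x, y in zip(xs, kappas))
    det = n * sxx - sx * sx
    c2 = (n * sxy - sx * sy) / det
    c1 = (sy - c2 * sx) / n
    return c1, c2

# Synthetic check: data generated from c1 = 1.5, c2 = 18000 is recovered.
# Masses roughly track P, As, Sb; the kappa values are fabricated.
masses = [31.0, 75.0, 122.0]
kappas = [1.5 + 18000.0 / m**2 for m in masses]
c1, c2 = fit_mass_scaling(masses, kappas)
```

With real calculated κ values the residuals of such a fit would quantify how well the proposed 1/m^2 scaling holds along each crystallographic direction.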
McGraw, Caroline; Abbott, Stephen; Brook, Judy
2018-02-19
Values-based recruitment emerges from the premise that a high degree of value congruence, or the extent to which an individual's values are similar to those of the health organization in which they work, leads to organizational effectiveness. The aim of this evaluation was to explore how candidates and selection panel members experienced and perceived innovative methods of values-based public health nursing student selection. The evaluation was framed by a qualitative exploratory design involving semi-structured interviews and a group exercise. Data were thematically analyzed. Eight semi-structured interviews were conducted with selection panel members. Twenty-two successful candidates took part in a group exercise. The use of photo elicitation interviews and situational judgment questions in the context of selection to a university-run public health nursing educational program was explored. While candidates were ambivalent about the use of photo elicitation interviews, with some misunderstanding the task, selection panel members saw the benefits for improving candidate expression and reducing gaming and deception. Situational interview questions were endorsed by candidates and selection panel members due to their fidelity to real-life problems and the ability of panel members to discern value congruence from candidates' responses. Both techniques offered innovative solutions to candidate selection for entry to the public health nursing education program. © 2018 Wiley Periodicals, Inc.
Interactive computer graphics system for structural sizing and analysis of aircraft structures
NASA Technical Reports Server (NTRS)
Bendavid, D.; Pipano, A.; Raibstein, A.; Somekh, E.
1975-01-01
A computerized system for preliminary sizing and analysis of aircraft wing and fuselage structures was described. The system is based upon repeated application of analytical program modules, which are interactively interfaced and sequence-controlled during the iterative design process with the aid of design-oriented graphics software modules. The entire process is initiated and controlled via low-cost interactive graphics terminals driven by a remote computer in a time-sharing mode.
Automatic Compound Annotation from Mass Spectrometry Data Using MAGMa
Ridder, Lars; van der Hooft, Justin J. J.; Verhoeven, Stefan
2014-01-01
The MAGMa software for automatic annotation of mass spectrometry based fragmentation data was applied to 16 MS/MS datasets of the CASMI 2013 contest. Eight solutions were submitted in category 1 (molecular formula assignments) and twelve in category 2 (molecular structure assignment). The MS/MS peaks of each challenge were matched with in silico generated substructures of candidate molecules from PubChem, resulting in penalty scores that were used for candidate ranking. In 6 of the 12 submitted solutions in category 2, the correct chemical structure obtained the best score, whereas 3 molecules were ranked outside the top 5. All top ranked molecular formulas submitted in category 1 were correct. In addition, we present MAGMa results generated retrospectively for the remaining challenges. Successful application of the MAGMa algorithm required inclusion of the relevant candidate molecules, application of the appropriate mass tolerance and a sufficient degree of in silico fragmentation of the candidate molecules. Furthermore, the effect of the exhaustiveness of the candidate lists and limitations of substructure based scoring are discussed. PMID:26819876
Study of rubella candidate vaccine based on a structurally modified plant virus.
Trifonova, Ekaterina A; Zenin, Vladimir A; Nikitin, Nikolai A; Yurkova, Maria S; Ryabchevskaya, Ekaterina M; Putlyaev, Egor V; Donchenko, Ekaterina K; Kondakova, Olga A; Fedorov, Alexey N; Atabekov, Joseph G; Karpova, Olga V
2017-08-01
A novel rubella candidate vaccine based on a structurally modified plant virus - spherical particles (SPs) - was developed. SPs generated by the thermal remodelling of the tobacco mosaic virus are promising platforms for the development of vaccines. SPs combine unique properties: biosafety, stability, high immunogenicity and the effective adsorption of antigens. We assembled in vitro and characterised complexes (candidate vaccine) based on SPs and the rubella virus recombinant antigen. The candidate vaccine induced a strong humoral immune response against rubella. The IgG isotypes ratio indicated the predominance of IgG1 which plays a key role in immunity to natural rubella infection. The immune response was generally directed against the rubella antigen within the complexes. We suggest that SPs can act as a platform (depot) for the rubella antigen, enhancing specific immune response. Our results demonstrate that SPs-antigen complexes can be an effective and safe candidate vaccine against rubella. Copyright © 2017 Elsevier B.V. All rights reserved.
Development and Feasibility of a Structured Goals of Care Communication Guide.
Bekelman, David B; Johnson-Koenke, Rachel; Ahluwalia, Sangeeta C; Walling, Anne M; Peterson, Jamie; Sudore, Rebecca L
2017-09-01
Discussing goals of care and advance care planning is beneficial, yet how best to integrate goals of care communication into clinical care remains unclear. To develop and determine the feasibility of a structured goals of care communication guide for nurses and social workers. Developmental study with providers in an academic and Veterans Affairs (VA) health system (n = 42) and subsequent pilot testing with patients with chronic obstructive pulmonary disease or heart failure (n = 15) and informal caregivers (n = 4) in a VA health system. During pilot testing, the communication guide was administered, followed by semistructured, open-ended questions about the content and process of communication. Changes to the guide were made iteratively, and subsequent piloting occurred until no additional changes emerged. Provider and patient feedback on the communication guide. Iterative input resulted in the goals of care communication guide. The guide included questions to elicit patient understanding of and attitudes toward the future of illness, clarify values and goals, identify end-of-life preferences, and agree on a follow-up plan. Revisions to guide content and phrasing continued during development and pilot testing. In pilot testing, patients validated the importance of the topic; none said the goals of care discussion should not be conducted. Patients and informal caregivers liked the final guide length (∼30 minutes), felt it flowed well, and found it clear. In this developmental and pilot study, a structured goals of care communication guide was iteratively designed, implemented by nurses and social workers, and found feasible based on administration time and acceptability by patients and providers.
ERIC Educational Resources Information Center
Brown, Julie C.; Crippen, Kent J.
2016-01-01
This study represents a first iteration in the design process of the Growing Awareness Inventory (GAIn), a structured observation protocol for building the awareness of preservice teachers (PSTs) for resources in mathematics and science classrooms that can be used for culturally responsive pedagogy (CRP). The GAIn is designed to develop awareness…
Thirumalai, D; Hyeon, Changbong
2018-06-19
Signal transmission at the molecular level in many biological complexes occurs through allosteric transitions. Allostery describes the responses of a complex to binding of ligands at sites that are spatially well separated from the binding region. We describe the structural perturbation method, based on phonon propagation in solids, which can be used to determine the signal-transmitting allostery wiring diagram (AWD) in large but finite-sized biological complexes. Application to the bacterial chaperonin GroEL-GroES complex shows that the AWD determined from structures also drives the allosteric transitions dynamically. From both a structural and dynamical perspective these transitions are largely determined by formation and rupture of salt-bridges. The molecular description of allostery in GroEL provides insights into its function, which is quantitatively described by the iterative annealing mechanism. Remarkably, in this complex molecular machine, a deep connection is established between the structures, the reaction cycle during which GroEL undergoes a sequence of allosteric transitions, and function, in a self-consistent manner. This article is part of a discussion meeting issue 'Allostery and molecular machines'. © 2018 The Author(s).
An efficient numerical algorithm for transverse impact problems
NASA Technical Reports Server (NTRS)
Sankar, B. V.; Sun, C. T.
1985-01-01
Transverse impact problems in which the elastic and plastic indentation effects are considered involve a nonlinear integral equation for the contact force, which, in practice, is usually solved by an iterative scheme with small increments in time. In this paper, a numerical method is proposed wherein the iterations of the nonlinear problem are separated from the structural response computations. This makes the numerical procedures much simpler and more efficient. The proposed method is applied to some impact problems for which solutions are available, and the results are found to be in good agreement. The effect of the magnitude of the time increment on the results is also discussed.
NASA Astrophysics Data System (ADS)
Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime
2017-09-01
After shutdown of a fission or fusion reactor, the activated structures emit decay photons. For maintenance operations, the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. These schemes are highly developed for fusion applications, and more precisely for the ITER tokamak. This paper presents the rigorous-two-steps scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out; the results are in good agreement with those of the other participants.
Discrete Fourier Transform in a Complex Vector Space
NASA Technical Reports Server (NTRS)
Dean, Bruce H. (Inventor)
2015-01-01
An image-based phase retrieval technique has been developed that can be used on board a space-based iterative transformation system. Image-based wavefront sensing is computationally demanding due to the floating-point nature of the process. The discrete Fourier transform (DFT) calculation is presented in "diagonal" form. By diagonal we mean that a transformation of basis is introduced by an application of the similarity transform of linear algebra. The current method exploits the diagonal structure of the DFT so that parts of the calculation do not have to be repeated at each iteration as the algorithm converges to an acceptable solution for focusing an image.
Assessment of chest CT at CTDIvol less than 1 mGy with iterative reconstruction techniques.
Padole, Atul; Digumarthy, Subba; Flores, Efren; Madan, Rachna; Mishra, Shelly; Sharma, Amita; Kalra, Mannudeep K
2017-03-01
To assess the image quality of chest CT reconstructed with image-based iterative reconstruction (SafeCT; MedicVision®, Tirat Carmel, Israel), adaptive statistical iterative reconstruction (ASIR; GE Healthcare, Waukesha, WI) and model-based iterative reconstruction (MBIR; GE Healthcare, Waukesha, WI) techniques at a CT dose index volume (CTDIvol) of <1 mGy. In an institutional review board-approved study, 25 patients gave written informed consent for the acquisition of three reduced-dose (0.25-, 0.4- and 0.8-mGy) chest CT series after standard-of-care CT (8 mGy) on a 64-channel multidetector CT (MDCT) scanner, with reconstruction using SafeCT, ASIR and MBIR. Two board-certified thoracic radiologists evaluated images from the lowest to the highest dose of the reduced-dose CT series and subsequently the standard-of-care CT. Of the 182 detected lesions, 35 were missed at 0.25 mGy, 24 at 0.4 mGy and 9 at 0.8 mGy with SafeCT, ASIR and MBIR. The most frequently missed lesions were non-calcified lung nodules (NCLNs): 25/112 (<5 mm) at 0.25 mGy, 18/112 (<5 mm) at 0.4 mGy and 3/112 (<4 mm) at 0.8 mGy. Overall, 78%, 84% and 97% of lung nodules were detected at 0.25, 0.4 and 0.8 mGy, respectively, regardless of the iterative reconstruction technique (IRT), and most mediastinal structures were not sufficiently seen at 0.25-0.8 mGy. NCLNs can therefore be missed in chest CT at CTDIvol <1 mGy (0.25, 0.4 and 0.8 mGy) regardless of IRT, while most lung nodules (97%) were detected at a CTDIvol of 0.8 mGy. Advances in knowledge: NCLNs can be missed regardless of IRT in chest CT at CTDIvol <1 mGy. The performance of ASIR, SafeCT and MBIR was similar for lung nodule detection at 0.25, 0.4 and 0.8 mGy.
Some recent experimental results related to nuclear chirality
NASA Astrophysics Data System (ADS)
Timár, J.; Kuti, I.; Sohler, D.; Starosta, K.; Koike, T.; Paul, E. S.
2014-09-01
Detailed band structures of three chiral-candidate nuclei, 134Pr, 132La and 103Rh, have been studied. The aim of the study was twofold: first, to explore the reasons behind the contradiction between the theoretically predicted chirality in these nuclei and the recently observed fingerprints that suggest a non-chiral interpretation of the previously proposed chiral candidate band doublets; second, to search for multiple chiral bands of different types in these nuclei. In 134Pr a new πh11/2νh11/2 band has been observed besides the previously known chiral-candidate πh11/2νh11/2 doublet. This new band and the yrare πh11/2νh11/2 band show the expected features of a chiral doublet structure. This fact, combined with the observed similarity between the band structures of 134Pr and 132La, suggests that chirality might exist in these nuclei. The detailed study of the 103Rh band structure resulted in the observation of two new chiral-doublet-like structures besides the previously known one. This is indicative of the possible existence of a multiple chiral doublet structure in this nucleus.
New Bandwidth Efficient Parallel Concatenated Coding Schemes
NASA Technical Reports Server (NTRS)
Benedetto, S.; Divsalar, D.; Montorsi, G.; Pollara, F.
1996-01-01
We propose a new solution for the parallel concatenation of trellis codes with multilevel amplitude/phase modulations and a suitable iterative decoding structure. Examples are given for a throughput of 2 bits/s/Hz with 8PSK and 16QAM signal constellations.
ERIC Educational Resources Information Center
Slabon, Wayne A.; Richards, Randy L.; Dennen, Vanessa P.
2014-01-01
In this paper, we introduce restorying, a pedagogical approach based on social constructivism that employs successive iterations of rewriting and discussing personal, student-generated, domain-relevant stories to promote conceptual application, critical thinking, and ill-structured problem solving skills. Using a naturalistic, qualitative case…
A discourse on sensitivity analysis for discretely-modeled structures
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Haftka, Raphael T.
1991-01-01
A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally, but not exclusively, aimed at finite element modeled structures. Topics include: selection of finite difference step sizes; special considerations for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A
All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for the direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
NASA Astrophysics Data System (ADS)
Zotos, Euaggelos E.
2018-06-01
The circular Sitnikov problem, where the two primary bodies are prolate or oblate spheroids, is numerically investigated. In particular, the basins of convergence on the complex plane are revealed by using a large collection of numerical methods of several orders. We consider four cases, regarding the value of the oblateness coefficient which determines the nature of the roots (attractors) of the system. For all cases we use the iterative schemes to perform a thorough and systematic classification of the nodes on the complex plane. The distributions of the required iterations, as well as the corresponding probability distributions and their correlations with the basins of convergence, are also discussed. Our numerical computations indicate that most of the iterative schemes provide relatively similar convergence structures on the complex plane. However, for some numerical methods the corresponding basins of attraction are extremely complicated, with highly fractal basin boundaries. Moreover, the efficiency is shown to vary strongly between the numerical methods.
Reducing Design Cycle Time and Cost Through Process Resequencing
NASA Technical Reports Server (NTRS)
Rogers, James L.
2004-01-01
In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
Imaging complex objects using learning tomography
NASA Astrophysics Data System (ADS)
Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri
2018-02-01
Optical diffraction tomography (ODT) can be described as a scattering process through an inhomogeneous medium. Due to multiple scattering, an inherent nonlinearity relates the scattering medium to the scattered field. Multiple scattering is often assumed to be negligible in weakly scattering media, but this assumption becomes invalid as the sample gets more complex, resulting in distorted image reconstructions. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error-reduction scheme and the multi-layer structure of the BPM are similar to neural networks; we therefore refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme, using Mie theory to provide the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data from a biological cell.
Alfvén eigenmode evolution computed with the VENUS and KINX codes for the ITER baseline scenario
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isaev, M. Yu., E-mail: isaev-my@nrcki.ru; Medvedev, S. Yu.; Cooper, W. A.
A new application of the VENUS code is described, which computes alpha particle orbits in the perturbed electromagnetic fields and their resonant interaction with the toroidal Alfvén eigenmodes (TAEs) for the ITER device. The ITER baseline scenario with Q = 10 and a plasma toroidal current of 15 MA is considered as the most important and relevant for the International Tokamak Physics Activity group on energetic particles (ITPA-EP). For this scenario, typical unstable TAE modes with the toroidal index n = 20 have been predicted that are localized in the plasma core near the surface with safety factor q = 1. The spatial structure of ballooning and antiballooning modes has been computed with the ideal MHD code KINX. The linear growth rates and the saturation levels, taking into account the damping effects and the different mode frequencies, have been calculated with the VENUS code for both ballooning and antiballooning TAE modes.
An Automated Baseline Correction Method Based on Iterative Morphological Operations.
Chen, Yunliang; Dai, Liankui
2018-05-01
Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method may also prove useful for the baseline correction of other analytical instrument signals, such as IR spectra and chromatograms.
Iterative reactions of transient boronic acids enable sequential C-C bond formation
NASA Astrophysics Data System (ADS)
Battilocchio, Claudio; Feist, Florian; Hafner, Andreas; Simon, Meike; Tran, Duc N.; Allwood, Daniel M.; Blakemore, David C.; Ley, Steven V.
2016-04-01
The ability to form multiple carbon-carbon bonds in a controlled sequence and thus rapidly build molecular complexity in an iterative fashion is an important goal in modern chemical synthesis. In recent times, transition-metal-catalysed coupling reactions have dominated the development of C-C bond forming processes. A desire to reduce the reliance on precious metals and a need to obtain products with very low levels of metal impurities have brought a renewed focus on metal-free coupling processes. Here, we report the in situ preparation of reactive allylic and benzylic boronic acids, obtained by reacting flow-generated diazo compounds with boronic acids, and describe their application in controlled iterative C-C bond forming reactions. Thus far we have shown the formation of up to three C-C bonds in a sequence, including the final trapping of a reactive boronic acid species with an aldehyde to generate a range of new chemical structures.
Identification of spatially-localized initial conditions via sparse PCA
NASA Astrophysics Data System (ADS)
Dwivedi, Anubhav; Jovanovic, Mihailo
2017-11-01
Principal Component Analysis involves maximization of a quadratic form subject to a quadratic constraint on the initial flow perturbations and it is routinely used to identify the most energetic flow structures. For general flow configurations, principal components can be efficiently computed via power iteration of the forward and adjoint governing equations. However, the resulting flow structures typically have a large spatial support leading to a question of physical realizability. To obtain spatially-localized structures, we modify the quadratic constraint on the initial condition to include a convex combination with an additional regularization term which promotes sparsity in the physical domain. We formulate this constrained optimization problem as a nonlinear eigenvalue problem and employ an inverse power-iteration-based method to solve it. The resulting solution is guaranteed to converge to a nonlinear eigenvector which becomes increasingly localized as our emphasis on sparsity increases. We use several fluids examples to demonstrate that our method indeed identifies the most energetic initial perturbations that are spatially compact. This work was supported by Office of Naval Research through Grant Number N00014-15-1-2522.
Chatterji, Madhabi
2002-01-01
This study examines validity of data generated by the School Readiness for Reforms: Leader Questionnaire (SRR-LQ) using an iterative procedure that combines classical and Rasch rating scale analysis. Following content-validation and pilot-testing, principal axis factor extraction and promax rotation of factors yielded a five factor structure consistent with the content-validated subscales of the original instrument. Factors were identified based on inspection of pattern and structure coefficients. The rotated factor pattern, inter-factor correlations, convergent validity coefficients, and Cronbach's alpha reliability estimates supported the hypothesized construct properties. To further examine unidimensionality and efficacy of the rating scale structures, item-level data from each factor-defined subscale were subjected to analysis with the Rasch rating scale model. Data-to-model fit statistics and separation reliability for items and persons met acceptable criteria. Rating scale results suggested consistency of expected and observed step difficulties in rating categories, and correspondence of step calibrations with increases in the underlying variables. The combined approach yielded more comprehensive diagnostic information on the quality of the five SRR-LQ subscales; further research is continuing.
Selvin, Joseph; Sathiyanarayanan, Ganesan; Lipton, Anuj N.; Al-Dhabi, Naif Abdullah; Valan Arasu, Mariadhas; Kiran, George S.
2016-01-01
Marine actinobacteria producing important biological macromolecules, such as lipopeptide and glycolipid biosurfactants, were analyzed and their potential linkage to type II polyketide synthase (PKS) genes was explored. A unique feature of type II PKS genes is their high amino acid (AA) sequence homology and conserved gene organization. These enzymes mediate the biosynthesis of polyketide natural products of enormous structural complexity and chemical diversity by combinatorial use of various domains. Deciphering how the order of the AA sequences encoded by PKS domains tailors the chemical structure of polyketide analogues therefore remains a great challenge. The present work deals with an in vitro and in silico analysis of type II PKS genes from five actinobacterial species to correlate KS domain architecture with structural features. Our analysis reveals the unique protein domain organization of the iterative type II PKS and KS domains of marine actinobacteria. The findings of this study have implications for metabolic pathway reconstruction and the design of semi-synthetic genomes to achieve rational design of novel natural products. PMID:26903957
Improving cluster-based missing value estimation of DNA microarray data.
Brás, Lígia P; Menezes, José C
2007-06-01
We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing value (MV) estimation in microarray data based on the reuse of estimated data. The method is called iterative KNN imputation (IKNNimpute), as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM, by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining of the MV estimates. More importantly, IKNNimpute has a smaller detrimental effect on the detection of differentially expressed genes.
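A compact sketch of the iterative idea, reusing the most recent estimates in each new round of neighbour searches, is given below. It assumes Euclidean distances and a plain (unweighted) neighbour mean, unlike the weighted scheme of KNNimpute, and the toy matrix and names are illustrative.

```python
import numpy as np

def iknn_impute(X, k=2, n_iter=5):
    # Start from column-mean imputation, then repeatedly re-estimate every
    # missing entry from the k nearest rows of the current completed matrix,
    # so later rounds reuse the values estimated in earlier rounds.
    X = np.asarray(X, dtype=float)
    miss = np.isnan(X)
    filled = np.where(miss, np.nanmean(X, axis=0), X)
    for _ in range(n_iter):
        new = filled.copy()
        for i in np.where(miss.any(axis=1))[0]:
            d = np.linalg.norm(filled - filled[i], axis=1)
            d[i] = np.inf                    # exclude the row itself
            nn = np.argsort(d)[:k]           # k nearest neighbours
            for j in np.where(miss[i])[0]:
                new[i, j] = filled[nn, j].mean()
        filled = new
    return filled

# two clusters of rows; the missing entry should follow its own cluster
X = np.array([[1.0, 1.00],
              [1.1, 1.05],
              [0.9, 0.95],
              [5.0, 5.00],
              [5.1, 5.05],
              [4.9, np.nan]])
filled = iknn_impute(X, k=2)
```

The initial column-mean fill places the incomplete row between the clusters; the later rounds pull it toward its true neighbours, which is the refinement effect the abstract describes.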
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is the much longer image acquisition time (hours per scan, compared to seconds to minutes at a synchrotron), which limits their application to dynamic in situ processes. Therefore, most laboratory X-ray microtomography is restricted to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing image quality, e.g. by reducing the exposure time or the number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithms was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal-to-noise ratio.
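The kind of iterative reconstruction referred to can be illustrated with a classical SIRT-style iteration on a toy nonnegative linear system standing in for the projection operator; the authors' optimized implementation is not reproduced here, and all names and the toy geometry are assumptions of the sketch.

```python
import numpy as np

def sirt(A, y, n_iter=20000, relax=1.0):
    # Simultaneous Iterative Reconstruction Technique (SIRT)-style update for
    # y = A x with a nonnegative system matrix A:
    #   x <- x + relax * C * A^T (R * (y - A x)),
    # where R and C are inverse row and column sums of A, followed by a
    # nonnegativity projection (attenuation coefficients cannot be negative).
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + relax * C * (A.T @ (R * (y - A @ x)))
        x = np.clip(x, 0.0, None)
    return x

rng = np.random.default_rng(0)
A = rng.random((30, 10))       # stand-in for a nonnegative projection matrix
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, 0.5]    # sparse "object"
y = A @ x_true                 # noise-free projection data
x_rec = sirt(A, y)
```

Unlike filtered back-projection, which applies a fixed analytic inverse, the iteration can fold in constraints such as nonnegativity, which is one reason iterative methods tolerate fewer projections and lower counts.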
NASA Astrophysics Data System (ADS)
Domnisoru, L.; Modiga, A.; Gasparotti, C.
2016-08-01
At the ship design stage, the first step of the hull structural assessment is the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D hull offset lines, and involves three interlinked iterative cycles on the floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship girder are obtained. As a numerical study case we have considered a large LPG (liquefied petroleum gas) carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.
Coevolutionary patterning of teeth and taste buds
Bloomquist, Ryan F.; Parnell, Nicholas F.; Phillips, Kristine A.; Fowler, Teresa E.; Yu, Tian Y.; Sharpe, Paul T.; Streelman, J. Todd
2015-01-01
Teeth and taste buds are iteratively patterned structures that line the oro-pharynx of vertebrates. Biologists do not fully understand how teeth and taste buds develop from undifferentiated epithelium or how variation in organ density is regulated. These organs are typically studied independently because of their separate anatomical location in mammals: teeth on the jaw margin and taste buds on the tongue. However, in many aquatic animals like bony fishes, teeth and taste buds are colocalized one next to the other. Using genetic mapping in cichlid fishes, we identified shared loci controlling a positive correlation between tooth and taste bud densities. Genome intervals contained candidate genes expressed in tooth and taste bud fields. sfrp5 and bmper, notable for roles in Wingless (Wnt) and bone morphogenetic protein (BMP) signaling, were differentially expressed across cichlid species with divergent tooth and taste bud density, and were expressed in the development of both organs in mice. Synexpression analysis and chemical manipulation of Wnt, BMP, and Hedgehog (Hh) pathways suggest that a common cichlid oral lamina is competent to form teeth or taste buds. Wnt signaling couples tooth and taste bud density and BMP and Hh mediate distinct organ identity. Synthesizing data from fish and mouse, we suggest that the Wnt-BMP-Hh regulatory hierarchy that configures teeth and taste buds on mammalian jaws and tongues may be an evolutionary remnant inherited from ancestors wherein these organs were copatterned from common epithelium. PMID:26483492
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model; at the same time, the complexity of the model structure is reduced and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVMs) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected with the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
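The scoring idea behind such filter-style selection methods can be sketched with a plain histogram estimate of mutual information between each candidate input and the target (PMI additionally conditions on already-selected inputs, which is omitted here). The variable names and synthetic data are illustrative.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    # Histogram (plug-in) estimate of mutual information, in nats, between
    # two 1-D samples: sum over cells of p(x,y) * log(p(x,y) / (p(x) p(y))).
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
n = 5000
relevant = rng.standard_normal(n)
redundant = relevant + 0.1 * rng.standard_normal(n)   # near-copy of `relevant`
irrelevant = rng.standard_normal(n)                   # independent of target
target = np.sin(relevant) + 0.1 * rng.standard_normal(n)
scores = {"relevant": mutual_info(relevant, target),
          "redundant": mutual_info(redundant, target),
          "irrelevant": mutual_info(irrelevant, target)}
```

Ranking by such a score keeps `relevant` and discards `irrelevant`; detecting that `redundant` adds nothing beyond `relevant` is exactly what the partial (conditional) variants such as PMI address.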
Small Molecule Deubiquitinase Inhibitors Promote Macrophage Anti-Infective Capacity
Charbonneau, Marie-Eve; Gonzalez-Hernandez, Marta J.; Showalter, Hollis D.; Donato, Nicholas J.; Wobus, Christiane E.; O’Riordan, Mary X. D.
2014-01-01
The global spread of anti-microbial resistance requires urgent attention, and diverse alternative strategies have been suggested to address this public health concern. Host-directed immunomodulatory therapies represent one approach that could reduce selection for resistant bacterial strains. Recently, the small molecule deubiquitinase inhibitor WP1130 was reported as a potential anti-infective drug against important human food-borne pathogens, notably Listeria monocytogenes and noroviruses. Utilization of WP1130 itself is limited due to poor solubility, but given the potential of this new compound, we initiated an iterative rational design approach to synthesize new derivatives with increased solubility that retained anti-infective activity. Here, we test a small library of novel synthetic molecules based on the structure of the parent compound, WP1130, for anti-infective activity in vitro. Our studies identify a promising candidate, compound 9, which reduced intracellular growth of L. monocytogenes at concentrations that caused minimal cellular toxicity. Compound 9 itself had no bactericidal activity and only modestly slowed Listeria growth rate in liquid broth culture, suggesting that this drug acts as an anti-infective compound by modulating host-cell function. Moreover, this new compound also showed anti-infective activity against murine norovirus (MNV-1) and human norovirus, using the Norwalk virus replicon system. This small molecule inhibitor may provide a chemical platform for further development of therapeutic deubiquitinase inhibitors with broad-spectrum anti-infective activity. PMID:25093325
Coevolutionary patterning of teeth and taste buds.
Bloomquist, Ryan F; Parnell, Nicholas F; Phillips, Kristine A; Fowler, Teresa E; Yu, Tian Y; Sharpe, Paul T; Streelman, J Todd
2015-11-03
Teeth and taste buds are iteratively patterned structures that line the oro-pharynx of vertebrates. Biologists do not fully understand how teeth and taste buds develop from undifferentiated epithelium or how variation in organ density is regulated. These organs are typically studied independently because of their separate anatomical location in mammals: teeth on the jaw margin and taste buds on the tongue. However, in many aquatic animals like bony fishes, teeth and taste buds are colocalized one next to the other. Using genetic mapping in cichlid fishes, we identified shared loci controlling a positive correlation between tooth and taste bud densities. Genome intervals contained candidate genes expressed in tooth and taste bud fields. sfrp5 and bmper, notable for roles in Wingless (Wnt) and bone morphogenetic protein (BMP) signaling, were differentially expressed across cichlid species with divergent tooth and taste bud density, and were expressed in the development of both organs in mice. Synexpression analysis and chemical manipulation of Wnt, BMP, and Hedgehog (Hh) pathways suggest that a common cichlid oral lamina is competent to form teeth or taste buds. Wnt signaling couples tooth and taste bud density and BMP and Hh mediate distinct organ identity. Synthesizing data from fish and mouse, we suggest that the Wnt-BMP-Hh regulatory hierarchy that configures teeth and taste buds on mammalian jaws and tongues may be an evolutionary remnant inherited from ancestors wherein these organs were copatterned from common epithelium.
High temperature arc-track resistant aerospace insulation
NASA Technical Reports Server (NTRS)
Dorogy, William
1994-01-01
The topics are presented in viewgraph form and include the following: high temperature aerospace insulation; Foster-Miller approach to develop a 300 C rated, arc-track resistant aerospace insulation; advantages and disadvantages of key structural features; summary goals and achievements of the phase 1 program; performance goals for selected materials; materials under evaluation; molecular structures of candidate polymers; candidate polymer properties; film properties; and a detailed program plan.
Barth, Vanessa; Need, Anne
2014-12-17
Nuclear medicine imaging biomarker applications are limited by the radiotracers available. Radiotracers enable the measurement of target engagement, or occupancy in relation to plasma exposure. These tracers can also be used as pharmacodynamic biomarkers to demonstrate functional consequences of binding a target. More recently, radiotracers have also been used for patient tailoring in Alzheimer's disease seen with amyloid imaging. Radiotracers for the central nervous system (CNS) are challenging to identify, as they require a unique intersection of multiple properties. Recent advances in tangential technologies, along with the use of iterative learning for the purposes of deriving in silico models, have opened up additional opportunities to identify radiotracers. Mass spectral technologies and in silico modeling have made it possible to measure and predict in vivo characteristics of molecules to indicate potential tracer performance. By analyzing these data alongside other measures, it is possible to delineate guidelines to increase the likelihood of selecting compounds that can perform as radiotracers or serve as the best starting point to develop a radiotracer following additional structural modification. The application of mass spectrometry based technologies is an efficient way to evaluate compounds as tracers in vivo, but more importantly enables the testing of potential tracers that have either no label site or complex labeling chemistry which may deter assessment by traditional means; therefore, use of this technology allows for more rapid iterative learning. The ability to differentially distribute toward target rich tissues versus tissue with no/less target present is a unique defining feature of a tracer. 
By testing nonlabeled compounds in vivo and analyzing tissue levels by LC-MS/MS, rapid assessment of a compound's ability to differentially distribute in a manner consistent with target expression biology guides the focus of chemistry resources for both designing and labeling tracer candidates. LC-MS/MS has only recently been used for de novo tracer identification; however, this connection of mass spectral technology to imaging has initiated engagement from a wider community that brings diverse backgrounds into the tracer discovery arena.
NASA Astrophysics Data System (ADS)
Wang, Tonghe; Zhu, Lei
2016-09-01
Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets acquired with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme, which requires one full scan and a second sparse-view scan, for potential reduction in the imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels; the matrix is assumed unchanged for the second CT image since both DECT scans are performed on the same object. The second CT image from the reduced projections is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its version filtered by the similarity matrix, under a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix, we refer to the algorithm as structure-preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR) and prior-image-constrained compressed sensing (PICCS). SPIR with a second 10-view scan reduces the image noise standard deviation by one order of magnitude while retaining the spatial resolution of the full-view FBP image. SPIR substantially improves on TVR in the reconstruction accuracy of a 10-view scan, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR at 50- and 20-view scans in spatial resolution, with the spatial frequency at a modulation transfer function value of 10% higher by an average factor of 4. Compared with the 20-view PICCS result, the SPIR image has 7 times lower noise standard deviation with similar spatial resolution. 
The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.
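The core update of SPIR can be illustrated on a toy 1-D problem. This sketch is a deliberate simplification of the algorithm described above: the similarity matrix is a row-normalized bilateral-style weight computed from the prior (full-scan) image, and the TV-of-difference minimization is replaced by plain similarity-matrix smoothing after each gradient step on the data fidelity term; all sizes, values and the parameter sigma are invented.

```python
import math

def similarity_matrix(prior, sigma=0.5):
    """Row-normalized bilateral-style weights from the first (full-view)
    image: pixels whose prior values are close share weight."""
    n = len(prior)
    W = []
    for i in range(n):
        row = [math.exp(-(prior[i] - prior[j]) ** 2 / (2 * sigma ** 2))
               for j in range(n)]
        s = sum(row)
        W.append([w / s for w in row])
    return W

def spir_sketch(A, b, prior, alpha=0.2, iters=300):
    """Toy structure-preserving reconstruction: alternate a gradient step
    on the data fidelity ||Ax - b||^2 with smoothing by the similarity
    matrix (a stand-in for the TV-of-difference minimization)."""
    W = similarity_matrix(prior)
    n, m = len(prior), len(b)
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        y = [x[j] - alpha * g[j] for j in range(n)]
        x = [sum(W[i][j] * y[j] for j in range(n)) for i in range(n)]
    return x
```

With two "views" of a four-pixel object, the reconstruction keeps the piecewise structure of the prior (pixels 0-1 and 2-3 stay equal) while honoring the measured sums.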
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budaev, V. P., E-mail: budaev@mail.ru; Martynenko, Yu. V.; Khimchenko, L. N.
Targets made of ITER-grade 316L(N)-IG stainless steel and Russian-grade 12Cr18Ni10Ti stainless steel with a close composition were exposed at the QSPA-T plasma gun to plasma photonic radiation pulses simulating the conditions of disruption mitigation in ITER. After a large number of pulses, modification of the stainless-steel surface was observed, such as the formation of a wavy structure, irregular roughness, and cracks on the target surface. X-ray and optical microscopy analyses of the targets revealed changes in the orientation and dimensions of crystallites (grains) over a depth of up to 20 μm for 316L(N)-IG stainless steel after 200 pulses and up to 40 μm for 12Cr18Ni10Ti stainless steel after 50 pulses, which is significantly larger than the depth of the layer melted in one pulse (∼10 μm). In a series of 200 tests of ITER-grade 316L(N)-IG stainless steel, a linear increase in the height of irregularities (roughness) with increasing number of pulses, at a rate of up to ∼1 μm per pulse, was observed. No alteration in the chemical composition of the stainless-steel surface was revealed in the series of tests. A model is developed that describes the formation of wavy irregularities on the melted metal surface with allowance for the nonlinear stage of instability of the melted layer with a vapor/plasma flow above it. A decisive factor in this case is the viscous flow of the melted metal from the troughs to the tops of the wavy structure. The model predicts saturation of the growth of the wavy structure when its amplitude becomes comparable with its wavelength. Approaches to describing the observed stochastic relief and roughness of the stainless-steel surface formed in the series of tests are considered. The recurrence of the melting-solidification process, in which mechanisms of hill growth compete with the spreading of material from the hills, can result in the formation of a stochastic relief.
2.5D transient electromagnetic inversion with OCCAM method
NASA Astrophysics Data System (ADS)
Li, R.; Hu, X.
2016-12-01
In the application of the time-domain electromagnetic method (TEM), multidimensional inversion schemes have been applied for imaging over the past few decades to overcome the large errors produced by 1D model inversion when the subsurface structure is complex. The current mainstream multidimensional inversion for EM data, with the finite-difference time-domain (FDTD) forward method, is mainly implemented by Nonlinear Conjugate Gradient (NLCG). However, the convergence rate of NLCG depends heavily on the Lagrange multiplier, and the method may fail to converge. We use the OCCAM inversion method to avoid this weakness; OCCAM inversion is proven to be a more stable and reliable method for imaging the subsurface 2.5D electrical conductivity. Firstly, we simulate the 3D transient EM fields governed by Maxwell's equations with the FDTD method. Secondly, we use the OCCAM inversion scheme, with an appropriate objective error functional that we established, to image the 2.5D structure. A data-space OCCAM inversion (DASOCC) strategy based on the OCCAM scheme is also given in this paper. The sensitivity matrix is calculated with the method of time-integrated back-propagated fields. Imaging results for the example model shown in Fig. 1 demonstrate that the OCCAM scheme is an efficient inversion method for TEM with the FDTD method; the inversion converges within a few iterations. Summarizing the imaging process, we draw the following conclusions. Firstly, 2.5D imaging in the FDTD system with OCCAM inversion yields the desired imaging results for the resistivity structure in a homogeneous half-space. Secondly, the imaging results usually do not over-depend on the initial model, but the number of iterations can be reduced markedly if the background resistivity of the initial model is close to the true model; it is therefore better to set the initial model based on other geologic information in applications. 
When the background resistivity fits the true model well, imaging the anomalous body requires only a few iteration steps. Finally, vertical boundaries are imaged more slowly than horizontal boundaries.
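The Occam step itself is easy to state for a linear forward operator. The sketch below keeps only that core idea, scanning the Lagrange multiplier from large (smooth) to small and accepting the smoothest model that reaches a target rms misfit; the FDTD forward modeling, back-propagated sensitivities and data-space (DASOCC) formulation of the paper are not reproduced, and the operator, data and multiplier grid are invented.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def occam_linear(J, d, target_rms, lambdas):
    """One Occam step for a linear forward problem: scan the Lagrange
    multiplier from large (smooth) to small (rough) and return the
    smoothest model whose rms data misfit reaches the target."""
    rows, n = len(J), len(J[0])
    D = [[0.0] * n for _ in range(n - 1)]      # first-difference roughness operator
    for i in range(n - 1):
        D[i][i], D[i][i + 1] = -1.0, 1.0
    JtJ = [[sum(J[k][i] * J[k][j] for k in range(rows)) for j in range(n)] for i in range(n)]
    DtD = [[sum(D[k][i] * D[k][j] for k in range(n - 1)) for j in range(n)] for i in range(n)]
    Jtd = [sum(J[k][i] * d[k] for k in range(rows)) for i in range(n)]
    m, lam, rms = None, None, None
    for lam in sorted(lambdas, reverse=True):
        A = [[JtJ[i][j] + lam * DtD[i][j] for j in range(n)] for i in range(n)]
        m = solve(A, Jtd)
        pred = [sum(J[k][j] * m[j] for j in range(n)) for k in range(rows)]
        rms = (sum((p - di) ** 2 for p, di in zip(pred, d)) / rows) ** 0.5
        if rms <= target_rms:
            break
    return m, lam, rms
```

In the full nonlinear scheme this step is repeated, relinearizing about the current model at each iteration.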
The Question Asking Skills of Preschool Teacher Candidates: Turkey and America Example
ERIC Educational Resources Information Center
Bay, D. Neslihan
2016-01-01
Question asking is an important skill that teachers should use during class activities. Teachers need to get used to this ability while they are teacher candidates. The aim of this research is to identify the cognitive taxonomy and the structure of the questions asked by the candidate of preschool teachers and to compare the questioning skills of…
NASA Astrophysics Data System (ADS)
Maheshwari, A.; Pathak, H. A.; Mehta, B. K.; Phull, G. S.; Laad, R.; Shaikh, M. S.; George, S.; Joshi, K.; Khan, Z.
2017-04-01
The ITER Vacuum Vessel (VV) is a torus-shaped, double-wall structure. The space between the double walls of the VV is filled with In-Wall Shielding Blocks (IWS) and water. The main purpose of the IWS is to provide neutron shielding during ITER plasma operation and to reduce the ripple of the toroidal magnetic field (TF). Although the IWS blocks will be submerged in water between the walls of the VV, the outgassing rate (OGR) of IWS materials plays a significant role in leak detection for the Vacuum Vessel. The thermal outgassing rate of a material depends critically on its surface roughness. During leak detection using an RGA-equipped leak detector and helium tracer gas, there is a spill-over of mass 3 and mass 2 into mass 4, which creates a background reading. The helium background also has a contribution from hydrogen, so it is necessary to ensure a low hydrogen OGR. To achieve an effective leak test, a background below 1 × 10-8 mbar l s-1 is required, and hence the maximum outgassing rate of IWS materials should comply with the maximum rate required for hydrogen, i.e. 1 × 10-10 mbar l s-1 cm-2 at room temperature. As IWS materials are special materials developed for the ITER project, it is necessary to ensure that their outgassing rates comply with this requirement. There is also a possibility of gases diffusing into the material during production. Therefore, to validate the production process of the materials as well as the manufacturing of the final product, three coupons of each IWS material were manufactured with the same technique used in manufacturing the IWS blocks. The manufacturing records of these coupons have been approved by the ITER International Organization (ITER-IO). The outgassing rates of these coupons were measured at room temperature and found to be within the acceptable limit for obtaining the required helium background. 
On the basis of these measurements, test reports have been generated and approved by the IO. This paper describes the preparation, characteristics and cleaning procedure of the samples, the measurement system, and the outgassing rate measurements performed to ensure accurate leak detection.
Chevron beam dump for ITER edge Thomson scattering system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yatsuka, E.; Hatae, T.; Bassan, M.
This paper contains the design of the beam dump for the ITER edge Thomson scattering system and mainly concerns its lifetime under the harsh thermal and electromagnetic loads as well as the tight space allocation. The lifetime was estimated from the multi-pulse laser-induced damage threshold. In order to extend its lifetime, the structure of the beam dump was optimized. A number of bent sheets aligned parallel in the beam dump form a shape called a chevron, which enables it to avoid the concentration of the incident laser pulse energy. The chevron beam dump is expected to withstand thermal loads due to nuclear heating, radiation from the plasma, and numerous incident laser pulses throughout the entire ITER project with a reasonable margin for the peak factor of the beam profile. Structural analysis was also carried out for electromagnetic loads during a disruption. Moreover, detailed issues for more accurate assessments of the beam dump's lifetime are clarified. Variation of the bi-directional reflection distribution function (BRDF) due to erosion by or contamination from neutral particles derived from the plasma is one of the most critical issues that needs to be resolved. In this paper, the BRDF was assumed, and the total amount of stray light and the absorbed laser energy profile on the beam dump were evaluated.
MUSIC algorithm for imaging of a sound-hard arc in the limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2017-07-01
The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to discover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
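The MUSIC imaging function itself is compact. The sketch below assumes a rank-one (single point scatterer) signal subspace, uses plane-wave test vectors over a limited arc of observation directions in place of the paper's far-field operator and Bessel-series structure, and invents all geometry and parameters.

```python
import math

def steering(x, dirs, k):
    """Unit-norm test vector with plane-wave entries exp(i k theta . x);
    a stand-in for the test vectors analyzed in the paper."""
    M = len(dirs)
    return [complex(math.cos(k * (dx * x[0] + dy * x[1])),
                    math.sin(k * (dx * x[0] + dy * x[1]))) / math.sqrt(M)
            for dx, dy in dirs]

def dominant_eigvec(K, iters=200):
    """Power iteration for the leading eigenvector of a Hermitian matrix."""
    n = len(K)
    v = [complex(1.0, 0.0)] * n
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        nrm = math.sqrt(sum(abs(c) ** 2 for c in w))
        v = [c / nrm for c in w]
    return v

def music_image(K, points, dirs, k):
    """MUSIC pseudo-spectrum 1/||P_noise a(x)||^2: large where the test
    vector lies (nearly) in the signal subspace, i.e. at scatterers."""
    v = dominant_eigvec(K)   # rank-1 signal subspace assumed
    out = []
    for x in points:
        a = steering(x, dirs, k)
        ip = sum(ai.conjugate() * vi for ai, vi in zip(a, v))
        out.append(1.0 / max(1.0 - abs(ip) ** 2, 1e-12))
    return out
```

Scanning a grid of test points, the pseudo-spectrum blows up at the scatterer location even with the limited aperture.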
Direct Estimation of Structure and Motion from Multiple Frames
1990-03-01
sequential frames in an image sequence. As a consequence, the information that can be extracted from a single optical flow field is limited to a snapshot of...researchers have developed techniques that extract motion and structure information without computation of the optical flow. Best known are the "direct...operated iteratively on a sequence of images to recover structure. It required feature extraction and matching. Broida and Chellappa [9] suggested the use of
NASA Astrophysics Data System (ADS)
Bryson, Dean Edward
A model's level of fidelity may be defined as its accuracy in faithfully reproducing a quantity or behavior of interest of a real system. Increasing the fidelity of a model often goes hand in hand with increasing its cost in terms of time, money, or computing resources. The traditional aircraft design process relies upon low-fidelity models for expedience and resource savings. However, the reduced accuracy and reliability of low-fidelity tools often lead to the discovery of design defects or inadequacies late in the design process. These deficiencies result either in costly changes or the acceptance of a configuration that does not meet expectations. The unknown opportunity cost is the discovery of superior vehicles that leverage phenomena unknown to the designer and not illuminated by low-fidelity tools. Multifidelity methods attempt to blend the increased accuracy and reliability of high-fidelity models with the reduced cost of low-fidelity models. In building surrogate models, where mathematical expressions are used to cheaply approximate the behavior of costly data, low-fidelity models may be sampled extensively to resolve the underlying trend, while high-fidelity data are reserved to correct inaccuracies at key locations. Similarly, in design optimization a low-fidelity model may be queried many times in the search for new, better designs, with a high-fidelity model being exercised only once per iteration to evaluate the candidate design. In this dissertation, a new multifidelity, gradient-based optimization algorithm is proposed. It differs from the standard trust region approach in several ways, stemming from the new method maintaining an approximation of the inverse Hessian, that is the underlying curvature of the design problem. 
Whereas the typical trust region approach performs a full sub-optimization using the low-fidelity model at every iteration, the new technique finds a suitable descent direction and focuses the search along it, reducing the number of low-fidelity evaluations required. This narrowing of the search domain also alleviates the burden on the surrogate model corrections between the low- and high-fidelity data. Rather than requiring the surrogate to be accurate in a hyper-volume bounded by the trust region, the model needs only to be accurate along the forward-looking search direction. Maintaining the approximate inverse Hessian also allows the multifidelity algorithm to revert to high-fidelity optimization at any time. In contrast, the standard approach has no memory of the previously-computed high-fidelity data. The primary disadvantage of the proposed algorithm is that it may require modifications to the optimization software, whereas standard optimizers may be used as black-box drivers in the typical trust region method. A multifidelity, multidisciplinary simulation of aeroelastic vehicle performance is developed to demonstrate the optimization method. The numerical physics models include body-fitted Euler computational fluid dynamics; linear, panel aerodynamics; linear, finite-element computational structural mechanics; and reduced, modal structural bases. A central element of the multifidelity, multidisciplinary framework is a shared parametric, attributed geometric representation that ensures the analysis inputs are consistent between disciplines and fidelities. The attributed geometry also enables the transfer of data between disciplines. The new optimization algorithm, a standard trust region approach, and a single-fidelity quasi-Newton method are compared for a series of analytic test functions, using both polynomial chaos expansions and kriging to correct discrepancies between fidelity levels of data. 
In the aggregate, the new method requires fewer high-fidelity evaluations than the trust region approach in 51% of cases, and the same number of evaluations in 18%. The new approach also requires fewer low-fidelity evaluations, by up to an order of magnitude, in almost all cases. The efficacy of both multifidelity methods compared to single-fidelity optimization depends significantly on the behavior of the high-fidelity model and the quality of the low-fidelity approximation, though savings are realized in a large number of cases. The multifidelity algorithm is also compared to the single-fidelity quasi-Newton method for complex aeroelastic simulations. The vehicle design problem includes variables for planform shape, structural sizing, and cruise condition with constraints on trim and structural stresses. Considering the objective function reduction versus computational expenditure, the multifidelity process performs better in three of four cases in early iterations. However, the enforcement of a contracting trust region slows the multifidelity progress. Even so, leveraging the approximate inverse Hessian, the optimization can be seamlessly continued using high-fidelity data alone. Ultimately, the proposed new algorithm produced better designs in all four cases. Investigating the return on investment in terms of design improvement per computational hour confirms that the multifidelity advantage is greatest in early iterations, and managing the transition to high-fidelity optimization is critical.
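The inverse-Hessian bookkeeping the dissertation relies on is the standard BFGS update. Below is a minimal sketch on a quadratic objective with exact line search; the multifidelity machinery, surrogates and trust-region comparison are not reproduced, and the test problem is invented.

```python
def bfgs_quadratic(A, b, iters=10, tol=1e-10):
    """BFGS with exact line search on f(x) = 0.5 x^T A x - b^T x,
    maintaining the inverse-Hessian approximation H explicitly."""
    n = len(b)
    x = [0.0] * n
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # H0 = I
    def mv(M, v): return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    def dot(u, v): return sum(a * c for a, c in zip(u, v))
    g = [gi - bi for gi, bi in zip(mv(A, x), b)]               # gradient A x - b
    for _ in range(iters):
        if dot(g, g) ** 0.5 < tol:
            break
        p = [-pi for pi in mv(H, g)]                           # quasi-Newton direction
        Ap = mv(A, p)
        alpha = -dot(g, p) / dot(p, Ap)                        # exact step for a quadratic
        s = [alpha * pi for pi in p]
        x = [xi + si for xi, si in zip(x, s)]
        g_new = [gi - bi for gi, bi in zip(mv(A, x), b)]
        y = [a - c for a, c in zip(g_new, g)]
        rho = 1.0 / dot(y, s)
        # H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
        I_sy = [[(i == j) - rho * s[i] * y[j] for j in range(n)] for i in range(n)]
        HI = [[sum(I_sy[i][k] * H[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
        H = [[sum(HI[i][k] * ((k == j) - rho * y[k] * s[j]) for k in range(n)) + rho * s[i] * s[j]
              for j in range(n)] for i in range(n)]
        g = g_new
    return x, H
```

On a quadratic, H converges to the true inverse Hessian, which is the property that lets an optimization be seamlessly continued with high-fidelity data alone.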
Interdisciplinary Research: Performance and Policy Issues.
ERIC Educational Resources Information Center
Rossini, Frederick A.; Porter, Alan L.
1981-01-01
Successful interdisciplinary research performance, it is suggested, depends on such structural and process factors as leadership, team characteristics, study bounding, iteration, communication patterns, and epistemological factors. Appropriate frameworks for socially organizing the development of knowledge such as common group learning, modeling,…
Findeisen, Peter; Röckel, Matthias; Nees, Matthias; Röder, Christian; Kienle, Peter; Von Knebel Doeberitz, Magnus; Kalthoff, Holger; Neumaier, Michael
2008-11-01
The presence of tumor cells in peripheral blood is being regarded increasingly as a clinically relevant prognostic factor for colorectal cancer patients. Current molecular methods are very sensitive, but due to low specificity their diagnostic value is limited. This study was undertaken in order to systematically identify and validate new colorectal cancer (CRC) marker genes for improved detection of minimal residual disease in peripheral blood mononuclear cells of colorectal cancer patients. Marker genes with upregulated gene expression in colorectal cancer tissue and cell lines were identified using microarray experiments and publicly available gene expression data. A systematic iterative approach was used to reduce a set of 346 candidate genes, reportedly associated with CRC, to a selection of candidate genes that were then further validated by relative quantitative real-time RT-PCR. The analytical sensitivity of the RT-PCR assays was determined by spiking experiments with CRC cells. Diagnostic sensitivity as well as specificity was tested on a group consisting of 18 CRC patients compared to 12 individuals without malignant disease. From a total of 346 screened genes, only serine (or cysteine) proteinase inhibitor, clade B (ovalbumin), member 5 (SERPINB5) showed significantly elevated transcript levels in peripheral venous blood specimens of tumor patients when compared to the nonmalignant control group. These results were confirmed by analysis of an enlarged collective consisting of 63 CRC patients and 36 control individuals without malignant disease. In conclusion, SERPINB5 seems to be a promising marker for detection of circulating tumor cells in peripheral blood of colorectal cancer patients.
2009-01-01
Background Soybeans grown in the upper Midwestern United States often suffer from iron deficiency chlorosis, which results in yield loss at the end of the season. To better understand the effect of iron availability on soybean yield, we identified genes in two near isogenic lines with changes in expression patterns when plants were grown in iron sufficient and iron deficient conditions. Results Transcriptional profiles of soybean (Glycine max, L. Merr) near isogenic lines Clark (PI548553, iron efficient) and IsoClark (PI547430, iron inefficient) grown under Fe-sufficient and Fe-limited conditions were analyzed and compared using the Affymetrix® GeneChip® Soybean Genome Array. There were 835 candidate genes in the Clark (PI548553) genotype and 200 candidate genes in the IsoClark (PI547430) genotype putatively involved in soybean's iron stress response. Of these candidate genes, fifty-eight genes in the Clark genotype were identified with a genetic location within known iron efficiency QTL and 21 in the IsoClark genotype. The arrays also identified 170 single feature polymorphisms (SFPs) specific to either Clark or IsoClark. A sliding window analysis of the microarray data and the 7X genome assembly coupled with an iterative model of the data showed the candidate genes are clustered in the genome. An analysis of 5' untranslated regions in the promoter of candidate genes identified 11 conserved motifs in 248 differentially expressed genes, all from the Clark genotype, representing 129 clusters identified earlier, confirming the cluster analysis results. Conclusion These analyses have identified the first genes with expression patterns that are affected by iron stress and are located within QTL specific to iron deficiency stress. The genetic location and promoter motif analysis results support the hypothesis that the differentially expressed genes are co-regulated. 
The combined results of all analyses lead us to postulate iron inefficiency in soybean is a result of a mutation in a transcription factor(s), which controls the expression of genes required in inducing an iron stress response. PMID:19678937
An Algorithm for the Mixed Transportation Network Design Problem
Liu, Xinyu; Chen, Qun
2016-01-01
This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical program with equilibrium constraints (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem: a group of variables (discrete/continuous) is fixed while another group of variables (continuous/discrete) is optimized, alternately; the problem is thus transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until it converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without a budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with a budget constraint, however, the result depends on the selection of initial values, which leads to different local optimal solutions. Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803
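The alternating structure of DDIA can be sketched with a deliberately tiny surrogate. This example replaces the Wardrop user-equilibrium lower level with a single closed-form congestion term and the CNDP step with a calculus solution; the two candidate links, their costs and the delay function are all invented for illustration.

```python
import itertools
import math

BUILD_COST = {0: 3.0, 1: 1.0}   # hypothetical build costs of two candidate links
EXPANSION_COST = 1.5            # per unit of continuous capacity expansion

def total_cost(y, x):
    """Upper-level objective: a toy congestion delay plus investment cost."""
    capacity = 1.0 + x + 2.0 * y[0] + 1.0 * y[1]
    return 10.0 / capacity + EXPANSION_COST * x + sum(BUILD_COST[i] for i in (0, 1) if y[i])

def best_x(y):
    """CNDP analogue: optimal continuous expansion from d(cost)/dx = 0."""
    c = 2.0 * y[0] + 1.0 * y[1]
    return max(0.0, math.sqrt(10.0 / EXPANSION_COST) - 1.0 - c)

def ddia(max_iter=20):
    """Dimension-down iteration: fix y to solve for x (continuous step),
    then fix x and enumerate the discrete designs (discrete step)."""
    y, x = (0, 0), 0.0
    for _ in range(max_iter):
        x = best_x(y)
        y_new = min(itertools.product((0, 1), repeat=2),
                    key=lambda yy: total_cost(yy, x))
        if y_new == y:
            break                # fixed point reached
        y = y_new
    return y, x, total_cost(y, x)
```

Fixing y makes the continuous subproblem trivial; fixing x reduces the discrete side to enumeration; the alternation stops at a fixed point, mirroring the CNDP/DNDP alternation of the paper.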
A coarse-to-fine kernel matching approach for mean-shift based visual tracking
NASA Astrophysics Data System (ADS)
Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.
2009-03-01
Mean shift is an efficient pattern-matching algorithm, widely used in visual tracking because it does not need to search the whole image space. It uses gradient-based optimization to reduce feature-matching time and achieve rapid object localization, with the Bhattacharyya coefficient as the similarity measure between the object template and candidate templates. This paper presents a mean-shift algorithm based on a coarse-to-fine search for the best kernel match, addressing object tracking under large inter-frame motion. If the object regions in two consecutive frames are far apart and do not overlap in image space, the traditional mean-shift method converges only to a local optimum within the old object window, so the true position cannot be recovered and tracking fails. The proposed algorithm first uses the similarity measure to locate the moving object coarsely, then applies mean-shift iterations to refine the estimate to the local optimum, successfully tracking objects with large motion. Experimental results show good accuracy and speed compared with the background-weighted histogram algorithm in the literature.
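The Bhattacharyya similarity at the heart of the matching step can be sketched as follows. This is a minimal illustration: the histogram quantization and pixel values are made up, and the kernel weighting that mean-shift tracking normally applies inside the candidate window is omitted for brevity.

```python
import math

def normalized_hist(values, bins=8, vmax=256):
    """Quantize pixel values into a normalized histogram (the object model)."""
    h = [0.0] * bins
    for v in values:
        h[min(v * bins // vmax, bins - 1)] += 1.0
    total = sum(h)
    return [c / total for c in h]

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms:
    1.0 for identical distributions, 0.0 for non-overlapping ones."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

template = normalized_hist([10, 12, 200, 201, 202])   # toy object model
same = bhattacharyya(template, template)              # identical -> ~1.0
far  = bhattacharyya(template, normalized_hist([100, 101]))  # disjoint bins
```

In the coarse-to-fine scheme, this coefficient is evaluated at coarse candidate locations to find a rough starting window, after which mean-shift iterations refine the position.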
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: FISTA achieves an O(1/k²) convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods with an O(1/k) rate. 
We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
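The core FISTA iteration referenced above, with its momentum (extrapolation) step, can be sketched on a toy ℓ1-regularized least-squares problem. The small dense matrix here is merely a stand-in for the CBCT system model, and the paper's OS-SART subproblem is replaced by a plain gradient step, so this is a sketch of generic FISTA, not of the accelerated variants the paper proposes.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (the shrinkage-thresholding step)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with FISTA momentum."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Tiny separable example: the minimizer can be checked coordinate-wise.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
x = fista(A, b, lam=0.1)
```

The `t` update and the extrapolation from `x` to `y` are what lift the O(1/k) objective-error rate of plain ISTA to the O(1/k²) rate mentioned in the abstract.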
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A
2016-04-01