Multilevel Iterative Methods in Nonlinear Computational Plasma Physics
NASA Astrophysics Data System (ADS)
Knoll, D. A.; Finn, J. M.
1997-11-01
Many applications in computational plasma physics involve the implicit numerical solution of coupled systems of nonlinear partial differential equations or integro-differential equations. Such problems arise in MHD, systems of Vlasov-Fokker-Planck equations, and the edge plasma fluid equations. We have been developing matrix-free Newton-Krylov algorithms for such problems and have applied these algorithms to the edge plasma fluid equations [1,2] and to the Vlasov-Fokker-Planck equation [3]. Recently we have found that with increasing grid refinement, the number of Krylov iterations required per Newton iteration grows unmanageably [4]. This has led us to study multigrid methods as a means of preconditioning matrix-free Newton-Krylov methods. In this poster we give details of the general multigrid-preconditioned Newton-Krylov algorithm, as well as algorithm performance details on problems of interest in magnetohydrodynamics and edge plasma physics. Work supported by US DoE. [1] Knoll and McHugh, J. Comput. Phys. 116, 281 (1995). [2] Knoll and McHugh, Comput. Phys. Comm. 88, 141 (1995). [3] Mousseau and Knoll, J. Comput. Phys. (1997, to appear). [4] Knoll and McHugh, SIAM J. Sci. Comput. 19 (1998, to appear).
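As an illustration of the approach sketched in this abstract, here is a minimal matrix-free Newton-Krylov step in Python, assuming a user-supplied residual function F(u) on numpy arrays and, optionally, a multigrid V-cycle exposed as a preconditioner callable; the names, epsilon and tolerances are illustrative, not taken from the poster.

```python
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(residual, u, v_cycle=None, eps=1e-7):
    """One matrix-free Newton step: solve J(u) du = -F(u) with GMRES."""
    F0 = residual(u)
    n = u.size

    def jv(v):
        # Directional finite difference: J(u) v ~= (F(u + eps*v) - F(u)) / eps
        return (residual(u + eps * v) - F0) / eps

    J = LinearOperator((n, n), matvec=jv)
    # Optional preconditioner, e.g. one multigrid V-cycle approximating J^{-1}
    M = LinearOperator((n, n), matvec=v_cycle) if v_cycle is not None else None
    du, info = gmres(J, -F0, M=M)   # default GMRES tolerances; tune as needed
    return u + du

# usage sketch: u_new = jfnk_step(my_residual, u_old, v_cycle=my_vcycle)
```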
A novel acenocoumarol pharmacogenomic dosing algorithm for the Greek population of EU-PACT trial.
Ragia, Georgia; Kolovou, Vana; Kolovou, Genovefa; Konstantinides, Stavros; Maltezos, Efstratios; Tavridou, Anna; Tziakas, Dimitrios; Maitland-van der Zee, Anke H; Manolopoulos, Vangelis G
2017-01-01
To generate and validate a pharmacogenomic-guided (PG) dosing algorithm for acenocoumarol in the Greek population, and to compare its performance with other PG algorithms developed for the Greek population. A total of 140 Greek patients, participants in the EU-PACT trial for acenocoumarol (a randomized clinical trial that prospectively compared the effect of a PG dosing algorithm with that of a clinical dosing algorithm on the percentage of time within the INR therapeutic range), who reached a stable acenocoumarol dose were included in the study. CYP2C9 and VKORC1 genotypes, age and weight affected acenocoumarol dose and predicted 53.9% of its variability. The EU-PACT PG algorithm overestimated acenocoumarol dose across all CYP2C9/VKORC1 functional phenotype bins (predicted vs stable dose: 2.31 vs 2.00 mg/day in normal responders, p = 0.028; 1.72 vs 1.50 mg/day in sensitive responders, p = 0.003; 1.39 vs 1.00 mg/day in highly sensitive responders, p = 0.029). The PG algorithm previously developed for the Greek population overestimated the dose in normal responders (2.51 vs 2.00 mg/day, p < 0.001). An ethnic-specific dosing algorithm is suggested for better prediction of acenocoumarol dosage requirements in patients of Greek origin.
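The fitted coefficients are not given in the abstract, so the following sketch only illustrates the shape of such a PG dosing model (a linear regression on genotype scores, age and weight); every coefficient is a placeholder, not the published Greek algorithm.

```python
# Hypothetical illustration of a PG dosing-model structure; coefficients are
# placeholders, NOT the published Greek EU-PACT algorithm.
def predicted_dose_mg_per_day(cyp2c9_score, vkorc1_score, age_yr, weight_kg):
    # genotype scores: e.g. number of reduced-function alleles (0, 1 or 2)
    b0, b_cyp, b_vkor, b_age, b_wt = 3.0, -0.4, -0.8, -0.02, 0.01  # placeholders
    return (b0 + b_cyp * cyp2c9_score + b_vkor * vkorc1_score
               + b_age * age_yr + b_wt * weight_kg)
```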
PSF reconstruction for Compton-based prompt gamma imaging
NASA Astrophysics Data System (ADS)
Jan, Meei-Ling; Lee, Ming-Wei; Huang, Hsuan-Ming
2018-02-01
Compton-based prompt gamma (PG) imaging has been proposed for in vivo range verification in proton therapy. However, several factors degrade the image quality of PG images, some of which are due to inherent properties of a Compton camera such as spatial resolution and energy resolution. Moreover, Compton-based PG imaging has a spatially variant resolution loss. In this study, we investigate the performance of the list-mode ordered subset expectation maximization algorithm with a shift-variant point spread function (LM-OSEM-SV-PSF) model. We also evaluate how well the PG images reconstructed using an SV-PSF model reproduce the distal falloff of the proton beam. The SV-PSF parameters were estimated from simulation data of point sources at various positions. Simulated PGs were produced in a water phantom irradiated with a proton beam. Compared to the LM-OSEM algorithm, the LM-OSEM-SV-PSF algorithm improved the quality of the reconstructed PG images and the estimation of PG falloff positions. In addition, the 4.44 and 5.25 MeV PG emissions can be accurately reconstructed using the LM-OSEM-SV-PSF algorithm. However, for the 2.31 and 6.13 MeV PG emissions, the LM-OSEM-SV-PSF reconstruction provides limited improvement. We also found that the LM-OSEM algorithm followed by a shift-variant Richardson-Lucy deconvolution could reconstruct images with quality visually similar to the LM-OSEM-SV-PSF-reconstructed images, while requiring shorter computation time.
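As a toy illustration of PSF-modelled EM reconstruction in the spirit of this method (not the authors' list-mode code), the following 1D MLEM loop bakes a position-dependent Gaussian blur into the system matrix; all sizes and widths are invented.

```python
import numpy as np

# Toy 1D MLEM with a shift-variant Gaussian PSF baked into the system matrix.
n = 64
x_true = np.zeros(n); x_true[20:40] = 1.0
pos = np.arange(n)
sigma = 0.5 + 0.05 * pos                      # resolution degrades with position
A = np.exp(-(pos[:, None] - pos[None, :])**2 / (2 * sigma[None, :]**2))
A /= A.sum(axis=0, keepdims=True)             # normalize detection probability
y = np.random.poisson(200 * A @ x_true)       # noisy "measured" data

x = np.ones(n)                                # MLEM iterations
sens = A.sum(axis=0)
for _ in range(50):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
```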
On the structure of an aqueous propylene glycol solution.
Rhys, Natasha H; Gillams, Richard J; Collins, Louise E; Callear, Samantha K; Lawrence, M Jayne; McLain, Sylvia E
2016-12-14
Using a combination of neutron diffraction and empirical potential structure refinement computational modelling, the interactions in a 30 mol. % aqueous solution of propylene glycol (PG), which govern both the hydration and association of this molecule in solution, have been assessed. From this work it appears that PG is readily hydrated, with the most prevalent hydration interactions occurring not only through the PG hydroxyl groups but also through alkyl groups typically considered hydrophobic. Hydration interactions of PG dominate the solution over PG self-self interactions, and there is no evidence of more extensive association. This hydration behavior suggests that the preference of PG to be hydrated rather than self-associated may translate into a preference for PG to bind to lipids rather than to itself, providing a potential explanation for how PG is able to enhance the apparent solubility of drug molecules in vivo.
Effect of biogenic fermentation impurities on lactic acid hydrogenation to propylene glycol.
Zhang, Zhigang; Jackson, James E; Miller, Dennis J
2008-09-01
The effect of residual impurities from glucose fermentation to lactic acid (LA) on subsequent ruthenium-catalyzed hydrogenation of LA to propylene glycol (PG) is examined. Whereas refined LA feed exhibits stable conversion to PG over carbon-supported ruthenium catalyst in a trickle bed reactor, partially refined LA from fermentation shows a steep decline in PG production over short (<40 h) reaction times followed by a further slow decay in performance. Addition of model impurities to refined LA has varying effects: organic acids, sugars, or inorganic salts have little effect on conversion; alanine, a model amino acid, results in a strong but reversible decline in conversion via competitive adsorption between alanine and LA on the Ru surface. The sulfur-containing amino acids cysteine and methionine irreversibly poison the catalyst for LA conversion. Addition of 0.1 wt% albumin as a model protein leads to slow decline in rate, consistent with pore plugging or combined pore plugging and poisoning of the Ru surface. This study points to the need for integrated design and operation of biological processes and chemical processes in the biorefinery in order to make efficient conversion schemes viable.
NASA Astrophysics Data System (ADS)
Rabin, Sam S.; Ward, Daniel S.; Malyshev, Sergey L.; Magi, Brian I.; Shevliakova, Elena; Pacala, Stephen W.
2018-03-01
This study describes and evaluates the Fire Including Natural & Agricultural Lands model (FINAL), which for the first time explicitly simulates cropland and pasture management fires separately from non-agricultural fires. The non-agricultural fire module uses empirical relationships to simulate burned area in a quasi-mechanistic framework, similar to past fire modeling efforts, but with a novel optimization method that improves the fidelity of simulated fire patterns to new observational estimates of non-agricultural burning. The agricultural fire components are forced with estimates of cropland and pasture fire seasonality and frequency derived from observational land cover and satellite fire datasets. FINAL accurately simulates the amount, distribution, and seasonal timing of burned cropland and pasture over 2001-2009 (global totals: 0.434×10⁶ and 2.02×10⁶ km² yr⁻¹ modeled, 0.454×10⁶ and 2.04×10⁶ km² yr⁻¹ observed), but carbon emissions for cropland and pasture fire are overestimated (global totals: 0.295 and 0.706 PgC yr⁻¹ modeled, 0.194 and 0.538 PgC yr⁻¹ observed). The non-agricultural fire module underestimates global burned area (1.91×10⁶ km² yr⁻¹ modeled, 2.44×10⁶ km² yr⁻¹ observed) and carbon emissions (1.14 PgC yr⁻¹ modeled, 1.84 PgC yr⁻¹ observed). The spatial pattern of total burned area and carbon emissions is generally well reproduced across much of sub-Saharan Africa, Brazil, Central Asia, and Australia, whereas burning in the boreal zone is underestimated. FINAL represents an important step in the development of global fire models, and offers a strategy for fire models to consider human-driven fire regimes on cultivated lands. At the regional scale, simulations would benefit from refinements in the parameterizations and improved optimization datasets. We include an in-depth discussion of the lessons learned from using the Levenberg-Marquardt algorithm in an interactive optimization for a dynamic global vegetation model.
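The optimization step is described only at a high level; a hedged sketch of fitting burned-area parameters with Levenberg-Marquardt via scipy follows, with the model function, parameters and data entirely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative only: fit parameters of a burned-area relationship to
# observations with Levenberg-Marquardt. `predict_burned_area` is a stand-in
# for the model's empirical burned-area equation, not FINAL's actual form.
def predict_burned_area(params, drivers):
    a, b, c = params
    fuel, dryness, ignitions = drivers
    return a * fuel * np.exp(-b * dryness) + c * ignitions

def residuals(params, drivers, observed):
    return predict_burned_area(params, drivers) - observed

drivers = (np.random.rand(100), np.random.rand(100), np.random.rand(100))
observed = predict_burned_area([1.0, 2.0, 0.5], drivers) + 0.01 * np.random.randn(100)
fit = least_squares(residuals, x0=[0.5, 1.0, 0.1], args=(drivers, observed),
                    method="lm")   # Levenberg-Marquardt
```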
A virtual pebble game to ensemble average graph rigidity.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2015-01-01
The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test whether a molecular structure is globally under-constrained or over-constrained. MCC is a mean-field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure with an effective medium in which distance constraints are globally distributed with perfectly uniform density. The Virtual Pebble Game (VPG) algorithm is an MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint-topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where the integers counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability of finding a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG estimates the ensemble-average PG results well. The VPG runs about 20% faster than a single PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls between the most accurate but slowest method of ensemble averaging over hundreds to thousands of independent PG runs, and the fastest but least accurate MCC.
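For concreteness, a minimal sketch of Maxwell constraint counting for a body-bar network as characterized above; the six-DOF-per-body convention is standard, but the function is an illustration rather than the paper's code.

```python
def maxwell_count(n_bodies, n_bars):
    """Mean-field lower bound on internal degrees of freedom (body-bar network).

    Each rigid body carries 6 DOF; each bar removes at most one DOF.
    Subtracting the 6 global rigid-body motions gives Maxwell's estimate.
    """
    return max(0, 6 * n_bodies - n_bars - 6)

# A structure is flagged globally over-constrained when the count hits zero.
print(maxwell_count(n_bodies=100, n_bars=550))   # -> 44 internal DOF (estimate)
```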
On the solvation of the phosphocholine headgroup in an aqueous propylene glycol solution
NASA Astrophysics Data System (ADS)
Rhys, Natasha H.; Al-Badri, Mohamed Ali; Ziolek, Robert M.; Gillams, Richard J.; Collins, Louise E.; Lawrence, M. Jayne; Lorenz, Christian D.; McLain, Sylvia E.
2018-04-01
The atomic-scale structure of the phosphocholine (PC) headgroup in a 30 mol. % propylene glycol (PG) aqueous solution has been investigated using a combination of neutron diffraction with isotopic substitution experiments and computer simulation techniques—molecular dynamics and empirical potential structure refinement. Here, the hydration of the PC headgroup remains largely intact compared with the hydration of this group in a bilayer and in a bulk water solution, with the PG molecules showing limited interactions with the headgroup. When direct PG interactions with PC do occur, they most likely coordinate to the choline N⁺(CH₃)₃ motif. Further, PG does not affect the bulk water structure, and the addition of PC does not perturb the PG-solvent interactions. This suggests that the reason PG is able to penetrate into membranes easily is that it does not form strong hydrogen-bonding or electrostatic interactions with the headgroup, allowing it to move easily across the membrane barrier.
An unbiased risk estimator for image denoising in the presence of mixed Poisson-Gaussian noise.
Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe
2014-03-01
The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise; these parameters are generally tuned to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE, applicable to a mixed Poisson-Gaussian noise model that unifies the Gaussian and Poisson noise models widely used in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator when little is known about the internal machinery of the denoising algorithm under consideration, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate PG-URE-driven parametrization for three standard denoising algorithms, with and without variance-stabilizing transforms, and for different characteristics of the Poisson-Gaussian noise mixture.
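A small sketch of the mixed Poisson-Gaussian observation model that PG-URE targets, assuming the common scaled-Poisson-plus-Gaussian parametrization; the paper's exact convention may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixed_pg_noise(x, gain=2.0, sigma=5.0):
    """Simulate y = gain * Poisson(x / gain) + Gaussian(0, sigma^2).

    A common parametrization of mixed Poisson-Gaussian noise in fluorescence
    imaging; treat gain and sigma here as placeholders.
    """
    return gain * rng.poisson(x / gain) + rng.normal(0.0, sigma, size=x.shape)

clean = np.full((4, 4), 100.0)
noisy = mixed_pg_noise(clean)
```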
A low-count reconstruction algorithm for Compton-based prompt gamma imaging
NASA Astrophysics Data System (ADS)
Huang, Hsuan-Ming; Liu, Chih-Chieh; Jan, Meei-Ling; Lee, Ming-Wei
2018-04-01
The Compton camera is an imaging device which has been proposed to detect prompt gammas (PGs) produced by proton–nuclear interactions within tissue during proton beam irradiation. Compton-based PG imaging has been developed to verify proton ranges because PG rays, particularly characteristic ones, have strong correlations with the distribution of the proton dose. However, accurate image reconstruction from characteristic PGs is challenging because the detector efficiency and resolution are generally low. Our previous study showed that point spread functions can be incorporated into the reconstruction process to improve image resolution. In this study, we propose a low-count reconstruction algorithm to improve the image quality of a characteristic PG emission by pooling information from other characteristic PG emissions. PGs were simulated from a proton beam irradiated on a water phantom, and a two-stage Compton camera was used for PG detection. The results show that the image quality of the reconstructed characteristic PG emission is improved with our proposed method compared with the standard reconstruction method using events from only one characteristic PG emission. For the 4.44 MeV PG rays, both methods can be used to predict the positions of the peak and the distal falloff with a mean accuracy of 2 mm. Moreover, only the proposed method can improve the estimated positions of the peak and the distal falloff of 5.25 MeV PG rays, and a mean accuracy of 2 mm can be reached.
Implementation of a parallel protein structure alignment service on cloud.
Hung, Che-Lun; Lin, Yaw-Ling
2013-01-01
Protein structure alignment has become an important strategy for identifying evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distributed computing framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of the alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.
GreedyMAX-type Algorithms for the Maximum Independent Set Problem
NASA Astrophysics Data System (ADS)
Borowiecki, Piotr; Göring, Frank
The maximum independent set problem for a simple graph G = (V, E) is to find the largest subset of pairwise nonadjacent vertices. The problem is known to be NP-hard, and it is also hard to approximate. In this article we introduce a non-negative integer-valued function p defined on the vertex set V(G), called a potential function of a graph G, while P(G) = max_{v ∈ V(G)} p(v) is called the potential of G. For any graph, P(G) ≤ Δ(G), where Δ(G) is the maximum degree of G; moreover, Δ(G) − P(G) may be arbitrarily large. The potential of a vertex gives closer insight into the properties of its neighborhood, which leads to the definition of the family of GreedyMAX-type algorithms, having the classical GreedyMAX algorithm as their origin. We establish a lower bound of 1/(P(G) + 1) for the performance ratio of GreedyMAX-type algorithms, which compares favorably with the bound 1/(Δ(G) + 1) known to hold for GreedyMAX. The cardinality of an independent set generated by any GreedyMAX-type algorithm is at least ∑_{v ∈ V(G)} (p(v) + 1)^{-1}, which strengthens the bounds of Turán and Caro-Wei stated in terms of vertex degrees.
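To make the flavor of GreedyMAX concrete, here is a plain-Python rendering of the classical algorithm (repeatedly delete a maximum-degree vertex until no edges remain) together with the Caro-Wei degree bound; the potential-function variants introduced in the article refine this template.

```python
from fractions import Fraction

def greedy_max_independent_set(adj):
    """adj: dict vertex -> set of neighbours. Returns an independent set."""
    adj = {v: set(ns) for v, ns in adj.items()}
    while any(adj[v] for v in adj):
        v = max(adj, key=lambda u: len(adj[u]))   # maximum-degree vertex
        for u in adj.pop(v):                      # delete it from the graph
            adj[u].discard(v)
    return set(adj)                               # remaining vertices: no edges

def caro_wei_bound(adj):
    """Lower bound sum 1/(d(v)+1) on the independence number."""
    return sum(Fraction(1, len(ns) + 1) for ns in adj.values())

g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(greedy_max_independent_set(g), float(caro_wei_bound(g)))  # {2, 4} 1.41...
```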
Syn, Nicholas L X; Lee, Soo-Chin; Brunham, Liam R; Goh, Boon-Cher
2015-10-01
Clinical trials of genotype-guided dosing of warfarin have yielded mixed results, which may in part reflect ethnic differences among study participants. However, no previous study has compared genotype-guided versus clinically guided or standard-of-care dosing in a Chinese population, whereas those involving African-Americans were underpowered to detect significant differences. We present a preclinical strategy that integrates pharmacogenetics (PG) and pharmacometrics to predict the outcome or guide the design of dosing strategies for drugs that show large interindividual variability. We use the example of warfarin and focus on two underrepresented groups in warfarin research. We identified the parameters required to simulate a patient population and the outcome of dosing strategies. PG and pharmacogenetic-plus-loading (PG+L) algorithms that take into account a patient's VKORC1 and CYP2C9 genotype status were considered and compared against a clinical (CA) algorithm for a simulated Chinese population using a predictive Monte Carlo and pharmacokinetic-pharmacodynamic framework. We also examined a simulated population of African-American ancestry to assess the robustness of the model in relation to real-world clinical trial data. The simulations replicated trends similar to those observed with clinical data in African-Americans. They further predict that the PG+L regimen is superior to both the CA and PG regimens in maximizing the percentage of time in the therapeutic range in a Chinese cohort, whereas the CA regimen poses the highest risk of overanticoagulation during warfarin initiation. The findings supplement the literature with an unbiased comparison of warfarin dosing algorithms and highlight interethnic differences in anticoagulation control.
Brouwers, Melissa C; Makarski, Julie; Kastner, Monika; Hayden, Leigh; Bhattacharyya, Onil
2015-03-15
Practice guideline (PG) implementability refers to PG features that promote their use. While there are tools and resources to promote PG implementability, none are based on an evidence-informed and multidisciplinary perspective. Our objectives were to (i) create a comprehensive and evidence-informed model of PG implementability, (ii) seek support for the model from the international PG community, (iii) map existing implementability tools onto the model, (iv) prioritize areas for further investigation, and (v) describe how the model can be used by PG developers, users, and researchers. A mixed methods approach was used. Using our completed realist review of the literature of seven different disciplines as the foundation, an iterative consensus process was used to create the beta version of the model. This was followed by (i) a survey of international stakeholders (guideline developers and users) to gather feedback and to refine the model, (ii) a content analysis comparing the model to existing PG tools, and (iii) a strategy to prioritize areas of the model for further research by members of the research team. The Guideline Implementability for Decision Excellence Model (GUIDE-M) comprises 3 core tactics, 7 domains, 9 subdomains, 44 attributes, and 40 subattributes and elements. Feedback on the beta version was received from 248 stakeholders from 34 countries. The model was rated as logical, relevant, and appropriate. Seven PG tools were selected and compared to the GUIDE-M: very few tools targeted the Contextualization and Deliberations domain. Also, fewer of the tools addressed PG appraisal than PG development and reporting functions. These findings informed the research priorities identified by the team. The GUIDE-M provides an evidence-informed international and multidisciplinary conceptualization of PG implementability. The model can be used by PG developers to help them create more implementable recommendations, by clinicians and other users to help them be better consumers of PGs, and by the research community to identify priorities for further investigation.
Lundquist, Peter K.; Poliakov, Anton; Bhuiyan, Nazmul H.; Zybailov, Boris; Sun, Qi; van Wijk, Klaas J.
2012-01-01
Plastoglobules (PGs) in chloroplasts are thylakoid-associated monolayer lipoprotein particles containing prenyl and neutral lipids and several dozen proteins mostly with unknown functions. An integrated view of the role of the PG is lacking. Here, we better define the PG proteome and provide a conceptual framework for further studies. The PG proteome from Arabidopsis (Arabidopsis thaliana) leaf chloroplasts was determined by mass spectrometry of isolated PGs and quantitative comparison with the proteomes of unfractionated leaves, thylakoids, and stroma. Scanning electron microscopy showed the purity and size distribution of the isolated PGs. Compared with previous PG proteome analyses, we excluded several proteins and identified six new PG proteins, including an M48 metallopeptidase and two Absence of bc1 complex (ABC1) atypical kinases, confirmed by immunoblotting. This refined PG proteome consisted of 30 proteins, including six ABC1 kinases and seven fibrillins together comprising more than 70% of the PG protein mass. Other fibrillins were located predominantly in the stroma or thylakoid and not in PGs; we discovered that this partitioning can be predicted by their isoelectric point and hydrophobicity. A genome-wide coexpression network for the PG genes was then constructed from mRNA expression data. This revealed a modular network with four distinct modules that each contained at least one ABC1K and/or fibrillin gene. Each module showed clear enrichment in specific functions, including chlorophyll degradation/senescence, isoprenoid biosynthesis, plastid proteolysis, and redox regulators and phosphoregulators of electron flow. We propose a new testable model for the PGs, in which sets of genes are associated with specific PG functions. PMID:22274653
Yu, Ke-Da; Jiang, Yi-Zhou; Hao, Shuang; Shao, Zhi-Ming
2015-10-05
The clinical significance of progesterone receptor (PgR) expression in estrogen receptor-negative (ER-) breast cancer is controversial. Herein, we systematically investigate the clinicopathologic features, molecular essence, and endocrine responsiveness of the ER-/PgR+/HER2- phenotype. Four study cohorts were included. The first and second cohorts were from the Surveillance, Epidemiology, and End Results database (n = 67,932) and Fudan University Shanghai Cancer Center (n = 2,338), respectively, for clinicopathologic and survival analysis. The third and fourth cohorts were from two independent publicly available microarray datasets including 837 operable cases and 483 cases undergoing neoadjuvant chemotherapy, respectively, for clinicopathologic and gene-expression analysis. Characterized genes defining subgroups within the ER-/PgR+/HER2- phenotype were determined and further validated. Clinicopathologic features and survival outcomes of the ER-/PgR+ phenotype fell between the ER+/PgR+ and ER-/PgR- phenotypes, but were more similar to ER-/PgR-. Among the ER-/PgR+ phenotype, 30% (95% confidence interval [CI] 17-42%, pooled by a fixed-effects method) were luminal-like and 59% (95% CI 45-72%, pooled by a fixed-effects method) were basal-like. We further refined the characterized genes for subtypes within the ER-/PgR+ phenotype and developed an immunohistochemistry-based method that could determine the molecular essence of ER-/PgR+ using three markers, TFF1, CK5, and EGFR. Either PAM50-defined or immunohistochemistry-defined basal-like ER-/PgR+ cases have a lower endocrine therapy sensitivity score compared with luminal-like ER-/PgR+ cases (P <0.0001 by Mann-Whitney test for each study set and P <0.0001 for pooled standardized mean difference in meta-analysis). Immunohistochemistry-defined basal-like ER-/PgR+ cases might not benefit from adjuvant endocrine therapy (log-rank P = 0.61 for sufficient versus insufficient endocrine therapy). The majority of ER-/PgR+/HER2- phenotype breast cancers are basal-like and associated with a lower endocrine therapy sensitivity score. Additional studies are needed to validate these findings.
Enhanced ionization efficiency in TIMS analyses of plutonium and americium using porous ion emitters
Baruzzini, Matthew L.; Hall, Howard L.; Watrous, Matthew G.; ...
2016-12-05
Investigations of enhanced sample utilization in thermal ionization mass spectrometry (TIMS) using porous ion emitter (PIE) techniques for the analysis of trace quantities of americium and plutonium were performed. Repeated ionization efficiency (i.e., the ratio of ions detected to atoms loaded on the filament) measurements were conducted on sample sizes ranging from 10–100 pg for americium and 1–100 pg for plutonium using PIE and traditional (i.e., a single, zone-refined rhenium, flat filament ribbon with a carbon ionization enhancer) TIMS filament sources. When compared to traditional filaments, PIEs exhibited an average boost in ionization efficiency of ~550% for plutonium and ~1100% for americium. A maximum average efficiency of 1.09% was observed at a 1 pg plutonium sample loading using PIEs. Supplementary trials were conducted using newly developed platinum PIEs to analyze 10 pg mass loadings of plutonium. Platinum PIEs exhibited an additional ~134% boost in ion yield over standard PIEs and ~736% over traditional filaments at the same sample loading level.
Adaptive mesh refinement and front-tracking for shear bands in an antiplane shear model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garaizar, F.X.; Trangenstein, J.
1998-09-01
In this paper the authors describe a numerical algorithm for the study of shear-band formation and growth in a two-dimensional antiplane shear of granular materials. The algorithm combines front-tracking techniques and adaptive mesh refinement. Tracking provides a more careful evolution of the band when coupled with special techniques to advance the ends of the shear band in the presence of a loss of hyperbolicity. The adaptive mesh refinement allows the computational effort to be concentrated in important areas of the deformation, such as the shear band and the elastic relief wave. The main challenges are the problems related to shear bands that extend across several grid patches and the effects that a nonhyperbolic growth rate of the shear bands has on the refinement process. They give examples of the success of the algorithm for various levels of refinement.
Mesh quality control for multiply-refined tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1994-01-01
A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, the EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, a low-rank approximation of the linearized model resolution matrix is used. In order to fill the gap between initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived: spatial variations of the imaged parameter are calculated, and the mesh is refined in the neighborhoods of the points with the largest variations. A series of numerical tests demonstrates the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes that account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low; however, such meshes exhibit a dependency on the initial model guess. It is also demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes: the implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
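A hedged sketch of the inverse-mesh refinement criterion described above (refine where the imaged parameter varies most between neighboring cells), with a simple fixed-fraction selection rule standing in for the paper's actual thresholding.

```python
import numpy as np

def cells_to_refine(cell_values, neighbours, frac=0.1):
    """Pick cells with the largest local variation of the imaged parameter.

    cell_values: array of model values per cell.
    neighbours: list of neighbour-index lists per cell.
    frac: fraction of cells to refine per adaptation step (illustrative).
    """
    variation = np.array([
        max((abs(cell_values[i] - cell_values[j]) for j in nbrs), default=0.0)
        for i, nbrs in enumerate(neighbours)
    ])
    k = max(1, int(frac * len(cell_values)))
    return np.argsort(variation)[-k:]            # indices of cells to refine

vals = np.array([1.0, 1.1, 5.0, 5.1])
nbrs = [[1], [0, 2], [1, 3], [2]]
print(cells_to_refine(vals, nbrs, frac=0.25))    # -> [2], the sharpest jump
```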
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.
2017-09-01
The Semi-Global Matching (SGM) algorithm is known as a high-performance and reliable stereo matching algorithm in the photogrammetry community. However, there are some challenges in using this algorithm, especially for high-resolution satellite stereo images over urban areas and for images with shadow areas. Unfortunately, the SGM algorithm computes highly noisy disparity values for shadow areas around tall neighboring buildings, due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method is based on integrating panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over the city of Qom, Iran, show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
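A compact sketch of the RANSAC plane-fitting step used to refine disparities in shadow regions, fitting d = a*x + b*y + c to nearby reliable disparities so that noisy shadow pixels can take the plane's prediction; thresholds and iteration counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_plane(x, y, d, iters=200, tol=1.0):
    """Fit a disparity plane d = a*x + b*y + c with RANSAC; return (a, b, c).

    x, y, d: numpy arrays of pixel coordinates and reliable disparities.
    """
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(d), 3, replace=False)
        A = np.column_stack([x[idx], y[idx], np.ones(3)])
        try:
            abc = np.linalg.solve(A, d[idx])
        except np.linalg.LinAlgError:
            continue                       # degenerate (collinear) sample
        resid = np.abs(x * abc[0] + y * abc[1] + abc[2] - d)
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = abc, inliers
    return best

# Shadow pixels then take their disparity from the fitted plane:
# d_refined = a * x_shadow + b * y_shadow + c
```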
Deformable complex network for refining low-resolution X-ray structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu
2015-10-27
In macromolecular X-ray crystallography, building more accurate atomic models from lower-resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid in low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported, which combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models, as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structure determination.
Kelly, W Robert; Long, Stephen E; Mann, Jacqueline L
2003-07-01
Mercury was determined by isotope dilution cold-vapor inductively coupled plasma mass spectrometry (ID-CV-ICP-MS) in four different liquid petroleum SRMs. Samples of approximately 0.3 g were spiked with stable ²⁰¹Hg and wet ashed in a closed system (Carius tube) using 6 g of high-purity nitric acid. Three different types of commercial oils were measured: two Texas crude oils, SRM 2721 (41.7 ± 5.7 pg g⁻¹) and SRM 2722 (129 ± 13 pg g⁻¹), a low-sulfur diesel fuel, SRM 2724b (34 ± 26 pg g⁻¹), and a low-sulfur residual fuel oil, SRM 1619b (3.5 ± 0.74 ng g⁻¹) (mean value and 95% CI). The Hg values for the crude oils and the diesel fuel are the lowest values ever reported for these matrices. The method detection limit, which is ultimately limited by method blank uncertainty, is approximately 10 pg g⁻¹ for a 0.3 g sample. Accurate Hg measurements in petroleum products are needed to assess the contribution to the global Hg cycle and may be needed in the near future to comply with reporting regulations for toxic elements.
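As a worked illustration of the isotope dilution principle behind these measurements (not the paper's data-reduction code): with a ²⁰¹Hg-enriched spike, the analyte amount follows from the measured ²⁰¹Hg/²⁰²Hg ratio. The abundances below are approximate natural values and a placeholder spike composition.

```python
def idms_amount(n_spike, r_meas, a_x=0.1318, b_x=0.2986, a_sp=0.98, b_sp=0.01):
    """Isotope dilution: moles of natural-composition Hg in the sample.

    r_meas : measured 201Hg/202Hg signal ratio in the spiked sample.
    a, b   : abundances of 201Hg and 202Hg in the sample (x) and spike (sp);
             natural values are approximate, spike purity is a placeholder.
    Derived from r_meas = (a_x*n_x + a_sp*n_sp) / (b_x*n_x + b_sp*n_sp).
    """
    return n_spike * (a_sp - r_meas * b_sp) / (r_meas * b_x - a_x)
```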
AbouEzzeddine, Omar F.; French, Benjamin; Mirzoyev, Sultan A.; Jaffe, Allan S; Levy, Wayne C.; Fang, James C.; Sweitzer, Nancy K.; Cappola, Thomas P.; Redfield, Margaret M.
2016-01-01
Background: Heart failure (HF) guidelines recommend brain natriuretic peptide (BNP) and multivariable risk scores such as the Seattle HF Model (SHFM) to predict risk in HF with reduced ejection fraction (HFrEF). A practical way to integrate information from these two prognostic tools is lacking. We sought to establish a SHFM+BNP risk-stratification algorithm. Methods: The retrospective derivation cohort included consecutive patients with HFrEF at Mayo. The one-year outcome (death, transplantation or ventricular assist device) was assessed. The SHFM+BNP algorithm was derived by stratifying patients within SHFM-predicted risk categories (≤2.5%, 2.6–≤10%, >10%) according to BNP above or below 700 pg/mL and comparing SHFM-predicted and observed event rates within each SHFM+BNP category. The algorithm was validated in a prospective, multicenter HFrEF registry (Penn HF Study). Results: The derivation (n = 441; one-year event rate 17%) and validation (n = 1513; one-year event rate 12%) cohorts differed, with the former being older and more likely ischemic, with worse symptoms, lower EF, worse renal function, and higher BNP and SHFM scores. In both cohorts, across the three SHFM-predicted risk strata, a BNP > 700 pg/mL consistently identified patients with approximately three-fold the risk that the SHFM would otherwise have estimated, regardless of HF stage, intensity and duration of HF therapy, and comorbidities. Conversely, the SHFM was appropriately calibrated in patients with a BNP < 700 pg/mL. Conclusion: The simple SHFM+BNP algorithm displays stable performance across diverse HFrEF cohorts and may enhance risk stratification to enable appropriate decisions regarding HF therapeutic or palliative strategies. PMID:27021278
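The decision rule is simple enough to state as code. Below is a sketch assuming the cut-offs quoted above (BNP threshold 700 pg/mL, roughly three-fold recalibration of SHFM-predicted risk); treat it as illustrative, not a validated calculator.

```python
def shfm_bnp_risk(shfm_pred, bnp_pg_ml):
    """One-year event risk estimate from the SHFM+BNP stratification rule.

    shfm_pred : SHFM-predicted one-year event probability (0..1).
    bnp_pg_ml : brain natriuretic peptide, pg/mL.
    Illustrative recalibration: BNP > 700 pg/mL ~ triples SHFM-predicted risk.
    """
    return min(1.0, 3.0 * shfm_pred) if bnp_pg_ml > 700 else shfm_pred
```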
Using Small-Step Refinement for Algorithm Verification in Computer Science Education
ERIC Educational Resources Information Center
Simic, Danijela
2015-01-01
Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyse similar…
Field, Daniel J; Bercovici, Antoine; Berv, Jacob S; Dunn, Regan; Fastovsky, David E; Lyson, Tyler R; Vajda, Vivi; Gauthier, Jacques A
2018-06-04
The fossil record and recent molecular phylogenies support an extraordinary early-Cenozoic radiation of crown birds (Neornithes) after the Cretaceous-Paleogene (K-Pg) mass extinction [1-3]. However, questions remain regarding the mechanisms underlying the survival of the deepest lineages within crown birds across the K-Pg boundary, particularly since this global catastrophe eliminated even the closest stem-group relatives of Neornithes [4]. Here, ancestral state reconstructions of neornithine ecology reveal a strong bias toward taxa exhibiting predominantly non-arboreal lifestyles across the K-Pg, with multiple convergent transitions toward predominantly arboreal ecologies later in the Paleocene and Eocene. By contrast, ecomorphological inferences indicate predominantly arboreal lifestyles among enantiornithines, the most diverse and widespread Mesozoic avialans [5-7]. Global paleobotanical and palynological data show that the K-Pg Chicxulub impact triggered widespread destruction of forests [8, 9]. We suggest that ecological filtering due to the temporary loss of significant plant cover across the K-Pg boundary selected against any flying dinosaurs (Avialae [10]) committed to arboreal ecologies, resulting in a predominantly non-arboreal post-extinction neornithine avifauna composed of total-clade Palaeognathae, Galloanserae, and terrestrial total-clade Neoaves that rapidly diversified into the broad range of avian ecologies familiar today. The explanation proposed here provides a unifying hypothesis for the K-Pg-associated mass extinction of arboreal stem birds, as well as for the post-K-Pg radiation of arboreal crown birds. It also provides a baseline hypothesis to be further refined pending the discovery of additional neornithine fossils from the Latest Cretaceous and earliest Paleogene.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polf, J; McCleskey, M; Brown, S
2014-06-01
Purpose: Recent studies have suggested that the characteristics of prompt gammas (PG) emitted during proton beam irradiation are advantageous for determining beam range during treatment delivery. The purpose of this work was to determine the feasibility of determining the proton beam range from PG data measured with a prototype Compton camera (CC) during proton beam irradiation. Methods: Using a prototype multi-stage CC, the PG emission from a water phantom was measured during irradiation with clinical proton therapy beams. The measured PG emission data were used to reconstruct an image of the PG emission using a backprojection reconstruction algorithm. One-dimensional (1D) profiles extracted from the PG images were compared to: 1) PG emission data measured at fixed depths using collimated high-purity germanium and lanthanum bromide detectors, and 2) the measured depth-dose profiles of the proton beams. Results: Comparisons showed that the PG emission profiles reconstructed from CC measurements agreed very well with the measurements of PG emission as a function of depth made with the collimated detectors. The distal falloff of the measured PG profile was between 1 mm and 4 mm proximal to the distal edge of the Bragg peak for proton beam ranges from 4 cm to 16 cm in water. Doses of at least 5 Gy were needed for the CC to measure sufficient data to image the PG profile and localize the distal PG falloff. Conclusion: Initial tests of a prototype CC for imaging PG emission during proton beam irradiation indicated that measurement and reconstruction of the PG profile was possible. However, due to limitations of the operational parameters (energy range and count rate) of the current CC prototype, doses greater than a typical treatment dose (~2 Gy) were needed to measure adequate PG signal to reconstruct viable images. Funding support for this project was provided by a grant from the DoD.
Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David; Colella, Phillip
1995-01-01
To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for transonic flow over a NACA0012 airfoil (AGARD-03 test case) and the reflection of a shock over a double wedge.
Spherical Harmonic Decomposition of Gravitational Waves Across Mesh Refinement Boundaries
NASA Technical Reports Server (NTRS)
Fiske, David R.; Baker, John; vanMeter, James R.; Centrella, Joan M.
2005-01-01
We evolve a linearized (Teukolsky) solution of the Einstein equations with a non-linear Einstein solver. Using this testbed, we are able to show that such gravitational waves, defined by the Weyl scalars in the Newman-Penrose formalism, propagate faithfully across mesh refinement boundaries, and use, for the first time to our knowledge, a novel algorithm due to Misner to compute spherical harmonic components of our waveforms. We show that the algorithm performs extremely well, even when the extraction sphere intersects refinement boundaries.
QuadBase2: web server for multiplexed guanine quadruplex mining and visualization
Dhapola, Parashar; Chowdhury, Shantanu
2016-01-01
DNA guanine quadruplexes, or G4s, are non-canonical DNA secondary structures which affect genomic processes like replication, transcription and recombination. G4s are computationally identified by specific nucleotide motifs, also called putative G4 (PG4) motifs. Despite the general relevance of these structures, there was previously no tool available that allows batch queries and genome-wide analysis of these motifs in a user-friendly interface. QuadBase2 (quadbase.igib.res.in) presents a completely reinvented web-server version of the previously published QuadBase database. QuadBase2 enables users to mine PG4 motifs in up to 178 eukaryotes through the EuQuad module. This module interfaces with the Ensembl Compara database to allow users to mine PG4 motifs in the orthologues of genes of interest across eukaryotes. PG4 motifs can be mined across genes and their promoter sequences in 1719 prokaryotes through the ProQuad module. This module includes a feature that allows genome-wide mining of PG4 motifs and their visualization as circular histograms. TetraplexFinder, the module for mining PG4 motifs in user-provided sequences, is now capable of handling up to 20 MB of data. QuadBase2 is a comprehensive PG4 motif mining tool that further expands the configurations and algorithms for mining PG4 motifs in a user-friendly way. PMID:27185890
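A minimal sketch of PG4 motif detection as commonly defined (four tracts of at least three guanines separated by loops of 1-7 bases); QuadBase2's configurable algorithms generalize this, and the pattern below is the textbook default rather than the server's exact implementation.

```python
import re

# Canonical putative G-quadruplex (PG4) motif: G3+ N1-7 G3+ N1-7 G3+ N1-7 G3+
PG4 = re.compile(r"G{3,}[ACGT]{1,7}G{3,}[ACGT]{1,7}G{3,}[ACGT]{1,7}G{3,}")

seq = "TTGGGAGGGTTGGGAAGGGTT"
for m in PG4.finditer(seq):
    print(m.start(), m.group())   # -> 2 GGGAGGGTTGGGAAGGG
```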
GRID: a high-resolution protein structure refinement algorithm.
Chitsaz, Mohsen; Mayo, Stephen L
2013-03-05
The energy-based refinement of protein structures generated by fold prediction algorithms to atomic-level accuracy remains a major challenge in structural biology. Energy-based refinement is mainly dependent on two components: (1) sufficiently accurate force fields, and (2) efficient conformational space search algorithms. Focusing on the latter, we developed a high-resolution refinement algorithm called GRID. It takes a three-dimensional protein structure as input and, using an all-atom force field, attempts to improve the energy of the structure by systematically perturbing backbone dihedrals and side-chain rotamer conformations. We compare GRID to Backrub, a stochastic algorithm that has been shown to predict a significant fraction of the conformational changes that occur with point mutations. We applied GRID and Backrub to 10 high-resolution (≤ 2.8 Å) crystal structures from the Protein Data Bank and measured the energy improvements obtained and the computation times required to achieve them. GRID resulted in energy improvements that were significantly better than those attained by Backrub while expending about the same amount of computational resources. GRID resulted in relaxed structures that had slightly higher backbone RMSDs compared to Backrub relative to the starting crystal structures. The average RMSD was 0.25 ± 0.02 Å for GRID versus 0.14 ± 0.04 Å for Backrub. These relatively minor deviations indicate that both algorithms generate structures that retain their original topologies, as expected given the nature of the algorithms.
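A toy sketch of the systematic-perturbation idea behind GRID (sweep each torsion, try small offsets, keep improvements), with a stand-in energy function; the real algorithm evaluates an all-atom force field and side-chain rotamers.

```python
def grid_like_relax(angles, energy, deltas=(-2.0, -1.0, 1.0, 2.0), sweeps=10):
    """Coordinate descent over torsion angles (degrees).

    `energy` is a stand-in for an all-atom force-field evaluation.
    """
    best = energy(angles)
    for _ in range(sweeps):
        for i in range(len(angles)):
            for d in deltas:
                trial = angles[:]
                trial[i] += d
                e = energy(trial)
                if e < best:              # keep only improving perturbations
                    angles, best = trial, e
    return angles, best

# Example with a toy energy whose minimum is at all angles = 60 degrees.
e = lambda a: sum((x - 60.0) ** 2 for x in a)
print(grid_like_relax([50.0, 75.0, 58.0], e))
```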
Efficient Power Network Analysis with Modeling of Inductive Effects
NASA Astrophysics Data System (ADS)
Zeng, Shan; Yu, Wenjian; Hong, Xianlong; Cheng, Chung-Kuan
In this paper, an efficient method is proposed to accurately analyze large-scale power/ground (P/G) networks in which inductive parasitics are modeled with partial reluctances. The method is based on frequency-domain circuit analysis and the technique of vector fitting [14], and obtains the time-domain voltage response at given P/G nodes. The frequency-domain circuit equation including partial reluctances is derived and then solved with the GMRES algorithm using rescaling, preconditioning and recycling techniques. Thanks to the sparsified reluctance matrix and iterative solution techniques for the frequency-domain circuit equations, the proposed method is able to handle large-scale P/G networks with complete inductive modeling. Numerical results show that the proposed method is orders of magnitude faster than HSPICE, several times faster than INDUCTWISE [4], and capable of handling inductive P/G structures with more than 100,000 wire segments.
A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics.
Duconge, Jorge; Ramos, Alga S; Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y; Cadilla, Carmen L; Cruz, Iadelisse; Feliu, Juan F; Vergara, Cunegundo; Ruaño, Gualberto
2016-01-01
This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R² = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R² = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in an ideal dose, as compared with only 29% when using the clinical non-genetic algorithm (p < 0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). The results supported our rationale to incorporate individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. ClinicalTrials.gov NCT01318057.
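A hedged sketch of how such an admixture-adjusted model can be fit: ordinary least squares of effective dose on genotype counts, clinical factors and an individual ancestry fraction. The covariates and data are invented for illustration.

```python
import numpy as np

# Invented illustration: fit dose ~ genotypes + clinical factors + admixture.
rng = np.random.default_rng(7)
n = 255
X = np.column_stack([
    np.ones(n),                      # intercept
    rng.integers(0, 3, n),           # CYP2C9 variant allele count
    rng.integers(0, 3, n),           # VKORC1 variant allele count
    rng.normal(65, 12, n),           # age (years)
    rng.uniform(0.0, 1.0, n),        # individual ancestry (admixture) fraction
])
beta_true = np.array([6.0, -0.9, -1.3, -0.03, 1.1])   # placeholder effects
dose = X @ beta_true + rng.normal(0, 0.8, n)

beta, *_ = np.linalg.lstsq(X, dose, rcond=None)   # OLS fit
mae = np.mean(np.abs(X @ beta - dose))            # in-sample MAE, mg/day
```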
Adaptive mesh refinement for characteristic grids
NASA Astrophysics Data System (ADS)
Thornburg, Jonathan
2011-05-01
I consider techniques for Berger-Oliger adaptive mesh refinement (AMR) when numerically solving partial differential equations with wave-like solutions, using characteristic (double-null) grids. Such AMR algorithms are naturally recursive, and the best-known past Berger-Oliger characteristic AMR algorithm, that of Pretorius and Lehner (J Comp Phys 198:10, 2004), recurses on individual "diamond" characteristic grid cells. This leads to the use of fine-grained memory management, with individual grid cells kept in two-dimensional linked lists at each refinement level. This complicates the implementation and adds overhead in both space and time. Here I describe a Berger-Oliger characteristic AMR algorithm which instead recurses on null slices. This algorithm is very similar to the usual Cauchy Berger-Oliger algorithm, and uses relatively coarse-grained memory management, allowing entire null slices to be stored in contiguous arrays in memory. The algorithm is very efficient in both space and time. I describe discretizations yielding both second and fourth order global accuracy. My code implementing the algorithm described here is included in the electronic supplementary materials accompanying this paper, and is freely available to other researchers under the terms of the GNU general public license.
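A schematic of the slice-recursion control flow described above (advance a coarse level, then recursively take two half-steps on the finer level), with the update, error estimator and injection left as stubs; this is the generic Berger-Oliger pattern, not the paper's code.

```python
def step(data, dt):            # stub: one numerical update of a null slice
    pass

def needs_refinement(data):    # stub: truncation-error estimate
    return False

def inject(fine, coarse):      # stub: restrict the fine solution onto coarse
    pass

def evolve_level(level, grids, dt):
    """Berger-Oliger recursion: advance one level, subcycling finer levels."""
    step(grids[level], dt)                       # advance this level by dt
    if level + 1 < len(grids) and needs_refinement(grids[level]):
        evolve_level(level + 1, grids, dt / 2)   # two fine steps (2:1 ratio)
        evolve_level(level + 1, grids, dt / 2)
        inject(grids[level + 1], grids[level])   # copy fine result to coarse

# usage sketch: evolve_level(0, grids=[coarse_slice, fine_slice], dt=1.0)
```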
Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm
NASA Astrophysics Data System (ADS)
Hasançebi, O.; Kazemzadeh Azad, S.
2014-01-01
This article presents a methodology for the design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that the standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems in discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for the design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as to other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in the practical design optimization of truss structures.
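For readers unfamiliar with BB-BC, here is a minimal continuous-variable sketch of the standard algorithm that the article refines: candidates collapse to a fitness-weighted center of mass (the "crunch") and are re-scattered around it with a spread that shrinks each iteration (the "bang"). The discrete sizing and AISC-ASD constraint handling of the article are omitted, and the weighting shown is one common choice.

```python
import numpy as np

rng = np.random.default_rng(3)

def bb_bc_minimize(f, lo, hi, pop=30, iters=100):
    """Standard big bang-big crunch on a box-constrained problem."""
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop, dim))              # initial big bang
    for k in range(1, iters + 1):
        fit = np.array([f(x) for x in X])
        w = 1.0 / (1.0 + fit - fit.min())                 # lower cost -> larger weight
        center = (w[:, None] * X).sum(axis=0) / w.sum()   # big crunch
        spread = (hi - lo) * rng.standard_normal((pop, dim)) / k
        X = np.clip(center + spread, lo, hi)              # new big bang, shrinking
    return center

sphere = lambda x: float(np.sum(x ** 2))
print(bb_bc_minimize(sphere, lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0])))
```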
Algorithm refinement for stochastic partial differential equations: II. Correlated systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, Francis J.; Garcia, Alejandro L.; Tartakovsky, Daniel M.
2005-08-10
We analyze a hybrid particle/continuum algorithm for a hydrodynamic system with long ranged correlations. Specifically, we consider the so-called train model for viscous transport in gases, which is based on a generalization of the random walk process for the diffusion of momentum. This discrete model is coupled with its continuous counterpart, given by a pair of stochastic partial differential equations. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass and momentum conservation. This methodology is an extension of our stochastic Algorithm Refinement (AR) hybrid for simple diffusion [F. Alexander, A. Garcia, D. Tartakovsky, Algorithm refinement for stochastic partial differential equations: I. Linear diffusion, J. Comput. Phys. 182 (2002) 47-66]. Results from a variety of numerical experiments are presented for steady-state scenarios. In all cases the mean and variance of density and velocity are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the long-range correlations of velocity fluctuations are qualitatively preserved but at reduced magnitude.
H-PoP and H-PoPG: heuristic partitioning algorithms for single individual haplotyping of polyploids.
Xie, Minzhu; Wu, Qiong; Wang, Jianxin; Jiang, Tao
2016-12-15
Some economically important plants, including wheat and cotton, have more than two copies of each chromosome. With the decreasing cost and increasing read length of next-generation sequencing technologies, reconstructing the multiple haplotypes of a polyploid genome from its sequence reads becomes practical. However, the computational challenge in polyploid haplotyping is much greater than that in diploid haplotyping, and there are few related methods. This article models the polyploid haplotyping problem as an optimal poly-partition problem on the reads, called the Polyploid Balanced Optimal Partition model. For reads sequenced from a k-ploid genome, the model tries to divide the reads into k groups such that the difference between reads of the same group is minimized while the difference between reads of different groups is maximized. When genotype information is available, the model is extended to the Polyploid Balanced Optimal Partition with Genotype constraint problem. These models are all NP-hard. We propose two heuristic algorithms, H-PoP and H-PoPG, based on dynamic programming and a strategy of limiting the number of intermediate solutions at each iteration, to solve the two models, respectively. Extensive experimental results on simulated and real data show that our algorithms can solve the models effectively, and are much faster and more accurate than recent state-of-the-art polyploid haplotyping algorithms. The experiments also show that our algorithms can handle long reads and deep read coverage effectively and accurately. Furthermore, H-PoP might be applied to help determine the ploidy of an organism. Availability: https://github.com/MinzhuXie/H-PoPG. Contact: xieminzhu@hotmail.com. Supplementary data are available at Bioinformatics online.
A Novel Admixture-Based Pharmacogenetic Approach to Refine Warfarin Dosing in Caribbean Hispanics
Claudio-Campos, Karla; Rivera-Miranda, Giselle; Bermúdez-Bosch, Luis; Renta, Jessicca Y.; Cadilla, Carmen L.; Cruz, Iadelisse; Feliu, Juan F.; Vergara, Cunegundo; Ruaño, Gualberto
2016-01-01
Aim: This study is aimed at developing a novel admixture-adjusted pharmacogenomic approach to individually refine warfarin dosing in Caribbean Hispanic patients. Patients & Methods: A multiple linear regression analysis of effective warfarin doses versus relevant genotypes, admixture, clinical and demographic factors was performed in 255 patients and further validated externally in another cohort of 55 individuals. Results: The admixture-adjusted, genotype-guided warfarin dosing refinement algorithm developed in Caribbean Hispanics showed better predictability (R2 = 0.70, MAE = 0.72 mg/day) than a clinical algorithm that excluded genotypes and admixture (R2 = 0.60, MAE = 0.99 mg/day), and outperformed two prior pharmacogenetic algorithms in predicting effective dose in this population. For patients at the highest risk of adverse events, 45.5% of the dose predictions using the developed pharmacogenetic model resulted in ideal dose as compared with only 29% when using the clinical non-genetic algorithm (p < 0.001). The admixture-driven pharmacogenetic algorithm predicted 58% of warfarin dose variance when externally validated in 55 individuals from an independent validation cohort (MAE = 0.89 mg/day, 24% mean bias). Conclusions: Results supported our rationale to incorporate individuals' genotypes and unique admixture metrics into pharmacogenetic refinement models in order to increase predictability when expanding them to admixed populations like Caribbean Hispanics. Trial Registration: ClinicalTrials.gov NCT01318057. PMID:26745506
Refinements in the Combined Adjustment of Satellite Altimetry and Gravity Anomaly Data
1977-07-12
Anderson, Kim A.; Szelewski, Michael J.; Wilson, Glenn; Quimby, Bruce D.; Hoffman, Peter D.
2015-01-01
We describe modified gas chromatography electron-impact/triple-quadrupole mass spectrometry (GC–EI/MS/MS) utilizing a newly developed hydrogen-injected self-cleaning ion source and a modified 9 mm extractor lens. This instrument, with optimized parameters, achieves quantitative separation of 62 polycyclic aromatic hydrocarbons (PAHs). Existing methods historically limited rigorous identification and quantification to a small subset, such as the 16 PAHs the US EPA has defined as priority pollutants. Without the critical source and extractor lens modifications, the off-the-shelf GC–EI/MS/MS system was unsuitable for complex PAH analysis. Separations were enhanced by increased gas flow, a complex GC temperature profile incorporating multiple isothermal periods, specific ramp rates, and a PAH-optimized column. Typical determinations with our refined GC–EI/MS/MS have a large linear range of 1–10,000 pg μl−1 and detection limits of <2 pg μl−1. Among the 62 PAHs, multiple-reaction-monitoring (MRM) mode enabled GC–EI/MS/MS identification and quantitation of several of the MW 302 PAH isomers. Using calibration standards, values determined were within 5% of true values over many months. Standard curve r2 values were typically >0.998, exceptional for compounds that are archetypally difficult. With this method benzo[a]fluorene, benzo[b]fluorene and benzo[c]fluorene were fully separated, as were benzo[b]fluoranthene, benzo[k]fluoranthene and benzo[j]fluoranthene. Chrysene and triphenylene were sufficiently separated to allow accurate quantitation. Mean limits of detection (LODs) across all PAHs were 1.02 ± 0.84 pg μl−1, with indeno[1,2,3-c,d]pyrene having the lowest LOD at 0.26 pg μl−1 and only two analytes above 2.0 pg μl−1: acenaphthalene (2.33 pg μl−1) and dibenzo[a,e]pyrene (6.44 pg μl−1). PMID:26454790
Terwilliger, Thomas C; Grosse-Kunstleve, Ralf W; Afonine, Pavel V; Moriarty, Nigel W; Zwart, Peter H; Hung, Li Wei; Read, Randy J; Adams, Paul D
2008-01-01
The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.
Refined Genetic Algorithms for Polypeptide Structure Prediction.
1996-12-01
Table of contents fragments (OCR): III. Algorithm Analysis, Design, and Implementation; 3.1 Analysis; 3.2 Algorithm Design and Implementation; IV. Experiment Design.
Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.
Morimura, Tetsuro; Uchibe, Eiji; Yoshimoto, Junichiro; Peters, Jan; Doya, Kenji
2010-02-01
Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distribution to changes in the policy parameter. Although the bias introduced by this omission can be reduced by setting the forgetting rate gamma for the value functions close to 1, these algorithms do not permit gamma to be set exactly at gamma = 1. In this article, we propose a method for estimating the log stationary state distribution derivative (LSD) as a useful form of the derivative of the stationary state distribution through backward Markov chain formulation and a temporal difference learning framework. A new policy gradient (PG) framework with an LSD is also proposed, in which the average reward gradient can be estimated by setting gamma = 0, so it becomes unnecessary to learn the value functions. We also test the performance of the proposed algorithms using simple benchmark tasks and show that these can improve the performances of existing PG methods.
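For context, the sketch below implements the conventional kind of policy-gradient estimator the paper builds on: a plain REINFORCE-style softmax update on a toy two-armed task. The LSD-based estimator itself involves the backward Markov chain and temporal-difference machinery described above and is not reproduced here; all names and settings in the sketch are illustrative.

```python
import numpy as np

# Minimal REINFORCE-style policy-gradient update on a toy 2-armed task --
# the conventional kind of PG estimator discussed above, not the paper's
# LSD-based method. theta parameterizes a softmax policy.

rng = np.random.default_rng(0)
theta = np.zeros(2)                     # policy parameters, one score per action
alpha = 0.1                             # learning rate (illustrative)
true_reward = np.array([0.2, 0.8])      # hypothetical mean rewards

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r = rng.normal(true_reward[a], 0.1)
    grad_log_pi = -probs                # grad log pi(a) for softmax: e_a - probs
    grad_log_pi[a] += 1.0
    theta += alpha * r * grad_log_pi    # REINFORCE update: R * grad log pi

print(softmax(theta))   # should concentrate on the higher-reward action
```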
Guo, Weian; Si, Chengyong; Xue, Yu; Mao, Yanfen; Wang, Lei; Wu, Qidi
2017-05-04
Particle Swarm Optimization (PSO) is a popular algorithm which is widely investigated and well implemented in many areas. However, the canonical PSO does not perform well in population diversity maintenance so that usually leads to a premature convergence or local optima. To address this issue, we propose a variant of PSO named Grouping PSO with Personal- Best-Position (Pbest) Guidance (GPSO-PG) which maintains the population diversity by preserving the diversity of exemplars. On one hand, we adopt uniform random allocation strategy to assign particles into different groups and in each group the losers will learn from the winner. On the other hand, we employ personal historical best position of each particle in social learning rather than the current global best particle. In this way, the exemplars diversity increases and the effect from the global best particle is eliminated. We test the proposed algorithm to the benchmarks in CEC 2008 and CEC 2010, which concern the large scale optimization problems (LSOPs). By comparing several current peer algorithms, GPSO-PG exhibits a competitive performance to maintain population diversity and obtains a satisfactory performance to the problems.
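A rough reading of the described update, sketched below: particles are randomly partitioned into groups each iteration, and within a group the losers move toward their own personal bests and toward the personal best of the group winner. The exact coefficients and update rules of GPSO-PG are not given in the abstract, so those details are guesses made for illustration.

```python
import numpy as np

# Sketch of the grouping-with-pbest-guidance idea: random groups, and within
# each group the losers learn from the *personal best* of the group winner.
# Coefficients and structure here are assumptions, not the authors' exact rules.

rng = np.random.default_rng(1)

def sphere(x):                          # toy objective to minimize
    return np.sum(x * x, axis=-1)

n, dim, groups, iters = 40, 10, 8, 200
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), sphere(x)

for _ in range(iters):
    order = rng.permutation(n).reshape(groups, n // groups)
    for g in order:                               # one random group at a time
        winner = g[np.argmin(pbest_f[g])]
        for i in g:
            if i == winner:
                continue                          # the group winner stays put
            r1, r2, r3 = rng.random((3, dim))
            v[i] = (r1 * v[i]
                    + r2 * (pbest[i] - x[i])          # cognitive term
                    + r3 * (pbest[winner] - x[i]))    # social term: winner's pbest
            x[i] = x[i] + v[i]
    f = sphere(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]

print(pbest_f.min())   # best objective value found
```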
Salehpour, Mehdi; Behrad, Alireza
2017-10-01
This study proposes a new algorithm for nonrigid coregistration of synthetic aperture radar (SAR) and optical images. The proposed algorithm employs point features extracted by the binary robust invariant scalable keypoints algorithm and a new method called weighted bidirectional matching for initial correspondence. To refine false matches, we assume that the transformation between SAR and optical images is locally rigid. This property is used to refine false matches by assigning scores to matched pairs and clustering local rigid transformations using a two-layer Kohonen network. Finally, the thin plate spline algorithm and mutual information are used for nonrigid coregistration of SAR and optical images.
Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.
NASA Astrophysics Data System (ADS)
Liu, Y.; Li, Y.
2016-12-01
We present a 2D inverse algorithm for frequency domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multi-sources and parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out by using the Gauss-Newton algorithm and model perturbations at each iteration step are obtained by using the Inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.
A multi-block adaptive solving technique based on lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Zhang, Yang; Xie, Jiahua; Li, Xiaoyue; Ma, Zhenghai; Zou, Jianfeng; Zheng, Yao
2018-05-01
In this paper, a CFD parallel adaptive algorithm is developed in-house by combining the multi-block Lattice Boltzmann Method (LBM) with Adaptive Mesh Refinement (AMR). The mesh refinement criterion of this algorithm is based on the density, velocity and vortices of the flow field. The refined grid boundary is obtained by extending outward half a ghost cell from the coarse grid boundary, which makes the adaptive mesh more compact and the boundary treatment more convenient. Two numerical examples, backward-facing step flow separation and unsteady flow around a circular cylinder, demonstrate that the algorithm captures the vortex structure of the cold flow field accurately and in detail.
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods has been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
A template-based approach for parallel hexahedral two-refinement
Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.
2016-10-17
Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.
The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations
Mitchell, William F.
1998-01-01
Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given. PMID:28009355
Three-dimensional unstructured grid refinement and optimization using edge-swapping
NASA Technical Reports Server (NTRS)
Gandhi, Amar; Barth, Timothy
1993-01-01
This paper presents a three-dimensional (3-D) edge-swapping method based on local transformations. This method extends Lawson's edge-swapping algorithm into 3-D. The 3-D edge-swapping algorithm is employed to refine and optimize unstructured meshes according to arbitrary mesh-quality measures. Several criteria including Delaunay triangulations are examined. Extensions from two to three dimensions of several known properties of Delaunay triangulations are also discussed.
Passive microwave algorithm development and evaluation
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1995-01-01
The scientific objectives of this grant are: (1) thoroughly evaluate, both theoretically and empirically, all available Special Sensor Microwave Imager (SSM/I) retrieval algorithms for column water vapor, column liquid water, and surface wind speed; (2) where both appropriate and feasible, develop, validate, and document satellite passive microwave retrieval algorithms that offer significantly improved performance compared with currently available algorithms; and (3) refine and validate a novel physical inversion scheme for retrieving rain rate over the ocean. This report summarizes work accomplished or in progress during the first year of a three year grant. The emphasis during the first year has been on the validation and refinement of the rain rate algorithm published by Petty and on the analysis of independent data sets that can be used to help evaluate the performance of rain rate algorithms over remote areas of the ocean. Two articles in the area of global oceanic precipitation are attached.
A new parallelization scheme for adaptive mesh refinement
Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.; ...
2016-05-06
Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e. wall time x processor count) of subcycling in time, but with the runtime performance (i.e. smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.
Efficient Grammar Induction Algorithm with Parse Forests from Real Corpora
NASA Astrophysics Data System (ADS)
Kurihara, Kenichi; Kameya, Yoshitaka; Sato, Taisuke
The task of inducing grammar structures has received a great deal of attention. Researchers have studied it for different reasons: to use grammar induction as the first stage in building large treebanks, or to build better language models. However, grammar induction has inherent computational complexity. To overcome it, some grammar induction algorithms add new production rules incrementally. They refine the grammar while keeping their computational complexity low. In this paper, we propose a new efficient grammar induction algorithm. Although our algorithm is similar to algorithms which learn a grammar incrementally, our algorithm uses the graphical EM algorithm instead of the Inside-Outside algorithm. We report results of learning experiments in terms of learning speeds. The results show that our algorithm learns a grammar in constant time regardless of the size of the grammar. Since our algorithm decreases syntactic ambiguities in each step, our algorithm reduces the time required for learning. This constant-time learning considerably affects learning time for larger grammars. We also report results of evaluating criteria for choosing nonterminals. Our algorithm refines a grammar based on a nonterminal in each step. Since there can be several criteria to decide which nonterminal is best, we evaluate them by learning experiments.
Dhagat, Urmi; Endo, Satoshi; Mamiya, Hiroaki; Hara, Akira; El-Kabbani, Ossama
2009-03-01
3(17)alpha-Hydroxysteroid dehydrogenase (AKR1C21) is a unique member of the aldo-keto reductase (AKR) superfamily owing to its ability to reduce 17-ketosteroids to 17alpha-hydroxysteroids, as opposed to other members of the AKR family, which can only produce 17beta-hydroxysteroids. In this paper, the crystal structure of a double mutant (G225P/G226P) of AKR1C21 in complex with the coenzyme NADP(+) and the inhibitor hexoestrol refined at 2.1 Å resolution is presented. Kinetic analysis and molecular-modelling studies of 17alpha- and 17beta-hydroxysteroid substrates in the active site of AKR1C21 suggested that Gly225 and Gly226 play an important role in determining the substrate stereospecificity of the enzyme. Additionally, the G225P/G226P mutation of the enzyme reduced the affinity (K(m)) for both 3alpha- and 17alpha-hydroxysteroid substrates by up to 160-fold, indicating that these residues are critical for the binding of substrates.
Fulks, Michael; Kaufman, Valerie; Clark, Michael; Stout, Robert L
2017-01-01
- Further refine the independent value of NT-proBNP, accounting for the impact of other test results, in predicting all-cause mortality for individual life insurance applicants with and without heart disease. - Using the Social Security Death Master File and multivariate analysis, relative mortality was determined for 245,322 life insurance applicants ages 50 to 89 tested for NT-proBNP (almost all based on age and policy amount) along with other laboratory tests and measurement of blood pressure and BMI. - NT-proBNP values ≤75 pg/mL included the majority of applicants denying heart disease and had the lowest risk, while values >500 pg/mL for females and >300 pg/mL for males had very high relative risk. Those admitting to heart disease had a higher mortality risk for each band of NT-proBNP relative to those denying heart disease but had a similar and equally predictive risk curve. - NT-proBNP is a strong independent predictor of all-cause mortality in the absence or presence of known heart disease but the range of values associated with increased risk varies by sex.
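The reported bands translate directly into a small lookup, sketched below with the thresholds quoted above (≤75 pg/mL lowest risk; >500 pg/mL for females and >300 pg/mL for males very high risk). The function is a hypothetical illustration of the banding, not a clinical rule.

```python
# Hypothetical helper encoding the NT-proBNP risk bands reported above.
# Illustrative only -- not clinical guidance; function and labels are invented.

def nt_probnp_band(value_pg_ml: float, sex: str) -> str:
    high_cut = 500.0 if sex == "F" else 300.0   # sex-specific high-risk cut
    if value_pg_ml <= 75.0:
        return "lowest risk"
    if value_pg_ml > high_cut:
        return "very high relative risk"
    return "intermediate"

print(nt_probnp_band(60, "M"))    # lowest risk
print(nt_probnp_band(400, "F"))   # intermediate
print(nt_probnp_band(400, "M"))   # very high relative risk
```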
Dynamic grid refinement for partial differential equations on parallel computers
NASA Technical Reports Server (NTRS)
Mccormick, S.; Quinlan, D.
1989-01-01
The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.
Conditional Random Field-Based Offline Map Matching for Indoor Environments
Bataineh, Safaa; Bahillo, Alfonso; Díez, Luis Enrique; Onieva, Enrique; Bataineh, Ikram
2016-01-01
In this paper, we present an offline map matching technique designed for indoor localization systems based on conditional random fields (CRF). The proposed algorithm can refine the results of existing indoor localization systems and match them with the map, using loose coupling between the existing localization system and the proposed map matching technique. The purpose of this research is to investigate the efficiency of using the CRF technique in offline map matching problems for different scenarios and parameters. The algorithm was applied to several real and simulated trajectories of different lengths. The results were then refined and matched with the map using the CRF algorithm. PMID:27537892
Method of modifying a volume mesh using sheet insertion
Borden, Michael J [Albuquerque, NM]; Shepherd, Jason F [Albuquerque, NM]
2006-08-29
A method and machine-readable medium provide a technique to modify a hexahedral finite element volume mesh using dual generation and sheet insertion. After generating a dual of a volume stack (mesh), a predetermined algorithm may be followed to modify (refine) the volume mesh of hexahedral elements. The predetermined algorithm may include the steps of locating a sheet of hexahedral mesh elements, determining a plurality of hexahedral elements within the sheet to refine, shrinking the plurality of elements, and inserting a new sheet of hexahedral elements adjacently to modify the volume mesh. Additionally, another predetermined algorithm using mesh cutting may be followed to modify a volume mesh.
MreB: pilot or passenger of cell wall synthesis?
White, Courtney L; Gober, James W
2012-02-01
The discovery that the bacterial cell shape determinant MreB is related to actin spurred new insights into bacterial morphogenesis and development. The trafficking and mechanical roles of the eukaryotic cytoskeleton were hypothesized to have a functional ancestor in MreB based on evidence implicating MreB as an organizer of cell wall synthesis. Genetic, biochemical and cytological studies implicate MreB as a coordinator of a large multi-protein peptidoglycan (PG) synthesizing holoenzyme. Recent advances in microscopy and new biochemical evidence, however, suggest that MreB may function differently than previously envisioned. This review summarizes our evolving knowledge of MreB and attempts to refine the generalized model of the proteins organizing PG synthesis in bacteria. This is generally thought to be conserved among eubacteria and the majority of the discussion will focus on studies from a few well-studied model organisms. Copyright © 2011 Elsevier Ltd. All rights reserved.
GOES-R GS Product Generation Infrastructure Operations
NASA Astrophysics Data System (ADS)
Blanton, M.; Gundy, J.
2012-12-01
GOES-R GS Product Generation Infrastructure Operations: The GOES-R Ground System (GS) will produce a much larger set of products with higher data density than previous GOES systems. This requires considerably greater compute and memory resources to achieve the necessary latency and availability for these products. Over time, new algorithms could be added and existing ones removed or updated, but the GOES-R GS cannot go down during this time. To meet these GOES-R GS processing needs, the Harris Corporation will implement a Product Generation (PG) infrastructure that is scalable, extensible, modular and reliable. The primary part of the PG infrastructure is the Service Based Architecture (SBA), which includes the Distributed Data Fabric (DDF). The SBA is the middleware that encapsulates and manages the science algorithms that generate products. The SBA is divided into three parts: the Executive, which manages and configures each algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DDF to provide this data communication layer between algorithms. The DDF provides an abstract interface over a distributed and persistent multi-layered storage system (memory-based caching above disk-based storage) and an event system that allows algorithm services to know when data are available and to get the data they need to begin processing when they need it. Together, the SBA and the DDF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
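A toy rendering of this pattern is sketched below: services subscribe to the product types they need, a data-fabric stand-in delivers published payloads, and each service runs its algorithm once its Strategy decides all inputs are present. Class, product and method names are invented for the example and do not reflect the actual GOES-R GS software.

```python
from collections import defaultdict

# Toy publish/subscribe illustration of the algorithm-as-a-service pattern
# described above. Everything here is a made-up stand-in, not GOES-R GS code.

class DataFabric:
    """Minimal in-memory stand-in for the publish/subscribe data layer."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, product, service):
        self.subscribers[product].append(service)
    def publish(self, product, payload):
        print(f"publish {product}: {payload}")
        for service in self.subscribers[product]:
            service.on_data(product, payload, self)

class Service:
    def __init__(self, name, needs, produces):
        self.name, self.needs, self.produces = name, set(needs), produces
        self.buffer = {}
    def on_data(self, product, payload, fabric):
        self.buffer[product] = payload            # Dispatcher: stage the data
        if self.needs <= set(self.buffer):        # Strategy: all inputs ready?
            result = f"{self.name}({', '.join(sorted(self.buffer))})"
            fabric.publish(self.produces, result) # Executive: run and publish
            self.buffer.clear()

fabric = DataFabric()
fabric.subscribe("L1b-radiances",
                 Service("CloudMask", needs=["L1b-radiances"],
                         produces="L2-cloudmask"))
fabric.subscribe("L2-cloudmask",
                 Service("SST", needs=["L2-cloudmask"], produces="L2-sst"))
fabric.publish("L1b-radiances", "granule-001")   # triggers the product chain
```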
Deutsch, Maxime; Claiser, Nicolas; Pillet, Sébastien; Chumakov, Yurii; Becker, Pierre; Gillet, Jean Michel; Gillon, Béatrice; Lecomte, Claude; Souhassou, Mohamed
2012-11-01
New crystallographic tools were developed to access a more precise description of the spin-dependent electron density of magnetic crystals. The method combines experimental information coming from high-resolution X-ray diffraction (XRD) and polarized neutron diffraction (PND) in a unified model. A new algorithm that allows for a simultaneous refinement of the charge- and spin-density parameters against XRD and PND data is described. The resulting software MOLLYNX is based on the well known Hansen-Coppens multipolar model, and makes it possible to differentiate the electron spins. This algorithm is validated and demonstrated with a molecular crystal formed by a bimetallic chain, MnCu(pba)(H(2)O)(3)·2H(2)O, for which XRD and PND data are available. The joint refinement provides a more detailed description of the spin density than the refinement from PND data alone.
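The essence of a joint refinement is that one shared parameter vector is fit against both datasets at once by stacking their weighted residuals. The sketch below shows that idea on synthetic data with scipy; the two toy models stand in for the X-ray and neutron models and are not what MOLLYNX computes.

```python
import numpy as np
from scipy.optimize import least_squares

# Joint refinement in miniature: one shared parameter vector is fit against
# two datasets simultaneously by stacking their residuals. The two "models"
# below are synthetic stand-ins, not structure-factor calculations.

rng = np.random.default_rng(3)
p_true = np.array([1.5, -0.7])

x1, x2 = np.linspace(0, 1, 50), np.linspace(0, 1, 40)
model1 = lambda p, x: p[0] * x + p[1] * x**2          # "XRD-like" observable
model2 = lambda p, x: p[0] * np.sin(x) + p[1] * x     # "PND-like" observable
d1 = model1(p_true, x1) + 0.01 * rng.standard_normal(x1.size)
d2 = model2(p_true, x2) + 0.01 * rng.standard_normal(x2.size)

def residuals(p, w1=1.0, w2=1.0):
    # Weighted residuals from both experiments, stacked into one vector.
    return np.concatenate([w1 * (model1(p, x1) - d1),
                           w2 * (model2(p, x2) - d2)])

fit = least_squares(residuals, x0=np.zeros(2))
print(fit.x)   # close to p_true; both datasets constrain the same parameters
```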
Cox, Zachary L; Lewis, Connie M; Lai, Pikki; Lenihan, Daniel J
2017-01-01
We aim to validate the diagnostic performance of the first fully automatic, electronic heart failure (HF) identification algorithm and evaluate the implementation of an HF Dashboard system with 2 components: real-time identification of decompensated HF admissions and accurate characterization of disease characteristics and medical therapy. We constructed an HF identification algorithm requiring 3 of 4 identifiers: B-type natriuretic peptide >400 pg/mL; admitting HF diagnosis; history of HF International Classification of Disease, Ninth Revision, diagnosis codes; and intravenous diuretic administration. We validated the diagnostic accuracy of the components individually (n = 366) and combined in the HF algorithm (n = 150) compared with a blinded provider panel in 2 separate cohorts. We built an HF Dashboard within the electronic medical record characterizing the disease and medical therapies of HF admissions identified by the HF algorithm. We evaluated the HF Dashboard's performance over 26 months of clinical use. Individually, the algorithm components displayed variable sensitivity and specificity, respectively: B-type natriuretic peptide >400 pg/mL (89% and 87%); diuretic (80% and 92%); and International Classification of Disease, Ninth Revision, code (56% and 95%). The HF algorithm achieved a high specificity (95%), positive predictive value (82%), and negative predictive value (85%) but achieved limited sensitivity (56%) secondary to missing provider-generated identification data. The HF Dashboard identified and characterized 3147 HF admissions over 26 months. Automated identification and characterization systems can be developed and used with a substantial degree of specificity for the diagnosis of decompensated HF, although sensitivity is limited by clinical data input. Copyright © 2016 Elsevier Inc. All rights reserved.
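The 3-of-4 rule is simple enough to state in a few lines; a hedged sketch follows using the four identifiers quoted above. The dictionary keys are invented placeholders for EMR fields, and the snippet is illustrative rather than the validated implementation.

```python
# Hedged sketch of the published 3-of-4 rule (BNP > 400 pg/mL, admitting HF
# diagnosis, prior HF ICD-9 codes, IV diuretic given). Field names are
# invented; a real system would pull these from the EMR.

def flags_heart_failure(admission: dict) -> bool:
    criteria = [
        admission.get("bnp_pg_ml", 0) > 400,
        admission.get("admitting_dx_hf", False),
        admission.get("icd9_hf_history", False),
        admission.get("iv_diuretic_given", False),
    ]
    return sum(criteria) >= 3    # flag when at least 3 of 4 identifiers hit

print(flags_heart_failure({"bnp_pg_ml": 850, "admitting_dx_hf": True,
                           "iv_diuretic_given": True}))   # True
```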
Iterative refinement of structure-based sequence alignments by Seed Extension
Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook
2009-01-01
Background: Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still misalign substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results: RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion: RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithms in many structure alignment programs. PMID:19589133
Parodi, Stefano; Dosi, Corrado; Zambon, Antonella; Ferrari, Enrico; Muselli, Marco
2017-12-01
Identifying potential risk factors for problem gambling (PG) is of primary importance for planning preventive and therapeutic interventions. We illustrate a new approach based on the combination of standard logistic regression and an innovative method of supervised data mining (Logic Learning Machine or LLM). Data were taken from a pilot cross-sectional study to identify subjects with PG behaviour, assessed by two internationally validated scales (SOGS and Lie/Bet). Information was obtained from 251 gamblers recruited in six betting establishments. Data on socio-demographic characteristics, lifestyle and cognitive-related factors, and type, place and frequency of preferred gambling were obtained by a self-administered questionnaire. The following variables associated with PG were identified: instant gratification games, alcohol abuse, cognitive distortion, illegal behaviours and having started gambling with a relative or a friend. Furthermore, the combination of LLM and LR indicated the presence of two different types of PG, namely: (a) daily gamblers, more prone to illegal behaviour, with poor money management skills and who started gambling at an early age, and (b) non-daily gamblers, characterised by superstitious beliefs and a higher preference for immediate reward games. Finally, instant gratification games were strongly associated with the number of games usually played. Studies on gamblers who habitually frequent betting shops are rare. The finding of different types of PG among habitual gamblers deserves further analysis in larger studies. Advanced data mining algorithms, like LLM, are powerful tools and potentially useful in identifying risk factors for PG.
Implementation of a three-qubit refined Deutsch Jozsa algorithm using SFG quantum logic gates
NASA Astrophysics Data System (ADS)
DelDuce, A.; Savory, S.; Bayvel, P.
2006-05-01
In this paper we present a quantum logic circuit which can be used for the experimental demonstration of a three-qubit solid state quantum computer based on a recent proposal of optically driven quantum logic gates. In these gates, the entanglement of randomly placed electron spin qubits is manipulated by optical excitation of control electrons. The circuit we describe solves the Deutsch problem with an improved algorithm called the refined Deutsch-Jozsa algorithm. We show that it is possible to select optical pulses that solve the Deutsch problem correctly, and do so without losing quantum information to the control electrons, even though the gate parameters vary substantially from one gate to another.
Value Addition to Cartosat-I Imagery
NASA Astrophysics Data System (ADS)
Mohan, M.
2014-11-01
In the sector of remote sensing applications, the use of stereo data is on the steady rise. An attempt is hereby made to develop a software suite specifically for exploitation of Cartosat-I data. A few algorithms to enhance the quality of basic Cartosat-I products will be presented. The algorithms heavily exploit the Rational Function Coefficients (RPCs) that are associated with the image. The algorithms include improving the geometric positioning through Bundle Block Adjustment and producing refined RPCs; generating portable stereo views using raw / refined RPCs autonomously; orthorectification and mosaicing; registering a monoscopic image rapidly with a single seed point. The outputs of these modules (including the refined RPCs) are in standard formats for further exploitation in 3rd party software. The design focus has been on minimizing the user-interaction and to customize heavily to suit the Indian context. The core libraries are in C/C++ and some of the applications come with user-friendly GUI. Further customization to suit a specific workflow is feasible as the requisite photogrammetric tools are in place and are continuously upgraded. The paper discusses the algorithms and the design considerations of developing the tools. The value-added products so produced using these tools will also be presented.
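For readers unfamiliar with RPCs: they map ground coordinates to image coordinates as ratios of polynomials (20-term cubics for real Cartosat-I products). The sketch below uses a severely truncated first-order basis with made-up coefficients just to show the mechanics; it is not the paper's code.

```python
import numpy as np

# Minimal illustration of how Rational Function Coefficients map normalized
# ground coordinates to image coordinates as ratios of polynomials. Real RPCs
# use 20-term cubic bases; the tiny first-order coefficient sets below are
# invented for demonstration only.

def ratpoly(coeffs, lat, lon, h):
    terms = np.array([1.0, lat, lon, h])   # severely truncated RPC basis
    return coeffs @ terms

num_line = np.array([0.01, 0.98, 0.02, -0.05])    # hypothetical numerator
den_line = np.array([1.0, 0.001, -0.002, 0.0003]) # hypothetical denominator

def ground_to_line(lat, lon, h):
    """Normalized image line coordinate for normalized ground coordinates."""
    return ratpoly(num_line, lat, lon, h) / ratpoly(den_line, lat, lon, h)

print(ground_to_line(0.25, -0.10, 0.02))
```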
Unstructured Euler flow solutions using hexahedral cell refinement
NASA Technical Reports Server (NTRS)
Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.
1991-01-01
An attempt is made to extend grid refinement into three dimensions by using unstructured hexahedral grids. The flow solver is developed using TIGER (Topologically Independent Grid, Euler Refinement) as the starting point. The program uses an unstructured hexahedral mesh and a modified version of the Jameson four-stage, finite-volume Runge-Kutta algorithm for integration of the Euler equations. The unstructured mesh allows for local refinement appropriate for each freestream condition, thereby concentrating mesh cells in the regions of greatest interest. This increases the computational efficiency because the refinement is not required to extend throughout the entire flow field.
DOT National Transportation Integrated Search
1994-12-01
This report summarizes the results of a 3-year research project to develop reliable algorithms for the detection of motor vehicle driver impairment due to drowsiness. These algorithms are based on driving performance measures that can potentially be ...
Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Martin, Daniel F.
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
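The key property of the constrained-transport update is easy to demonstrate: when face-centered fields are updated from edge EMFs via Stokes' theorem, the discrete divergence of B cannot change. The 2D toy below checks this with a random EMF field; the grid sizes and random data are arbitrary, and the sketch omits everything else in the scheme (PPM, CTU, AMR synchronization).

```python
import numpy as np

# Constrained-transport idea in 2D: face-centered B updated from corner EMFs
# preserves the discrete divergence of B to machine precision. The EMF field
# here is random (hypothetical), since only the update structure matters.

rng = np.random.default_rng(2)
nx, ny, dx, dy, dt = 16, 16, 1.0, 1.0, 0.1

bx = rng.standard_normal((nx + 1, ny))       # Bx on x-faces
by = rng.standard_normal((nx, ny + 1))       # By on y-faces
ez = rng.standard_normal((nx + 1, ny + 1))   # EMF on cell corners (z-edges)

def div_b(bx, by):
    # Discrete cell-centered divergence from face-centered components.
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy

d0 = div_b(bx, by)
# Induction equation in 2D: dBx/dt = -dEz/dy, dBy/dt = +dEz/dx (Stokes on faces)
bx += -dt * (ez[:, 1:] - ez[:, :-1]) / dy
by += +dt * (ez[1:, :] - ez[:-1, :]) / dx
print(np.max(np.abs(div_b(bx, by) - d0)))    # ~1e-15: divergence unchanged
```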
Adaptive Grid Refinement for Atmospheric Boundary Layer Simulations
NASA Astrophysics Data System (ADS)
van Hooft, Antoon; van Heerwaarden, Chiel; Popinet, Stephane; van der linden, Steven; de Roode, Stephan; van de Wiel, Bas
2017-04-01
We validate and benchmark an adaptive mesh refinement (AMR) algorithm for numerical simulations of the atmospheric boundary layer (ABL). The AMR technique aims to distribute the computational resources efficiently over a domain by refining and coarsening the numerical grid locally and in time. This can be beneficial for studying cases in which length scales vary significantly in time and space. We present the results for a case describing the growth and decay of a convective boundary layer. The AMR results are benchmarked against two runs using a fixed, fine meshed grid. First, with the same numerical formulation as the AMR-code and second, with a code dedicated to ABL studies. Compared to the fixed and isotropic grid runs, the AMR algorithm can coarsen and refine the grid such that accurate results are obtained whilst using only a fraction of the grid cells. Performance wise, the AMR run was cheaper than the fixed and isotropic grid run with similar numerical formulations. However, for this specific case, the dedicated code outperformed both aforementioned runs.
The ranking algorithm of the Coach browser for the UMLS metathesaurus.
Harbourt, A. M.; Syed, E. J.; Hole, W. T.; Kingsland, L. C.
1993-01-01
This paper presents the novel ranking algorithm of the Coach Metathesaurus browser which is a major module of the Coach expert search refinement program. An example shows how the ranking algorithm can assist in creating a list of candidate terms useful in augmenting a suboptimal Grateful Med search of MEDLINE. PMID:8130570
Text Extraction from Scene Images by Character Appearance and Structure Modeling
Yi, Chucai; Tian, Yingli
2012-01-01
In this paper, we propose a novel algorithm to detect text information from natural scene images. Scene text classification and detection are still open research topics. Our proposed algorithm is able to model both character appearance and structure to generate representative and discriminative text descriptors. The contributions of this paper include three aspects: 1) a new character appearance model by a structure correlation algorithm which extracts discriminative appearance features from detected interest points of character samples; 2) a new text descriptor based on structons and correlatons, which model character structure by structure differences among character samples and structure component co-occurrence; and 3) a new text region localization method by combining color decomposition, character contour refinement, and string line alignment to localize character candidates and refine detected text regions. We perform three groups of experiments to evaluate the effectiveness of our proposed algorithm, including text classification, text detection, and character identification. The evaluation results on benchmark datasets demonstrate that our algorithm achieves the state-of-the-art performance on scene text classification and detection, and significantly outperforms the existing algorithms for character identification. PMID:23316111
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information.
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive pairwise comparison of transitions, O(n²). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms.
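A simplified rendering of the two-phase idea is sketched below: backward depth is taken here as the shortest distance from a state to an accepting state over reversed transitions (states with different depths cannot be language-equivalent), and blocks are then refined by hashing transition signatures. This assumes a complete DFA and illustrates the strategy, not the authors' implementation.

```python
from collections import deque, defaultdict

# Two-phase DFA minimization sketch: coarse partition by backward depth,
# then refinement of blocks via transition-signature hashing. Assumes a
# complete DFA; a simplified illustration, not the paper's exact algorithm.

def minimize(states, alphabet, delta, accepting):
    # Phase 1: backward depth via BFS over reversed edges.
    rev = defaultdict(list)
    for (s, a), t in delta.items():
        rev[t].append(s)
    depth = {s: 0 for s in accepting}
    queue = deque(accepting)
    while queue:
        t = queue.popleft()
        for s in rev[t]:
            if s not in depth:
                depth[s] = depth[t] + 1
                queue.append(s)
    block = {s: depth.get(s, -1) for s in states}   # -1: never accepts
    # Phase 2: refine blocks by hashing (block, transition-target-blocks).
    while True:
        sig = {s: (block[s], tuple(block[delta[(s, a)]] for a in alphabet))
               for s in states}
        relabel = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new_block = {s: relabel[sig[s]] for s in states}
        if new_block == block:
            return block               # state -> minimized-state id
        block = new_block

# Toy DFA over {a, b}: states 1 and 2 are equivalent and get merged.
states, alphabet, accepting = {0, 1, 2, 3}, ["a", "b"], {3}
delta = {(0, "a"): 1, (0, "b"): 2, (1, "a"): 3, (1, "b"): 0,
         (2, "a"): 3, (2, "b"): 0, (3, "a"): 3, (3, "b"): 3}
print(minimize(states, alphabet, delta, accepting))
```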
Early Performance Results from the GOES-R Product Generation System
NASA Astrophysics Data System (ADS)
Marley, S.; Weiner, A.; Kalluri, S. N.; Hansen, D.; Dittberner, G.
2013-12-01
Enhancements to remote sensing capabilities for the next generation of Geostationary Operational Environmental Satellites (GOES R-series), scheduled to be launched in 2015, require high performance computing capabilities to output meteorological observations and products at low latency compared to the legacy processing systems. The GOES R-series (GOES-R, -S, -T, and -U) represents a generational change in both spacecraft and instrument capability, and the GOES Re-Broadcast (GRB) data, which contain calibrated and navigated radiances from all the instruments, will be at a data rate of 31 Mb/sec compared to the current 2.11 Mb/sec from existing GOES satellites. To keep up with the data processing rates, the Product Generation (PG) system in the ground segment is designed on a Service Based Architecture (SBA). Each algorithm is executed as a service and subscribes to the data it needs to create higher level products via an enterprise service bus. Various levels of product data are published and retrieved from a data fabric. Together, the SBA and the data fabric provide a flexible, scalable, high performance architecture that meets the needs of product processing now and can grow to accommodate new algorithms in the future. The algorithms are linked together in a precedence chain starting from Level 0 to Level 1b and higher order Level 2 products that are distributed to data distribution nodes for external users. Qualification testing on the PG system has so far been completed for more than half of the product algorithms.
The slip-and-slide algorithm: a refinement protocol for detector geometry
Ginn, Helen Mary; Stuart, David Ian
2017-01-01
Geometry correction is traditionally plagued by mis-fitting of correlated parameters, leading to local minima which prevent further improvements. Segmented detectors pose an enhanced risk of mis-fitting: even a minor confusion of detector distance and panel separation can prevent improvement in data quality. The slip-and-slide algorithm breaks down effects of the correlated parameters and their associated target functions in a fundamental shift in the approach to the problem. Parameters are never refined against the components of the data to which they are insensitive, providing a dramatic boost in the exploitation of information from a very small number of diffraction patterns. This algorithm can be applied to exploit the adherence of the spot-finding results prior to indexing to a given lattice using unit-cell dimensions as a restraint. Alternatively, it can be applied to the predicted spot locations and the observed reflection positions after indexing from a smaller number of images. Thus, the indexing rate can be boosted by 5.8% using geometry refinement from only 125 indexed patterns or 500 unindexed patterns. In one example of cypovirus type 17 polyhedrin diffraction at the Linac Coherent Light Source, this geometry refinement reveals a detector tilt of 0.3° (resulting in a maximal Z-axis error of ∼0.5 mm from an average detector distance of ∼90 mm) whilst treating all panels independently. Re-indexing and integrating with updated detector geometry reduces systematic errors providing a boost in anomalous signal of sulfur atoms by 20%. Due to the refinement of decoupled parameters, this geometry method also reaches convergence. PMID:29091058
A refined methodology for modeling volume quantification performance in CT
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Wilson, Joshua; Samei, Ehsan
2014-03-01
The utility of a CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.
Moghadasi, Mohammad; Kozakov, Dima; Mamonov, Artem B.; Vakili, Pirooz; Vajda, Sandor; Paschalidis, Ioannis Ch.
2013-01-01
We introduce a message-passing algorithm to solve the Side Chain Positioning (SCP) problem. SCP is a crucial component of protein docking refinement, which is a key step of an important class of problems in computational structural biology called protein docking. We model SCP as a combinatorial optimization problem and formulate it as a Maximum Weighted Independent Set (MWIS) problem. We then employ a modified and convergent belief-propagation algorithm to solve a relaxation of MWIS and develop randomized estimation heuristics that use the relaxed solution to obtain an effective MWIS feasible solution. Using a benchmark set of protein complexes we demonstrate that our approach leads to more accurate docking predictions compared to a baseline algorithm that does not solve the SCP. PMID:23515575
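To make the MWIS formulation concrete, the sketch below solves a toy instance with a classical greedy heuristic (repeatedly pick the node maximizing weight/(degree+1) and delete its neighbours). This is explicitly a baseline stand-in, not the paper's convergent belief-propagation method, and the rotamer-flavoured node names are invented.

```python
# The MWIS formulation in miniature, solved with a classical greedy
# heuristic rather than the paper's belief-propagation approach. Nodes could
# be candidate side-chain rotamers; edges could be clash constraints.

def greedy_mwis(weights, edges):
    """weights: {node: w}; edges: set of frozenset node pairs."""
    adj = {u: set() for u in weights}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    alive, chosen = set(weights), []
    while alive:
        # Pick the surviving node with the best weight/(degree+1) ratio.
        u = max(alive, key=lambda n: weights[n] / (len(adj[n] & alive) + 1))
        chosen.append(u)
        alive -= adj[u] | {u}     # remove it and its neighbours
    return chosen

w = {"r1": 3.0, "r2": 2.0, "r3": 2.0, "r4": 1.0}
e = {frozenset(("r1", "r2")), frozenset(("r2", "r3")), frozenset(("r3", "r4"))}
print(greedy_mwis(w, e))   # ['r1', 'r3']: independent, total weight 5.0
```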
Possible quantum algorithm for the Lipshitz-Sarkar-Steenrod square for Khovanov homology
NASA Astrophysics Data System (ADS)
Ospina, Juan
2013-05-01
Recently the celebrated Khovanov homology was introduced as a target for topological quantum computation, given that Khovanov homology provides a generalization of the Jones polynomial, and it is then possible to think about a generalization of the Aharonov-Jones-Landau algorithm. Recently, Lipshitz and Sarkar introduced a space-level refinement of Khovanov homology, which is called Khovanov homotopy. This refinement induces a Steenrod square operation Sq2 on Khovanov homology, which they describe explicitly, and some computations of Sq2 were presented. In particular, examples of links with identical integral Khovanov homology but with distinct Khovanov homotopy types were shown. In the present work we introduce possible quantum algorithms for the Lipshitz-Sarkar-Steenrod square for Khovanov homology and their possible simulations using computer algebra.
NASA Technical Reports Server (NTRS)
Schultz, Christopher J.; Carey, Larry; Cecil, Dan; Bateman, Monte; Stano, Geoffrey; Goodman, Steve
2012-01-01
The objective of this project is to refine, adapt and demonstrate the Lightning Jump Algorithm (LJA) for transition to GOES-R GLM (Geostationary Lightning Mapper) readiness and to establish a path to operations. Ongoing work includes reducing risk in the GLM lightning proxy, cell tracking, LJA algorithm automation, and data fusion (e.g., radar + lightning).
NASA Astrophysics Data System (ADS)
Reyes López, Yaidel; Roose, Dirk; Recarey Morfa, Carlos
2013-05-01
In this paper, we present a dynamic refinement algorithm for the smoothed particle hydrodynamics (SPH) method. An SPH particle is refined by replacing it with smaller daughter particles, whose positions are calculated using a square pattern centered at the position of the refined particle. We determine both the optimal separation and the smoothing distance of the new particles such that the error produced by the refinement in the gradient of the kernel is small and possible numerical instabilities are reduced. We implemented the dynamic refinement procedure in two different models: one for free-surface flows, and one for post-failure flow of non-cohesive soil. The results obtained for the test problems indicate that the dynamic refinement procedure provides a good trade-off between the accuracy and the cost of the simulations.
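A minimal sketch of the square-pattern split follows: a parent particle is replaced by four daughters on a square centred at its position, with mass divided evenly. The separation factor and the smoothing-length ratio are the quantities the paper optimizes; the values used here are placeholders, not the authors' optima.

```python
import numpy as np

# Square-pattern particle split: a parent SPH particle becomes four daughters
# on a square centred at the parent. The separation factor eps and the
# smoothing-length ratio alpha below are illustrative placeholders, not the
# optimized values derived in the paper.

def refine_particle(x, h, m, eps=0.4, alpha=0.6):
    """Return positions, smoothing length and mass of the 4 daughters."""
    offsets = eps * h * np.array([[1, 1], [1, -1],
                                  [-1, 1], [-1, -1]]) / np.sqrt(2)
    daughters = x + offsets                 # square pattern around the parent
    return daughters, alpha * h, m / 4.0    # total mass conserved exactly

pos, h_new, m_new = refine_particle(x=np.array([0.0, 0.0]), h=0.1, m=1.0)
print(pos, h_new, m_new)
```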
Jorge, Antonio José Lagoeiro; Freire, Monica Di Calafiori; Ribeiro, Mário Luiz; Fernandes, Luiz Cláudio Maluhy; Lanzieri, Pedro Gemal; Jorge, Bruno Afonso Lagoeiro; Lage, João Gabriel B; Rosa, Maria Luiza Garcia; Mesquita, Evandro Tinoco
2013-09-01
Heart failure with preserved ejection fraction (HFPEF) is a highly prevalent syndrome that is difficult to diagnose in outpatients. The measurement of B-type natriuretic peptide (BNP) may be useful in the diagnosis of HFPEF, but with a different cutoff from that used in the emergency room. The aim of this study was to identify the BNP cutoff for a diagnosis of HFPEF in outpatients. This prospective, observational study enrolled 161 outpatients (aged 68.1±11.5 years, 72% female) with suspected HFPEF. Patients underwent ECG, tissue Doppler imaging, and plasma BNP measurement, and were classified in accordance with algorithms for the diagnosis of HFPEF. HFPEF was confirmed in 49 patients, who presented higher BNP values (mean 144.4 pg/ml, median 113 pg/ml, vs. mean 27.6 pg/ml, median 16.7 pg/ml, p<0.0001). The results showed a significant correlation between BNP levels and left atrial volume index (r=0.554, p<0.0001), age (r=0.452, p<0.0001) and E/E' ratio (r=0.345, p<0.0001). The area under the ROC curve for BNP to detect HFPEF was 0.92 (95% confidence interval: 0.87-0.96; p<0.001), and 51 pg/ml was identified as the best cutoff to detect HFPEF, with sensitivity of 86%, specificity of 86% and accuracy of 86%. BNP levels in outpatients with HFPEF are significantly higher than in those without. A cutoff value of 51 pg/ml had the best diagnostic accuracy in outpatients. Copyright © 2012 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.
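The cutoff selection described here is a standard ROC analysis; a minimal sketch using the Youden index (sensitivity + specificity - 1) as the optimality criterion, which is one common choice and an assumption on our part, since the abstract does not name the criterion:

```python
import numpy as np

def best_cutoff(values, labels):
    """Pick the threshold maximizing the Youden index.

    values: 1D array of BNP measurements; labels: 1 if HFPEF was
    confirmed, else 0. Returns (cutoff, sensitivity, specificity).
    """
    values, labels = np.asarray(values, float), np.asarray(labels, int)
    best = (None, -1.0, None, None)
    for c in np.unique(values):
        pred = values >= c                    # classify as HFPEF
        sens = np.mean(pred[labels == 1])     # true positive rate
        spec = np.mean(~pred[labels == 0])    # true negative rate
        j = sens + spec - 1.0
        if j > best[1]:
            best = (c, j, sens, spec)
    return best[0], best[2], best[3]
```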
Using Induction to Refine Information Retrieval Strategies
NASA Technical Reports Server (NTRS)
Baudin, Catherine; Pell, Barney; Kedar, Smadar
1994-01-01
Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increasing recall), but at the expense of returning additional irrelevant documents (decreasing precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by applying standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
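A minimal sketch of the induction step under stated assumptions: query/retrieval pairs are encoded as feature vectors (which heuristic strategy fired, match scores, and so on, all hypothetical feature names) with user relevance feedback as the label; scikit-learn's DecisionTreeClassifier stands in for the paper's unspecified induction algorithm:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features for each query/retrieval pair, e.g.
# [strategy_id, index_match_score, num_terms_matched]
X = [[0, 0.9, 3], [0, 0.2, 1], [1, 0.8, 2], [1, 0.1, 1]]
y = [1, 0, 1, 0]  # user feedback: 1 = relevant, 0 = irrelevant

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The induced tree can be inspected and folded back into the
# retrieval strategies as new filtering conditions.
print(export_text(tree, feature_names=["strategy", "score", "terms"]))
```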
An exact peak capturing and essentially oscillation-free (EPCOF) algorithm, consisting of advection-dispersion decoupling, backward method of characteristics, forward node tracking, and adaptive local grid refinement, is developed to solve transport equations. This algorithm repr...
Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.
Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming
2017-01-01
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heberle, Frederick A; Pan, Jianjun; Standaert, Robert F
2012-01-01
Some of our recent work has resulted in the detailed structures of fully hydrated, fluid phase phosphatidylcholine (PC) and phosphatidylglycerol (PG) bilayers. These structures were obtained from the joint refinement of small-angle neutron and X-ray data using the scattering density profile (SDP) models developed by Kučerka et al. (Kučerka et al. 2012; Kučerka et al. 2008). In this review, we first discuss models for the standalone analysis of neutron or X-ray scattering data from bilayers, and assess the strengths and weaknesses inherent in these models. In particular, it is recognized that standalone data do not contain enough information to fully resolve the structure of inherently disordered fluid bilayers, and therefore may not provide a robust determination of bilayer structural parameters, including the much sought after area per lipid. We then discuss the development of matter density-based models (including the SDP model) that allow for the joint refinement of different contrast neutron and X-ray data sets, as well as the implementation of local volume conservation in the unit cell (i.e., ideal packing). Such models provide natural definitions of bilayer thicknesses (most importantly the hydrophobic and Luzzati thicknesses) in terms of Gibbs dividing surfaces, and thus allow for the robust determination of lipid areas through equivalent slab relationships between bilayer thickness and lipid volume. In the final section of this review, we discuss some of the significant findings/features pertaining to structures of PC and PG bilayers as determined from SDP model analyses.
Fast-kick-off monotonically convergent algorithm for searching optimal control fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Sheng-Lun; Ho, Tak-San; Rabitz, Herschel
2011-09-15
This Rapid Communication presents a fast-kick-off search algorithm for quickly finding optimal control fields in state-to-state transition probability control problems, especially those with poorly chosen initial control fields. The algorithm is based on a recently formulated monotonically convergent scheme [T.-S. Ho and H. Rabitz, Phys. Rev. E 82, 026703 (2010)]. Specifically, the local temporal refinement of the control field at each iteration is weighted by a fractional inverse power of the instantaneous overlap of the backward-propagating wave function, associated with the target state and the control field from the previous iteration, and the forward-propagating wave function, associated with the initial state and the concurrently refining control field. Extensive numerical simulations for controls of vibrational transitions and ultrafast electron tunneling show that the new algorithm not only greatly improves the search efficiency but also is able to attain good monotonic convergence quality when further frequency constraints are required. The algorithm is particularly effective when the corresponding control dynamics involves a large number of energy levels or ultrashort control pulses.
NASA Technical Reports Server (NTRS)
Key, Jeff; Maslanik, James; Steffen, Konrad
1995-01-01
During the second phase project year we have made progress in the development and refinement of surface temperature retrieval algorithms and in product generation. More specifically, we have accomplished the following: (1) acquired a new advanced very high resolution radiometer (AVHRR) data set for the Beaufort Sea area spanning an entire year; (2) acquired additional along-track scanning radiometer (ATSR) data for the Arctic and Antarctic now totalling over eight months; (3) refined our AVHRR Arctic and Antarctic ice surface temperature (IST) retrieval algorithm, including work specific to Greenland; (4) developed ATSR retrieval algorithms for the Arctic and Antarctic, including work specific to Greenland; (5) developed cloud masking procedures for both AVHRR and ATSR; (6) generated a two-week bi-polar global area coverage (GAC) set of composite images from which IST is being estimated; (7) investigated the effects of clouds and the atmosphere on passive microwave 'surface' temperature retrieval algorithms; and (8) generated surface temperatures for the Beaufort Sea data set, both from AVHRR and special sensor microwave imager (SSM/I).
Two-Step Approach for the Prediction of Future Type 2 Diabetes Risk
Abdul-Ghani, Muhammad A.; Abdul-Ghani, Tamam; Stern, Michael P.; Karavic, Jasmina; Tuomi, Tiinamaija; Isomaa, Bo; DeFronzo, Ralph A.; Groop, Leif
2011-01-01
OBJECTIVE To develop a model for the prediction of type 2 diabetes mellitus (T2DM) risk on the basis of a multivariate logistic model and 1-h plasma glucose concentration (1-h PG). RESEARCH DESIGN AND METHODS The model was developed in a cohort of 1,562 nondiabetic subjects from the San Antonio Heart Study (SAHS) and validated in 2,395 nondiabetic subjects in the Botnia Study. A risk score on the basis of anthropometric parameters, plasma glucose and lipid profile, and blood pressure was computed for each subject. Subjects with a risk score above a certain cut point were considered to represent high-risk individuals, and their 1-h PG concentration during the oral glucose tolerance test was used to further refine their future T2DM risk. RESULTS We used the San Antonio Diabetes Prediction Model (SADPM) to generate the initial risk score. A risk-score value of 0.065 was found to be an optimal cut point for initial screening and selection of high-risk individuals. A 1-h PG concentration >140 mg/dL in high-risk individuals (whose risk score was >0.065) was the optimal cut point for identification of subjects at increased risk. The two cut points had sensitivity, specificity, and positive predictive value of 77.8, 77.4, and 44.8% in the SAHS and 75.8, 71.6, and 11.9% in the Botnia Study, respectively. CONCLUSIONS A two-step model, based on the combination of the SADPM and 1-h PG, is a useful tool for the identification of high-risk Mexican-American and Caucasian individuals. PMID:21788628
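The decision logic is simple enough to state as code; a sketch of the two-step rule using the published cut points (the risk-score computation itself, SADPM, is a published multivariate logistic model whose coefficients are not reproduced here, so `sadpm_score` is an assumed input):

```python
def t2dm_risk_category(sadpm_score, one_hour_pg_mg_dl=None):
    """Two-step screen: SADPM risk score, then 1-h plasma glucose.

    Step 1: scores <= 0.065 are screened out as low risk.
    Step 2: among high-risk subjects, 1-h PG > 140 mg/dL flags
    increased risk of future type 2 diabetes.
    """
    if sadpm_score <= 0.065:
        return "low risk"
    if one_hour_pg_mg_dl is None:
        return "high risk: perform OGTT and measure 1-h PG"
    return "increased risk" if one_hour_pg_mg_dl > 140 else "high risk, 1-h PG normal"
```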
Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan
2016-04-22
The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.
Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information
Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing
2016-01-01
Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft's algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
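A compact illustration of the two-phase idea under stated assumptions; this is a simplified partition-refinement sketch in the spirit of the paper, not the authors' exact procedure. States are first coarsely partitioned (here only by acceptance; the paper's backward depth information would give a finer start), then refined by hashing transition signatures until stable:

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Partition refinement with hashed transition signatures.

    states: iterable of state ids; delta: dict (state, symbol) -> state;
    accepting: set of accepting states. Returns the final partition as
    a dict state -> block id.
    """
    states = list(states)
    # Coarse partition: accepting vs. non-accepting.
    block = {s: (s in accepting) for s in states}
    while True:
        # Signature of a state: its block plus the blocks reached on
        # each symbol; hashing signatures groups states in O(n) time.
        sig = {s: (block[s],) + tuple(block[delta[(s, a)]] for a in alphabet)
               for s in states}
        new_ids = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new_block = {s: new_ids[sig[s]] for s in states}
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block  # partition is stable
        block = new_block
```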
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.
Road extraction from aerial images using a region competition algorithm.
Amo, Miriam; Martínez, Fernando; Torre, Margarita
2006-05-01
In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and we also provide some clues concerning the placement of the points required by the algorithm. The initial points are analyzed in order to find out whether it is necessary to add more initial points, and this process is based on image information. Not only is the algorithm able to obtain the road centerline, but it also recovers the road sides. An initial simple model is deformed by using region growing techniques to obtain a rough road approximation. This model is then refined by region competition. The result of this approach is that it delivers the simplest output vector information, fully recovering the road details as they are on the image, without performing any kind of symbolization. We therefore refine a general road model by using a reliable method to detect transitions between regions. This method is proposed in order to obtain information for feeding a large-scale Geographic Information System.
Modeling flow at the nozzle of a solid rocket motor
NASA Technical Reports Server (NTRS)
Chow, Alan S.; Jin, Kang-Ren
1991-01-01
The mechanical behavior of a rocket motor internal flow field is described by a system of nonlinear partial differential equations which can be solved numerically. The accuracy and the convergence of the solution of the system of equations depend largely on how precisely the sharp gradients can be resolved. An adaptive grid generation scheme is incorporated into the computer algorithm to enhance the capability of numerical modeling. With this scheme, the grid is refined as the solution evolves. This scheme significantly improves the methodology of solving flow problems in rocket nozzles by putting the refinement part of grid generation into the computer algorithm.
Crash testing difference-smoothing algorithm on a large sample of simulated light curves from TDC1
NASA Astrophysics Data System (ADS)
Rathna Kumar, S.
2017-09-01
In this work, we propose refinements to the difference-smoothing algorithm for the measurement of time delay from the light curves of the images of a gravitationally lensed quasar. The refinements mainly consist of a more pragmatic approach to choosing the smoothing time-scale free parameter, generation of more realistic synthetic light curves for the estimation of time delay uncertainty, and use of a plot of normalized χ² computed over a wide range of trial time delay values to assess the reliability of a measured time delay and to identify instances of catastrophic failure. We rigorously tested the difference-smoothing algorithm on a large sample of more than a thousand pairs of simulated light curves having known true time delays between them from the two most difficult 'rungs' (rung3 and rung4) of the first edition of the Strong Lens Time Delay Challenge (TDC1) and found an inherent tendency of the algorithm to measure the magnitude of the time delay to be higher than its true value. However, we find that this systematic bias is eliminated by applying a correction to each measured time delay according to the magnitude and sign of the systematic error inferred by applying the time delay estimator to synthetic light curves simulating the measured time delay. Following these refinements, the TDC performance metrics for the difference-smoothing algorithm are found to be competitive with those of the best performing submissions of TDC1 for both the tested 'rungs'. The MATLAB codes used in this work and the detailed results are made publicly available.
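The bias-correction step lends itself to a short sketch, under stated assumptions: `estimate_delay` stands for the difference-smoothing estimator applied to a light-curve pair, and `simulate_pair` generates synthetic light curves with a known injected delay, as the paper does for uncertainty estimation; both names are placeholders:

```python
import numpy as np

def corrected_delay(pair, estimate_delay, simulate_pair, n_sims=100):
    """Debias a measured time delay using synthetic light curves.

    The systematic error is inferred by injecting the measured delay
    into simulated pairs, re-measuring it, and subtracting the mean
    offset (magnitude and sign) from the original measurement.
    """
    measured = estimate_delay(pair)
    recovered = np.array([estimate_delay(simulate_pair(measured))
                          for _ in range(n_sims)])
    bias = recovered.mean() - measured
    return measured - bias
```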
INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL
The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...
A density based algorithm to detect cavities and holes from planar points
NASA Astrophysics Data System (ADS)
Zhu, Jie; Sun, Yizhong; Pang, Yueyong
2017-12-01
Delaunay-based shape reconstruction algorithms are widely used in approximating the shape from planar points. However, these algorithms cannot ensure the optimality of varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of the Delaunay triangulation. Our algorithm is mainly divided into two steps, namely, rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure with a low-density region surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation called the compactness of a point, formed from the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a gradient change in the compactness over the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes for varying point set densities and distributions.
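A sketch of one plausible reading of the compactness measure (our interpretation: the spread of lengths of a point's incident Delaunay edges, normalized by their mean; the paper's exact formula may differ), assuming `points` is an (n, 2) NumPy array:

```python
import numpy as np
from scipy.spatial import Delaunay

def point_compactness(points):
    """Per-point length variation of incident Delaunay edges.

    Low values = locally uniform edge lengths (interior of a dense
    region); high values = the point borders a low-density region,
    i.e. a candidate cavity/hole boundary.
    """
    tri = Delaunay(points)
    incident = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:
        for a in simplex:
            for b in simplex:
                if a != b:
                    incident[a].add(b)
    comp = np.zeros(len(points))
    for i, nbrs in incident.items():
        lengths = np.linalg.norm(points[list(nbrs)] - points[i], axis=1)
        comp[i] = lengths.std() / lengths.mean()
    return comp
```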
Incremental triangulation by way of edge swapping and local optimization
NASA Technical Reports Server (NTRS)
Wiltberger, N. Lyn
1994-01-01
This document is intended to serve as an installation, usage, and basic theory guide for the two-dimensional triangulation software 'HARLEY' written for the Silicon Graphics IRIS workstation. This code consists of an incremental triangulation algorithm based on point insertion and local edge swapping. Using this basic strategy, several types of triangulations can be produced depending on user-selected options. For example, local edge swapping criteria can be chosen which minimize the maximum interior angle (a MinMax triangulation) or which maximize the minimum interior angle (a MaxMin or Delaunay triangulation). It should be noted that the MinMax triangulation is generally only locally optimal (not globally optimal) in this measure. The MaxMin triangulation, however, is both locally and globally optimal. In addition, Steiner triangulations can be constructed by inserting new sites at triangle circumcenters followed by edge swapping based on the MaxMin criteria. Incremental insertion of sites also provides flexibility in choosing cell refinement criteria. A dynamic heap structure has been implemented in the code so that once a refinement measure is specified (i.e., maximum aspect ratio or some measure of a solution gradient for solution adaptive grid generation) the cell with the largest value of this measure is continually removed from the top of the heap and refined. The heap refinement strategy allows the user to specify either the number of cells desired or to refine the mesh until all cell refinement measures satisfy a user-specified tolerance level. Since the dynamic heap structure is constantly updated, the algorithm always refines the particular cell in the mesh with the largest refinement criteria value. The code allows the user to: triangulate a cloud of prespecified points (sites), triangulate a set of prespecified interior points constrained by prespecified boundary curve(s), Steiner triangulate the interior/exterior of prespecified boundary curve(s), refine existing triangulations based on solution error measures, and partition meshes based on the Cuthill-McKee, spectral, and coordinate bisection strategies.
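The dynamic-heap refinement strategy reduces to a few lines; a sketch assuming a `measure(cell)` function (e.g. aspect ratio or a solution-gradient indicator) and a `refine(cell)` step that returns the new cells. Python's heapq is a min-heap, so the measure is negated to always pop the worst cell first:

```python
import heapq

def heap_refine(cells, measure, refine, tol=None, max_cells=None):
    """Repeatedly refine the cell whose refinement measure is largest.

    Stops when every cell's measure is below tol, or when the mesh
    reaches max_cells, mirroring the two stopping rules in the text.
    """
    heap = [(-measure(c), i, c) for i, c in enumerate(cells)]  # max-heap via negation
    heapq.heapify(heap)
    uid = len(heap)  # tiebreaker so cells themselves are never compared
    while heap:
        neg_m, _, _ = heap[0]                       # peek at worst cell
        if tol is not None and -neg_m <= tol:
            break
        if max_cells is not None and len(heap) >= max_cells:
            break
        _, _, cell = heapq.heappop(heap)
        for child in refine(cell):                  # children replace the parent
            heapq.heappush(heap, (-measure(child), uid, child))
            uid += 1
    return [c for _, _, c in heap]
```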
Use of a Rapid Ethylene Glycol Assay: a 4-Year Retrospective Study at an Academic Medical Center.
Rooney, Sydney L; Ehlers, Alexandra; Morris, Cory; Drees, Denny; Davis, Scott R; Kulhavy, Jeff; Krasowski, Matthew D
2016-06-01
Ethylene glycol (EG) is a common cause of toxic ingestions. Gas chromatography (GC)-based laboratory assays are the gold standard for diagnosing EG intoxication. However, GC requires specialized instrumentation and technical expertise that limits feasibility for many clinical laboratories. The objective of this retrospective study was to determine the utility of incorporating a rapid EG assay for management of cases with suspected EG poisoning. The University of Iowa Hospitals and Clinics core clinical laboratory adapted a veterinary EG assay (Catachem, Inc.) for the Roche Diagnostics cobas 8000 c502 analyzer and incorporated this assay in an osmolal gap-based algorithm for potential toxic alcohol/glycol ingestions. The main limitation is that high concentrations of propylene glycol (PG), while readily identifiable by reaction rate kinetics, can interfere with EG measurement. The clinical laboratory had the ability to perform GC for EG and PG, if needed. A total of 222 rapid EG and 24 EG/PG GC analyses were documented in 106 patient encounters. Of ten confirmed EG ingestions, eight cases were managed entirely with the rapid EG assay. PG interference was evident in 25 samples, leading to 8 GC analyses to rule out the presence of EG. Chart review of cases with negative rapid EG assay results showed no evidence of false negatives. The results of this study highlight the use of incorporating a rapid EG assay for the diagnosis and management of suspected EG toxicity by decreasing the reliance on GC. Future improvements would involve rapid EG assays that completely avoid interference by PG.
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source location estimate by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it is able to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, Zachary C.; Richard, Ryan M.; Herbert, John M., E-mail: herbert@chemistry.ohio-state.edu
2013-12-28
An implementation of Ewald summation for use in mixed quantum mechanics/molecular mechanics (QM/MM) calculations is presented, which builds upon previous work by others that was limited to semi-empirical electronic structure for the QM region. Unlike previous work, our implementation describes the wave function's periodic images using "ChElPG" atomic charges, which are determined by fitting to the QM electrostatic potential evaluated on a real-space grid. This implementation is stable even for large Gaussian basis sets with diffuse exponents, and is thus appropriate when the QM region is described by a correlated wave function. Derivatives of the ChElPG charges with respect to the QM density matrix are a potentially serious bottleneck in this approach, so we introduce a ChElPG algorithm based on atom-centered Lebedev grids. The ChElPG charges thus obtained exhibit good rotational invariance even for sparse grids, enabling significant cost savings. Detailed analysis of the optimal choice of user-selected Ewald parameters, as well as timing breakdowns, is presented.
Short-term prediction of chaotic time series by using RBF network with regression weights.
Rojas, I; Gonzalez, J; Cañas, A; Diaz, A F; Rojas, F J; Rodriguez, M
2000-10-01
We propose a framework for constructing and training a radial basis function (RBF) neural network. The structure of the Gaussian functions is modified using a pseudo-Gaussian function (PG) in which two scaling parameters sigma are introduced, which eliminates the symmetry restriction and provides the neurons in the hidden layer with greater flexibility with respect to function approximation. We propose a modified PG-BF (pseudo-Gaussian basis function) network in which regression weights are used to replace the constant weights in the output layer. For this purpose, a sequential learning algorithm is presented to adapt the structure of the network, in which it is possible to create a new hidden unit and also to detect and remove inactive units. A salient feature of the network is that the overall output is calculated as the weighted average of the outputs associated with each receptive field. The superior performance of the proposed PG-BF system over the standard RBF is illustrated using the problem of short-term prediction of chaotic time series.
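A sketch of the pseudo-Gaussian idea, assuming the asymmetry enters as separate left/right widths around the center (one common formulation; the paper's exact parameterization may differ), with the regression-weight output computed as the weighted average over receptive fields; all arguments are NumPy arrays:

```python
import numpy as np

def pseudo_gaussian(x, c, sigma_left, sigma_right):
    """Asymmetric Gaussian: different widths on each side of center c."""
    sigma = np.where(x < c, sigma_left, sigma_right)
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def pgbf_output(x, centers, s_left, s_right, w0, w1):
    """Weighted-average output with linear regression weights.

    Each hidden unit i contributes a local linear model
    (w0[i] + w1[i] * x) instead of a constant weight; the overall
    output is the activation-weighted average of these local models.
    """
    phi = np.array([pseudo_gaussian(x, c, sl, sr)
                    for c, sl, sr in zip(centers, s_left, s_right)])
    local = w0[:, None] + w1[:, None] * x
    return (phi * local).sum(axis=0) / phi.sum(axis=0)
```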
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leimkuhler, B.; Hermans, J.; Skeel, R.D.
A workshop was held on algorithms and parallel implementations for macromolecular dynamics, protein folding, and structural refinement. This document contains abstracts and brief reports from that workshop.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrisochoides, N.; Sukup, F.
In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, by preventing the need for global mesh refinement. Its implementation on distributed memory multicomputers using the traditional data-parallel model has been proven very inefficient due to excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate the synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimal overheads compared to the "best" sequential implementation of the BW algorithm.
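For reference, a minimal serial sketch of one Bowyer-Watson insertion step (the kernel that a task-parallel scheme would execute concurrently on independent cavities; the data layout below is our own, not the paper's): `points` is a list of (x, y) tuples and `triangles` a list of counter-clockwise index triples:

```python
from collections import Counter

def in_circumcircle(a, b, c, p):
    """True if p is strictly inside the circumcircle of CCW triangle abc."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    det = ((ax * ax + ay * ay) * (bx * cy - by * cx)
         - (bx * bx + by * by) * (ax * cy - ay * cx)
         + (cx * cx + cy * cy) * (ax * by - ay * bx))
    return det > 0.0

def bowyer_watson_insert(points, triangles, p_idx):
    """Insert point p_idx, retriangulating its cavity."""
    p = points[p_idx]
    bad = [t for t in triangles
           if in_circumcircle(points[t[0]], points[t[1]], points[t[2]], p)]
    # Cavity boundary = directed edges whose reverse is not also present.
    edge_count = Counter()
    for i, j, k in bad:
        for e in ((i, j), (j, k), (k, i)):
            edge_count[e] += 1
    for t in bad:
        triangles.remove(t)
    for (i, j) in edge_count:
        if (j, i) not in edge_count:
            triangles.append((i, j, p_idx))  # new CCW triangle
    return triangles
```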
Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm
NASA Astrophysics Data System (ADS)
Küchlin, Stephan; Jenny, Patrick
2018-06-01
Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm, which mitigates the near-continuum computational-cost deficiencies of pure DSMC. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time, and has a fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.
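The space-filling-curve property exploited here can be illustrated with a short sketch, assuming cells are identified by integer (i, j, k) indices: interleaving the index bits gives a Morton key, and cutting the key-sorted cell list into equal-work chunks yields a locality-preserving partition (a generic technique, not the authors' exact implementation):

```python
def morton_key(i, j, k, bits=10):
    """Interleave the bits of (i, j, k) into one Morton (Z-order) key."""
    key = 0
    for b in range(bits):
        key |= (((i >> b) & 1) << (3 * b)
              | ((j >> b) & 1) << (3 * b + 1)
              | ((k >> b) & 1) << (3 * b + 2))
    return key

def partition_cells(cells, weights, n_ranks):
    """Split Morton-sorted cells into n_ranks contiguous chunks of
    roughly equal total weight (e.g. particle counts per cell)."""
    order = sorted(range(len(cells)), key=lambda n: morton_key(*cells[n]))
    target = sum(weights) / n_ranks
    parts, chunk, acc = [], [], 0.0
    for n in order:
        chunk.append(cells[n])
        acc += weights[n]
        if acc >= target and len(parts) < n_ranks - 1:
            parts.append(chunk)
            chunk, acc = [], 0.0
    parts.append(chunk)
    return parts
```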
Some observations on mesh refinement schemes applied to shock wave phenomena
NASA Technical Reports Server (NTRS)
Quirk, James J.
1995-01-01
This workshop's double-wedge test problem is taken from one of a sequence of experiments which were performed in order to classify the various canonical interactions between a planar shock wave and a double wedge. Therefore to build up a reasonably broad picture of the performance of our mesh refinement algorithm we have simulated three of these experiments and not just the workshop case. Here, using the results from these simulations together with their experimental counterparts, we make some general observations concerning the development of mesh refinement schemes for shock wave phenomena.
Refined genetic algorithm -- Economic dispatch example
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheble, G.B.; Brittig, K.
1995-02-01
A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality. Thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
Array-based, parallel hierarchical mesh refinement algorithms for unstructured meshes
Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...
2016-08-18
In this paper, we describe an array-based hierarchical mesh refinement capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate the nested hierarchies from an initial coarse mesh that can be used for a variety of purposes, such as multigrid solvers/preconditioners, solution convergence and verification studies, and improving overall parallel efficiency by decreasing I/O bandwidth requirements (by loading smaller meshes and refining in memory). We also describe a high-order boundary reconstruction capability that can be used to project the new points after refinement using high-order approximations instead of linear projection, in order to minimize and provide more control over the geometrical errors introduced by curved boundaries. The capability is developed under the parallel unstructured mesh framework "Mesh Oriented dAtaBase" (MOAB; Tautges et al. (2004)). We describe the underlying data structures and algorithms to generate such hierarchies in parallel and present numerical results for computational efficiency and effect on mesh quality. Furthermore, we also present results to demonstrate the applicability of the developed capability to study convergence properties of different point projection schemes for various mesh hierarchies and to a multigrid finite-element solver for elliptic problems.
GPU implementation of prior image constrained compressed sensing (PICCS)
NASA Astrophysics Data System (ADS)
Nett, Brian E.; Tang, Jie; Chen, Guang-Hong
2010-04-01
The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image which is reconstructed via the standard Filtered Backprojection (FBP) reconstruction algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously ensuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications including: improved temporal resolution reconstruction, 4D respiratory phase-specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, as with other iterative algorithms, is the long computation time typically associated with reconstruction. For an algorithm to gain clinical acceptance, reconstruction must be achievable in minutes rather than hours. In this work the PICCS algorithm has been implemented on the GPU in order to significantly reduce its reconstruction time. The Compute Unified Device Architecture (CUDA) was used in this implementation.
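For orientation, the PICCS objective from the cited paper has the constrained-minimization form below; Ψ1 and Ψ2 are sparsifying transforms (typically gradient/total-variation operators), x_p the FBP prior image, and A and y the system matrix and measured projections. The notation is ours, and readers should consult the cited reference for the authoritative statement:

```latex
\min_{x}\; \alpha \,\bigl\| \Psi_1 (x - x_p) \bigr\|_1
         + (1-\alpha)\,\bigl\| \Psi_2\, x \bigr\|_1
\quad \text{subject to} \quad A x = y
```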
Block structured adaptive mesh and time refinement for hybrid, hyperbolic + N-body systems
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Colella, Phillip
2007-11-01
We present a new numerical algorithm for the solution of coupled collisional and collisionless systems, based on the block structured adaptive mesh and time refinement strategy (AMR). We describe the issues associated with the discretization of the system equations and the synchronization of the numerical solution on the hierarchy of grid levels. We implement a code based on a higher order, conservative and directionally unsplit Godunov’s method for hydrodynamics; a symmetric, time centered modified symplectic scheme for collisionless component; and a multilevel, multigrid relaxation algorithm for the elliptic equation coupling the two components. Numerical results that illustrate the accuracy of the code and the relative merit of various implemented schemes are also presented.
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
Gueddida, Saber; Yan, Zeyin; Kibalin, Iurii; Voufack, Ariste Bolivard; Claiser, Nicolas; Souhassou, Mohamed; Lecomte, Claude; Gillon, Béatrice; Gillet, Jean-Michel
2018-04-28
In this paper, we propose a simple cluster model with limited basis sets to reproduce the unpaired electron distributions in a YTiO3 ferromagnetic crystal. The spin-resolved one-electron-reduced density matrix is reconstructed simultaneously from theoretical magnetic structure factors and directional magnetic Compton profiles using our joint refinement algorithm. This algorithm is guided by the rescaling of basis functions and the adjustment of the spin population matrix. The resulting spin electron density in both position and momentum spaces from the joint refinement model is in agreement with theoretical and experimental results. Benefits brought by magnetic Compton profiles to the entire spin density matrix are illustrated. We studied the magnetic properties of the YTiO3 crystal along the Ti-O1-Ti bonding. We found that the basis functions are mostly rescaled by means of magnetic Compton profiles, while the molecular occupation numbers are mainly modified by the magnetic structure factors.
Fully implicit adaptive mesh refinement MHD algorithm
NASA Astrophysics Data System (ADS)
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations will be presented on a variety of problems.
Quinlan, Scott C; Cheng, Wendy Y; Ishihara, Lianna; Irizarry, Michael C; Holick, Crystal N; Duh, Mei Sheng
2016-04-01
The aim of this study was to develop and validate an insurance claims-based algorithm for identifying urinary retention (UR) in epilepsy patients receiving antiepileptic drugs to facilitate safety monitoring. Data from the HealthCore Integrated Research Database(SM) in 2008-2011 (retrospective) and 2012-2013 (prospective) were used to identify epilepsy patients with UR. During the retrospective phase, three algorithms identified potential UR: (i) UR diagnosis code with a catheterization procedure code; (ii) UR diagnosis code alone; or (iii) diagnosis with UR-related symptoms. Medical records for 50 randomly selected patients satisfying ≥1 algorithm were reviewed by urologists to ascertain UR status. Positive predictive value (PPV) and 95% confidence intervals (CI) were calculated for the three component algorithms and the overall algorithm (defined as satisfying ≥1 component algorithms). Algorithms were refined using urologist review notes. In the prospective phase, the UR algorithm was refined using medical records for an additional 150 cases. In the retrospective phase, the PPV of the overall algorithm was 72.0% (95%CI: 57.5-83.8%). Algorithm 3 performed poorly and was dropped. Algorithm 1 was unchanged; urinary incontinence and cystitis were added as exclusionary diagnoses to Algorithm 2. The PPV for the modified overall algorithm was 89.2% (74.6-97.0%). In the prospective phase, the PPV for the modified overall algorithm was 76.0% (68.4-82.6%). Upon adding overactive bladder, nocturia and urinary frequency as exclusionary diagnoses, the PPV for the final overall algorithm was 81.9% (73.7-88.4%). The current UR algorithm yielded a PPV > 80% and could be used for more accurate identification of UR among epilepsy patients in a large claims database. Copyright © 2016 John Wiley & Sons, Ltd.
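The final overall algorithm reduces to a readable filter; a sketch assuming each patient record carries sets of diagnosis and procedure codes already mapped to the concepts named in the abstract (the code-set mapping itself, e.g. specific ICD-9 codes, is not given in the abstract and is left abstract here):

```python
EXCLUSIONS = {"urinary incontinence", "cystitis",
              "overactive bladder", "nocturia", "urinary frequency"}

def flags_urinary_retention(diagnoses, procedures):
    """Final overall UR algorithm: satisfy >= 1 component algorithm.

    Component 1: UR diagnosis together with a catheterization procedure.
    Component 2: UR diagnosis alone, with exclusionary diagnoses applied.
    diagnoses and procedures are sets of concept labels.
    """
    alg1 = "urinary retention" in diagnoses and "catheterization" in procedures
    alg2 = "urinary retention" in diagnoses and not (diagnoses & EXCLUSIONS)
    return alg1 or alg2
```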
Low-dose 4D cardiac imaging in small animals using dual source micro-CT
NASA Astrophysics Data System (ADS)
Holbrook, M.; Clark, D. P.; Badea, C. T.
2018-01-01
Micro-CT is widely used in preclinical studies, generating substantial interest in extending its capabilities in functional imaging applications such as blood perfusion and cardiac function. However, imaging cardiac structure and function in mice is challenging due to their small size and rapid heart rate. To overcome these challenges, we propose and compare improvements on two strategies for cardiac gating in dual-source, preclinical micro-CT: fast prospective gating (PG) and uncorrelated retrospective gating (RG). These sampling strategies combined with a sophisticated iterative image reconstruction algorithm provide faster acquisitions and high image quality in low-dose 4D (i.e. 3D + Time) cardiac micro-CT. Fast PG is performed under continuous subject rotation which results in interleaved projection angles between cardiac phases. Thus, fast PG provides a well-sampled temporal average image for use as a prior in iterative reconstruction. Uncorrelated RG incorporates random delays during sampling to prevent correlations between heart rate and sampling rate. We have performed both simulations and animal studies to validate these new sampling protocols. Sampling times for 1000 projections using fast PG and RG were 2 and 3 min, respectively, and the total dose was 170 mGy each. Reconstructions were performed using a 4D iterative reconstruction technique based on the split Bregman method. To examine undersampling robustness, subsets of 500 and 250 projections were also used for reconstruction. Both sampling strategies in conjunction with our iterative reconstruction method are capable of resolving cardiac phases and provide high image quality. In general, for equal numbers of projections, fast PG shows fewer errors than RG and is more robust to undersampling. Our results indicate that only 1000-projection based reconstruction with fast PG satisfies a 5% error criterion in left ventricular volume estimation. These methods promise low-dose imaging with a wide range of preclinical applications in cardiac imaging.
Development of advanced acreage estimation methods
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1982-01-01
The development of an accurate and efficient algorithm for analyzing the structure of MSS data, the application of the Akaike information criterion to mixture models, and a research plan to delineate some of the technical issues and associated tasks in the area of rice scene radiation characterization are discussed. The AMOEBA clustering algorithm is refined and documented.
Wilinska, Malgorzata E; Budiman, Erwin S; Taub, Marc B; Elleri, Daniela; Allen, Janet M; Acerini, Carlo L; Dunger, David B; Hovorka, Roman
2009-09-01
Hypoglycemia and hyperglycemia during closed-loop insulin delivery based on subcutaneous (SC) glucose sensing may arise due to (1) overdosing and underdosing of insulin by the control algorithm and (2) differences between plasma glucose (PG) and sensor glucose, which may be transient (kinetics origin and sensor artifacts) or persistent (calibration error [CE]). Using in silico testing, we assessed hypoglycemia and hyperglycemia incidence during overnight closed loop. Additionally, a comparison was made against incidence observed experimentally during open-loop single-night in-clinic studies in young people with type 1 diabetes mellitus (T1DM) treated by continuous SC insulin infusion. A simulation environment comprising 18 virtual subjects with T1DM was used to simulate an overnight closed-loop study with a model predictive control (MPC) algorithm. A 15 h experiment started at 17:00 and ended at 08:00 the next day. Closed loop commenced at 21:00 and continued for 11 h. At 18:00, the protocol included a meal (50 g carbohydrates) accompanied by prandial insulin. The MPC algorithm advised on insulin infusion every 15 min. Sensor glucose was obtained by combining model-calculated noise-free interstitial glucose with experimentally derived transient and persistent sensor artifacts associated with the FreeStyle Navigator (FSN). Transient artifacts were obtained from FSN sensor pairs worn by 58 subjects with T1DM over 194 nighttime periods. Persistent difference due to FSN CE was quantified from 585 FSN sensor insertions, yielding 1421 calibration sessions from 248 subjects with diabetes. Episodes of severe (PG ≤ 36 mg/dl) and significant (PG ≤ 45 mg/dl) hypoglycemia and significant hyperglycemia (PG ≥ 300 mg/dl) were extracted from 18,000 simulated closed-loop nights. Severe hypoglycemia was not observed when FSN CE was less than 45%. Hypoglycemia and hyperglycemia incidence during open loop was assessed from 21 overnight studies in 17 young subjects with T1DM (8 males; 13.5 ± 3.6 years of age; body mass index 21.0 ± 4.0 kg/m2; diabetes duration 6.4 ± 4.1 years; hemoglobin A1c 8.5% ± 1.8%; mean ± standard deviation) participating in the Artificial Pancreas Project at Cambridge. Severe and significant hypoglycemia during simulated closed loop occurred 0.75 and 17.11 times per 100 person years compared to 1739 and 3479 times per 100 person years during experimental open loop, respectively. Significant hyperglycemia during closed loop and open loop occurred 75 and 15,654 times per 100 person years, respectively. The incidence of severe and significant hypoglycemia was reduced 2300- and 200-fold, respectively, during simulated overnight closed loop with MPC compared to that observed during open-loop overnight clinical studies in young subjects with T1DM. Hyperglycemia was 200 times less likely. Overnight closed loop with the FSN and the MPC algorithm is expected to reduce substantially the risk of hypoglycemia and hyperglycemia. 2009 Diabetes Technology Society.
Transitioning from Software Requirements Models to Design Models
NASA Technical Reports Server (NTRS)
Lowry, Michael (Technical Monitor); Whittle, Jon
2003-01-01
Summary: 1. Proof-of-concept of state machine synthesis from scenarios - CTAS case study. 2. CTAS team wants to use the synthesis algorithm to validate trajectory generation. 3. Extending the synthesis algorithm towards requirements validation: (a) scenario relationships, (b) methodology for generalizing/refining scenarios, and (c) interaction patterns to control synthesis. 4. Initial ideas tested on conflict detection scenarios.
Evaluation of a Didactic Method for the Active Learning of Greedy Algorithms
ERIC Educational Resources Information Center
Esteban-Sánchez, Natalia; Pizarro, Celeste; Velázquez-Iturbide, J. Ángel
2014-01-01
An evaluation of the educational effectiveness of a didactic method for the active learning of greedy algorithms is presented. The didactic method sets students structured-inquiry challenges to be addressed with a specific experimental method, supported by the interactive system GreedEx. This didactic method has been refined over several years of…
2005-01-01
Interface Compatibility); the tool is written in OCaml [10], and the symbolic algorithms for interface compatibility and refinement are built on top... automata for a fire detection and reporting system... be encoded in the input language of the tool TIC. The refinement of sociable interfaces is discussed... are closely related to the I/O Automata Language (IOA) of [11]. Interface models are games between Input and Output, and in the models, it is es
Mesh Generation via Local Bisection Refinement of Triangulated Grids
2015-06-01
Defence Science and Technology Organisation, DSTO-TR-3095. ABSTRACT: This report provides a comprehensive implementation of an unstructured mesh generation method... their behaviour is critically linked to Maubach's method and the data structures N and T. The top-level mesh refinement algorithm is also presented.
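Maubach's bisection method, referenced above, repeatedly splits a simplex across a marked refinement edge; a minimal 2D newest-vertex-bisection sketch under our own data layout (a triangle is a tuple of vertex indices with the newest vertex last; this layout is an assumption, not the report's N and T structures, and mesh conformity across shared edges is ignored for brevity):

```python
def bisect(tri, points):
    """Newest-vertex bisection of triangle (a, b, v), v = newest vertex.

    The refinement edge is the one opposite the newest vertex, (a, b).
    Its midpoint m becomes the newest vertex of both children, which
    keeps repeated refinement from degrading triangle shape.
    """
    a, b, v = tri
    ax, ay = points[a]
    bx, by = points[b]
    m = len(points)
    points.append(((ax + bx) / 2.0, (ay + by) / 2.0))  # midpoint of (a, b)
    return (a, v, m), (v, b, m)  # two children, m newest in each

# Usage: points is a growable list of (x, y) coordinates.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
child1, child2 = bisect((0, 1, 2), points)
```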
Multiscale Monte Carlo equilibration: Two-color QCD with two fermion flavors
Detmold, William; Endres, Michael G.
2016-12-02
In this study, we demonstrate the applicability of a recently proposed multiscale thermalization algorithm to two-color quantum chromodynamics (QCD) with two mass-degenerate fermion flavors. The algorithm involves refining an ensemble of gauge configurations that had been generated using a renormalization group (RG) matched coarse action, thereby producing a fine ensemble that is close to the thermalized distribution of a target fine action; the refined ensemble is subsequently rethermalized using conventional algorithms. Although the generalization of this algorithm from pure Yang-Mills theory to QCD with dynamical fermions is straightforward, we find that in the latter case the method is susceptible to numerical instabilities during the initial stages of rethermalization when using the hybrid Monte Carlo algorithm. We find that these instabilities arise from large fermion forces in the evolution, which are attributed to an accumulation of spurious near-zero modes of the Dirac operator. We propose a simple strategy for curing this problem, and demonstrate that rapid thermalization--as probed by a variety of gluonic and fermionic operators--is possible with the use of this solution. Also, we study the sensitivity of rethermalization rates to the RG matching of the coarse and fine actions, and identify effective matching conditions based on a variety of measured scales.
Orthogonal polynomials for refinable linear functionals
NASA Astrophysics Data System (ADS)
Laurie, Dirk; de Villiers, Johan
2006-12-01
A refinable linear functional is one that can be expressed as a convex combination and defined by a finite number of mask coefficients of certain stretched and shifted replicas of itself. The notion generalizes an integral weighted by a refinable function. The key to calculating a Gaussian quadrature formula for such a functional is to find the three-term recursion coefficients for the polynomials orthogonal with respect to that functional. We show how to obtain the recursion coefficients by using only the mask coefficients, and without the aid of modified moments. Our result implies the existence of the corresponding refinable functional whenever the mask coefficients are nonnegative, even when the same mask does not define a refinable function. The algorithm requires O(n^2) rational operations and, thus, can in principle deliver exact results. Numerical evidence suggests that it is also effective in floating-point arithmetic.
Using Adaptive Mesh Refinement to Simulate Storm Surge
NASA Astrophysics Data System (ADS)
Mandli, K. T.; Dawson, C.
2012-12-01
Coastal hazards related to strong storms such as hurricanes and typhoons are one of the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately, these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow as well as particular regions of interest such as harbors. Simulations of many different applications have only been made possible by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
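As a rough illustration of how AMR refinement criteria work in this setting, the sketch below flags cells of a water-surface array wherever the surge front is steep. The gradient-threshold criterion and all parameter values are illustrative assumptions, not GeoClaw's actual refinement logic:

```python
import numpy as np

def flag_cells(eta, dx, threshold):
    """Flag cells for refinement where the sea-surface gradient is steep.
    Generic AMR-style criterion: refine where |grad eta| * dx exceeds a
    user-chosen threshold (real criteria also weigh wave height relative
    to sea level, proximity to shore, etc.)."""
    gy, gx = np.gradient(eta, dx)
    indicator = np.hypot(gx, gy) * dx
    return indicator > threshold

# Example: flag a synthetic 2 m surge front on a 100 km x 100 km domain
x = np.linspace(0.0, 100e3, 200)
eta = 2.0 / (1.0 + np.exp(-(x - 50e3) / 2e3))
eta2d = np.tile(eta, (200, 1))
flags = flag_cells(eta2d, dx=500.0, threshold=0.05)
print(flags.sum(), "cells flagged for refinement")
```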
A trace map comparison algorithm for the discrete fracture network models of rock masses
NASA Astrophysics Data System (ADS)
Han, Shuai; Wang, Gang; Li, Mingchao
2018-06-01
Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, since it determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation. However, graphical validation, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFNs is presented. Four main indicators, including total gray, gray grade curve, characteristic direction and gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity measure are developed from the Radon transform and cosine similarity, respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
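A hedged sketch of the underlying idea, using the standard Radon transform (from scikit-image, assumed available) and plain cosine similarity in place of the paper's modified Radon transform and loop cosine similarity; the angle-energy feature below is an illustrative assumption:

```python
import numpy as np
from skimage.transform import radon

def direction_profile(trace_map, n_angles=180):
    """Characteristic-direction feature: total Radon energy per angle,
    so dominant fracture orientations show up as peaks."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(trace_map.astype(float), theta=theta, circle=False)
    return sinogram.sum(axis=0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# similarity = cosine_similarity(direction_profile(real_map),
#                                direction_profile(simulated_map))
```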
NASA Technical Reports Server (NTRS)
Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.
1991-01-01
An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid method based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. This algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: the driven cavity, a backward-facing step, and a sudden expansion/contraction.
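For readers unfamiliar with the multigrid idea the abstract builds on, here is a generic V-cycle skeleton for a 1D Poisson problem. The report's solver instead smooths boxes of coupled velocity/pressure unknowns on a staggered grid and uses conservative interface transfers, so this is only the bare scheme:

```python
import numpy as np

def v_cycle(u, f, h, n_smooth=3):
    """One multigrid V-cycle for -u'' = f on a grid of 2^k + 1 points
    with zero Dirichlet boundaries: smooth, restrict the residual,
    recurse on the coarse grid, prolong the correction, smooth again."""
    def smooth(u, f, h, sweeps):
        for _ in range(sweeps):                    # Gauss-Seidel sweeps
            for i in range(1, len(u) - 1):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        return u

    u = smooth(u, f, h, n_smooth)
    if len(u) <= 3:
        return u
    r = np.zeros_like(u)                           # residual of -u'' = f
    r[1:-1] = f[1:-1] + (u[2:] - 2 * u[1:-1] + u[:-2]) / (h * h)
    rc = r[::2].copy()                             # restrict by injection
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, n_smooth)
    e = np.zeros_like(u)                           # prolong: linear interp.
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h, n_smooth)
```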
Osmium isotope and highly siderophile element systematics of the lunar crust
NASA Astrophysics Data System (ADS)
Day, James M. D.; Walker, Richard J.; James, Odette B.; Puchtel, Igor S.
2010-01-01
Coupled 187Os/188Os and highly siderophile element (HSE: Os, Ir, Ru, Pt, Pd, and Re) abundance data are reported for pristine lunar crustal rocks 60025, 62255, 65315 (ferroan anorthosites, FAN) and 76535, 78235, 77215 and a norite clast in 15455 (magnesian-suite rocks, MGS). Osmium isotopes permit more refined discrimination than previously possible of samples that have been contaminated by meteoritic additions, and the new results show that some rocks, previously identified as pristine, contain meteorite-derived HSE. Low HSE abundances in FAN and MGS rocks are consistent with derivation from a strongly HSE-depleted lunar mantle. At the time of formation, the lunar flotation crust, represented by FAN, had 1.4 ± 0.3 pg g^-1 Os, 1.5 ± 0.6 pg g^-1 Ir, 6.8 ± 2.7 pg g^-1 Ru, 16 ± 15 pg g^-1 Pt, 33 ± 30 pg g^-1 Pd and 0.29 ± 0.10 pg g^-1 Re (~0.00002 × CI) and Re/Os ratios that were modestly elevated (187Re/188Os = 0.6 to 1.7) relative to CI chondrites. MGS samples are, on average, characterised by more elevated HSE abundances (~0.00007 × CI) compared with FAN. This either reflects contrasting mantle-source HSE characteristics of FAN and MGS rocks, or different mantle-crust HSE fractionation behaviour during production of these lithologies. Previous studies of lunar impact-melt rocks have identified possible elevated Ru and Pd in lunar crustal target rocks. The new results provide no supporting evidence for such enrichments. If maximum estimates for HSE in the lunar mantle are compared with FAN and MGS averages, crust-mantle concentration ratios (D-values) must be ≤ 0.3. Such D-values are broadly similar to those estimated for partitioning between the terrestrial crust and upper mantle, with the notable exception of Re. Given the presumably completely different mode of origin for the primary lunar flotation crust and tertiary terrestrial continental crust, the potential similarities in crust-mantle HSE partitioning for the Earth and Moon are somewhat surprising. Low HSE abundances in the lunar crust, coupled with estimates of HSE concentrations in the lunar mantle, imply there may be a 'missing component' of late-accreted materials (as much as 95%) to the Moon if the Earth/Moon mass-flux estimates are correct and terrestrial mantle HSE abundances were established by late accretion.
Automated main-chain model building by template matching and iterative fragment extension.
Terwilliger, Thomas C
2003-01-01
An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and beta-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and beta-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more Cα positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.
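The FFT-based template matching step can be sketched with standard tools: cross-correlate a mean-subtracted template against the density map and read off the score peaks. This is a generic matched-filter illustration, not RESOLVE's actual scoring function:

```python
import numpy as np
from scipy.signal import fftconvolve

def fft_match(density_map, template):
    """Cross-correlate a 3D template (e.g., an ideal helix density) against
    an electron-density map via FFT; peaks in the score volume mark
    candidate helix/strand locations."""
    t = template - template.mean()
    # correlation(x, t) equals convolution of x with the reversed template
    return fftconvolve(density_map, t[::-1, ::-1, ::-1], mode='same')

# Usage sketch:
# score = fft_match(rho, helix_template)
# best = np.unravel_index(np.argmax(score), score.shape)  # top candidate
```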
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
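Space-filling-curve partitioning of the kind described can be illustrated with a Morton (Z-order) key: cells are sorted by an interleaved-bit key, and the sorted list is cut into equal contiguous chunks, one per processor. Morton ordering is one common choice, used here for illustration (the solver's actual curve may differ):

```python
def morton_key(i, j, k, bits=10):
    """Interleave the bits of cell indices (i, j, k) to obtain a position
    along a Morton (Z-order) space-filling curve; nearby keys tend to be
    spatially nearby, which keeps partitions compact."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (3 * b)
        key |= ((j >> b) & 1) << (3 * b + 1)
        key |= ((k >> b) & 1) << (3 * b + 2)
    return key

# Usage sketch (cell.ijk is a hypothetical index attribute):
# cells.sort(key=lambda c: morton_key(*c.ijk))
# chunks = [cells[p::n_procs] for p in range(n_procs)]  # or contiguous cuts
```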
Parallel deterministic neutronics with AMR in 3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clouse, C.; Ferguson, J.; Hendrickson, C.
1997-12-31
AMTRAN, a three dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.
Refinement of the CALIOP cloud mask algorithm
NASA Astrophysics Data System (ADS)
Katagiri, Shuichiro; Sato, Kaori; Ohta, Kohei; Okamoto, Hajime
2018-04-01
A modified cloud mask algorithm was applied to the CALIOP data to improve the detection of clouds in the lower atmosphere. In this algorithm, we also adopt full-attenuation discrimination and residual-noise estimation using the data obtained at an altitude of 40 km to avoid contamination by stratospheric aerosols. The new cloud mask shows an increase in the lower cloud fraction. A comparison of the results with data observed by a PML ground observation was also made.
Low-thrust orbit transfer optimization with refined Q-law and multi-objective genetic algorithm
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Petropoulos, Anastassios E.; von Allmen, Paul
2005-01-01
An optimization method for low-thrust orbit transfers around a central body is developed using the Q-law and a multi-objective genetic algorithm. In the hybrid method, the Q-law generates candidate orbit transfers, and the multi-objective genetic algorithm optimizes the Q-law control parameters in order to simultaneously minimize both the consumed propellant mass and flight time of the orbit transfer. This paper addresses the problem of finding optimal orbit transfers for low-thrust spacecraft.
Solution adaptive grids applied to low Reynolds number flow
NASA Astrophysics Data System (ADS)
de With, G.; Holdø, A. E.; Huld, T. A.
2003-08-01
A numerical study has been undertaken to investigate the use of a solution adaptive grid for flow around a cylinder in the laminar flow regime. The main purpose of this work is twofold. The first aim is to investigate the suitability of a grid adaptation algorithm and the reduction in mesh size that can be obtained. Secondly, the uniform asymmetric flow structures are ideal to validate the mesh structures due to mesh refinement and consequently the selected refinement criteria. The refinement variable used in this work is a product of the rate of strain and the mesh cell size, and contains two variables Cm and Cstr which determine the order of each term. By altering the order of either one of these terms the refinement behaviour can be modified.
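The refinement variable described translates directly into code; the normalization and the default exponents below are assumptions for illustration:

```python
import numpy as np

def refinement_variable(strain_rate, cell_size, c_str=1.0, c_m=1.0):
    """Refinement indicator from the paper's description: a product of the
    rate of strain and the mesh cell size, with exponents C_str and C_m
    controlling the order (weight) of each term."""
    return strain_rate**c_str * cell_size**c_m

# Usage sketch: refine cells where the indicator exceeds a threshold
# refine_mask = refinement_variable(S, h, c_str=1.0, c_m=1.0) > threshold
```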
NASA Technical Reports Server (NTRS)
Tsiveriotis, K.; Brown, R. A.
1993-01-01
A new method is presented for the solution of free-boundary problems using Lagrangian finite element approximations defined on locally refined grids. The formulation allows for direct transition from coarse to fine grids without introducing non-conforming basis functions. The calculation of elemental stiffness matrices and residual vectors are unaffected by changes in the refinement level, which are accounted for in the loading of elemental data to the global stiffness matrix and residual vector. This technique for local mesh refinement is combined with recently developed mapping methods and Newton's method to form an efficient algorithm for the solution of free-boundary problems, as demonstrated here by sample calculations of cellular interfacial microstructure during directional solidification of a binary alloy.
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.
1987-01-01
The major accomplishments of this research are: (1) the refinement and documentation of a multi-input, multi-output modal parameter estimation algorithm which is applicable to general linear, time-invariant dynamic systems; (2) the development and testing of an unsymmetric block-Lanczos algorithm for reduced-order modeling of linear systems with arbitrary damping; and (3) the development of a control-structure-interaction (CSI) test facility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colella, P.
This review describes a structured approach to adaptivity. The adaptive mesh refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
NASA Astrophysics Data System (ADS)
Vijayakumar, Ganesh; Sprague, Michael
2017-11-01
Demonstrating expected convergence rates with spatial- and temporal-grid refinement is the "gold standard" of code and algorithm verification. However, the lack of analytical solutions and the difficulty of generating manufactured solutions present challenges for verifying codes for complex systems. The application of the method of manufactured solutions (MMS) for verification of coupled multi-physics phenomena like fluid-structure interaction (FSI) has only seen recent investigation. While many FSI algorithms for aeroelastic phenomena have focused on boundary-resolved CFD simulations, the actuator-line representation of the structure is widely used for FSI simulations in wind-energy research. In this work, we demonstrate the verification of an FSI algorithm using MMS for actuator-line CFD simulations with a simplified spring-mass-damper (SMD) structural model. We use a manufactured solution for the fluid velocity field and the displacement of the SMD system. We demonstrate the convergence of both the fluid and structural solver to second-order accuracy with grid and time-step refinement. This work was funded by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Wind Energy Technologies Office, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing the accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data. Many different classification algorithms are available for predictive use, and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that are currently collected, or that could be collected, on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
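The partition/train/compare workflow described maps onto a few lines with standard tooling. A sketch with two illustrative classifiers, where a held-out partition stands in for the naive data; the feature matrix X, binary target y, and all parameter choices are placeholders:

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def compare_classifiers(X, y):
    """Partition the data, train several algorithms, and compare accuracy
    on held-out (naive) records, mirroring the workflow in the abstract."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    models = {
        "logistic": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200,
                                                random_state=0),
    }
    return {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
            for name, m in models.items()}
```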
Optimal guidance with obstacle avoidance for nap-of-the-earth flight
NASA Technical Reports Server (NTRS)
Pekelsma, Nicholas J.
1988-01-01
The development of automatic guidance is discussed for helicopter Nap-of-the-Earth (NOE) and near-NOE flight. It deals with algorithm refinements relating to automated real-time flight path planning and to mission planning. With regard to path planning, it relates rotorcraft trajectory characteristics to the NOE computation scheme and addresses real-time computing issues and both ride quality issues and pilot-vehicle interfaces. The automated mission planning algorithm refinements include route optimization, automatic waypoint generation, interactive applications, and provisions for integrating the results into the real-time path planning software. A microcomputer based mission planning workstation was developed and is described. Further, the application of Defense Mapping Agency (DMA) digital terrain to both the mission planning workstation and to automatic guidance is both discussed and illustrated.
Marques, Anelise Machado; Tuler, Amélia Carlos; Carvalho, Carlos Roberto; Carrijo, Tatiana Tavares; Ferreira, Marcia Flores da Silva; Clarindo, Wellington Ronildo
2016-01-01
Euploidy plays an important role in the evolution and diversification of Psidium Linnaeus, 1753. However, few data about the nuclear DNA content, chromosome characterization (morphometry and class) and molecular markers have been reported for this genus. In this context, the present study aims to shed light on the genome of Psidium guineense Swartz, 1788, comparing it with Psidium guajava Linnaeus, 1753. Using flow cytometry, the nuclear 2C value of Psidium guineense was determined to be 2C = 1.85 picograms (pg), and the karyotype showed 2n = 4x = 44 chromosomes. Thus, Psidium guineense has four chromosome sets, in accordance with the basic chromosome number of Psidium (x = 11). In addition, karyomorphometric analysis revealed morphologically identical chromosome groups in the karyotype of Psidium guineense. The high transferability of microsatellites (98.6%) further corroborates the phylogenetic relationship between Psidium guajava and Psidium guineense. Based on the data regarding nuclear genome size, karyotype morphometry and molecular markers of Psidium guineense and Psidium guajava (2C = 0.95 pg, 2n = 2x = 22 chromosomes), Psidium guineense is a tetraploid species. These data reveal the role of euploidy in the diversification of the genus Psidium. PMID:27186342
NASA Astrophysics Data System (ADS)
Gao, Simon S.; Liu, Li; Bailey, Steven T.; Flaxel, Christina J.; Huang, David; Li, Dengwang; Jia, Yali
2016-07-01
Quantification of choroidal neovascularization (CNV) as visualized by optical coherence tomography angiography (OCTA) may have importance clinically when diagnosing or tracking disease. Here, we present an automated algorithm to quantify the vessel skeleton of CNV as vessel length. Initial segmentation of the CNV on en face angiograms was achieved using saliency-based detection and thresholding. A level set method was then used to refine vessel edges. Finally, a skeleton algorithm was applied to identify vessel centerlines. The algorithm was tested on nine OCTA scans from participants with CNV and comparisons of the algorithm's output to manual delineation showed good agreement.
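A rough approximation of this pipeline with off-the-shelf pieces, substituting Otsu thresholding for the saliency-based detection and level-set refinement described; the pixel size is a placeholder value:

```python
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def cnv_vessel_length(angiogram, pixel_size_mm=0.01):
    """Threshold the en face angiogram, skeletonize the vessel mask, and
    report total centerline length as a proxy for CNV vessel length."""
    mask = angiogram > threshold_otsu(angiogram)
    skeleton = skeletonize(mask)
    return skeleton.sum() * pixel_size_mm  # centerline pixels -> mm
```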
Rao, Kollu Nageswara; Ritch, Robert; Dorairaj, Syril K; Kaur, Inderjeet; Liebmann, Jeffrey M; Thomas, Ravi; Chakrabarti, Subhabrata
2008-07-09
Single nucleotide polymorphisms (SNPs) in the LOXL1 gene have been implicated in exfoliation syndrome (XFS) and exfoliation glaucoma (XFG). We have shown that these SNPs are not associated with the primary glaucomas such as primary open-angle glaucoma (POAG) and primary angle-closure glaucoma (PACG). To further establish the specificity of LOXL1 SNPs for XFS and XFG, we determined whether these SNPs were involved in pigment dispersion syndrome (PDS) and pigmentary glaucoma (PG). Three SNPs of LOXL1 (rs1048661, rs3825942, and rs2165241) were screened in a cohort of 78 unrelated and clinically well-characterized glaucoma cases comprising PG (n=44) and PDS (n=34) patients as well as 108 ethnically matched normal controls of Caucasian origin. The criteria for diagnosis of PDS/PG were Krukenberg spindle, hyperpigmentation of the trabecular meshwork, and wide open angle. Transillumination defects were detected by infrared pupillography, and the presence of a Zentmayer ring was considered as a confirmatory sign. All three SNPs were genotyped in cases and controls by resequencing the genomic region of LOXL1 harboring these variants and were further confirmed by polymerase chain reaction (PCR)-based restriction digestions. Haplotypes were generated from the genotype data, and the linkage disequilibrium (LD) and haplotype analysis were done with Haploview software that uses the expectation maximization (EM) algorithm. The LOXL1 SNPs showed no significant association with PDS or PG. There was no significant difference in the frequencies of the risk alleles of rs1048661 ('G' allele; p=0.309), rs3825942 ('G' allele; p=0.461), and rs2165241 ('T' allele; p=0.432) between PG/PDS cases and controls. Similarly, there was no involvement of the XFS/XFG-associated haplotypes, 'G-G' (p=0.643; [OR = 1.08, 95% CI 0.59-1.97]) and 'T-G' (p=0.266; [OR = 1.35, 95% CI 0.70-2.60]), with the PDS/PG phenotypes. The risk haplotype 'G-G' was observed in ~55% of the normal controls. There was no involvement of the LOXL1 SNPs in patients with PDS and PG. The results further indicate that the associations of these SNPs are specific to XFS/XFG.
NASA Astrophysics Data System (ADS)
Kuang, Yubin; Stork, David G.; Kahl, Fredrik
2011-03-01
Underdrawings and pentimenti, typically revealed through x-ray imaging and infrared reflectography, comprise important evidence about the intermediate states of an artwork and thus the working methods of its creator [1]. To this end, Shahram, Stork and Donoho introduced the De-pict algorithm, which recovers layers of brush strokes in paintings with open brush work where several layers are partially visible, such as in van Gogh's Self portrait with a grey felt hat [2]. While that preliminary work served as a proof of concept that computer image analytic methods could recover some occluded brush strokes, the work needed further refinement before it could be a tool for art scholars. Our current work takes several steps to improve that algorithm. Specifically, we refine the inpainting step through the inclusion of curvature-based constraints, in which a mathematical curvature penalty biases the reconstruction toward matching the artist's smooth hand motion. We refine and test our methods using "ground truth" image data: passages of four layers of brush strokes in which the intermediate layers were recorded photographically. At each successive top layer (currently identified by the user), we used k-means clustering combined with graph cuts to obtain chromatically and spatially coherent segmentation of brush strokes. We then reconstructed strokes at the deeper layer with our new curvature-based inpainting algorithm based on chromatic level lines. Our methods are clearly superior to previous versions of the De-pict algorithm on van Gogh's works, giving smoother, more natural strokes that more closely match the shapes of unoccluded strokes. Our improved method might be applied to the classic drip paintings of Jackson Pollock, where the drip work is more open and the physics of splashing paint ensures that the curvature is more uniform than in the brush strokes of van Gogh.
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomov, I; Pember, R; Greenough, J
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single-grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single-grid algorithm uses a second-order Godunov scheme with an approximate single-fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified by the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion will be solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broderick, Robert Joseph; Quiroz, Jimmy Edward; Reno, Matthew J.
2015-11-01
The third solicitation of the California Solar Initiative (CSI) Research, Development, Demonstration and Deployment (RD&D) Program established by the California Public Utility Commission (CPUC) is supporting the Electric Power Research Institute (EPRI), National Renewable Energy Laboratory (NREL), and Sandia National Laboratories (SNL), with collaboration from Pacific Gas and Electric (PG&E), Southern California Edison (SCE), and San Diego Gas and Electric (SDG&E), in research to improve the Utility Application Review and Approval process for interconnecting distributed energy resources to the distribution system. Currently this process is the most time-consuming of any step on the path to generating power on the distribution system. This third-solicitation CSI RD&D project has completed the tasks of collecting data from the three utilities, clustering feeder characteristic data to attain representative feeders, detailed modeling of 16 representative feeders, analysis of PV impacts to those feeders, refinement of current screening processes, and validation of those suggested refinements. In this report each task is summarized to produce a final summary of all components of the overall project.
Refinements to HIRS CO2 Slicing Algorithm with Results Compared to CALIOP and MODIS
NASA Astrophysics Data System (ADS)
Frey, R.; Menzel, P.
2012-12-01
This poster reports on the refinement of a cloud top property algorithm using High-resolution Infrared Radiation Sounder (HIRS) measurements. The HIRS sensor has been flown on fifteen satellites from TIROS-N through NOAA-19 and MetOp-A, forming a continuous 30-year cloud data record. Cloud top pressure and effective emissivity (cloud fraction multiplied by cloud emissivity) are derived using the 15 μm spectral bands in the CO2 absorption band, implementing the CO2 slicing technique, which is strong for high semi-transparent clouds but weak for low clouds with little thermal contrast from clear skies. We report on algorithm adjustments suggested by MODIS cloud record validations and the inclusion of collocated AVHRR cloud fraction data from the PATMOS-x algorithm. Reprocessing results for 2008 are shown using NOAA-18 HIRS and collocated CALIOP data for validation, as well as comparisons to MODIS monthly mean values. Adjustments to the cloud algorithm include (a) using CO2 slicing for all ice and mixed-phase clouds and infrared window determinations for all water clouds, (b) determining the cloud top pressure from the most opaque CO2 spectral band pair seeing the cloud, (c) reducing the cloud detection threshold for the CO2 slicing algorithm to include conditions of smaller radiance differences that are often due to thin ice clouds, and (d) identifying stratospheric clouds when an opaque band is warmer than a less opaque band.
An efficient algorithm for global periodic orbits generation near irregular-shaped asteroids
NASA Astrophysics Data System (ADS)
Shang, Haibin; Wu, Xiaoyu; Ren, Yuan; Shan, Jinjun
2017-07-01
Periodic orbits (POs) play an important role in understanding dynamical behaviors around natural celestial bodies. In this study, an efficient algorithm is presented to generate the global POs around irregular-shaped, uniformly rotating asteroids. The algorithm proceeds in three steps, namely global search, local refinement, and model continuation. First, a mascon model with a low number of particles and optimized mass distribution is constructed to remodel the exterior gravitational potential of the asteroid. Using this model, a multi-start differential evolution enhanced with a deflection strategy, which has strong global exploration and bypassing abilities, is adopted. This algorithm can be regarded as a search engine that finds multiple globally optimal regions in which potential POs are located. This is followed by applying a differential correction to locally refine the global search solutions and generate accurate POs in the mascon model, for which an analytical Jacobian matrix is derived to improve convergence. Finally, the concept of numerical model continuation is introduced and used to convert the POs from the mascon model into a high-fidelity polyhedron model by sequentially correcting the initial states. The efficiency of the proposed algorithm is substantiated by computing the global POs around the elongated, shoe-shaped asteroid 433 Eros. Various global POs with different topological structures in the configuration space were successfully located. Moreover, the proposed algorithm is generic and can be conveniently extended to explore periodic motions in other gravitational systems.
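The global-search stage can be sketched with a stock differential-evolution optimizer minimizing a periodicity residual, with the deflection strategy approximated here by simply discarding repeat minima. The residual function, bounds, and tolerances are placeholders:

```python
import numpy as np
from scipy.optimize import differential_evolution

def global_po_search(residual, bounds, n_starts=8):
    """Multi-start global search: run differential evolution from several
    seeds and keep distinct minima of a periodicity residual such as
    ||x(T) - x(0)|| over (initial state, period). The paper's deflection
    strategy actively reshapes the objective; here we only de-duplicate."""
    solutions = []
    for seed in range(n_starts):
        res = differential_evolution(residual, bounds, seed=seed, tol=1e-8)
        if all(np.linalg.norm(res.x - s.x) > 1e-3 for s in solutions):
            solutions.append(res)
    return solutions

# Usage sketch (propagate is a hypothetical trajectory propagator):
# sols = global_po_search(lambda p: period_residual(propagate, p), bounds)
```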
A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.
Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J
2009-11-28
In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
2011-01-01
Introduction: The purpose of this study was to correlate the level of anabolic and catabolic biomarkers in synovial fluid (SF) from patients with rheumatoid arthritis (RA), patients with osteoarthritis (OA) and asymptomatic organ donors.
Methods: SF was collected from the knees of 45 OA patients, 22 RA patients and 20 asymptomatic organ donors. Eight biomarkers were selected and analyzed by using an enzyme-linked immunosorbent assay: interleukin (IL)-1, IL-6, IL-8 and IL-11; leukemia-inhibitory factor (LIF); cartilage oligomeric protein (COMP); osteocalcin; and osteogenic protein 1 (OP-1). Data are expressed as medians (interquartile ranges). The effects of sex and disease activity were assessed on the basis of the Western Ontario and McMaster Universities index score for patients with OA and on the basis of white blood cell count, erythrocyte sedimentation rate and C-reactive protein level for patients with RA.
Results: The mean ages (± SD) of the patients were as follows: 53 ± 9 years for patients with OA, 54 ± 11 years for patients with RA and 52 ± 7 years for asymptomatic organ donors. No effect of participants' sex was identified. In the SF of patients with RA, four of five cytokines were higher than those in the SF of patients with OA and those of asymptomatic organ donors. The most significant differences were found for IL-6 and IL-8, where IL-6 concentration in SF of patients with RA was almost threefold higher than that in patients with OA and fourfold higher than that in asymptomatic donor controls: 354.7 pg/ml (1,851.6) vs. 119.4 pg/ml (193.2) vs. 86.97 pg/ml (82.0) (P < 0.05 and P < 0.05, respectively). IL-8 concentrations were higher in SF of patients with RA than that in patients with OA as well as that in asymptomatic donor controls: 583.6 pg/ml (1,086.4) vs. 429 pg/ml (87.3) vs. 451 pg/ml (170.1) (P < 0.05 and P < 0.05, respectively). No differences were found for IL-11 in the SF of patients with RA and that of patients with OA, while a 1.4-fold difference was detected in the SF of patients with OA and that of asymptomatic donor controls: 296.2 pg/ml (257.2) vs. 211.6 pg/ml (40.8) (P < 0.05). IL-1 concentrations were the highest in the SF of RA patients (9.26 pg/ml (11.1)); in the SF of asymptomatic donors, it was significantly higher than that in patients with OA (9.083 pg/ml (1.6) vs. 7.76 pg/ml (2.6); P < 0.05). Conversely, asymptomatic donor control samples had the highest LIF concentrations: 228.5 pg/ml (131.6) vs. 128.4 pg/ml (222.7) in the SF of patients with RA vs. 107.5 pg/ml (136.9) in the SF of patients with OA (P < 0.05). OP-1 concentrations were twofold higher in the SF of patients with RA than those in patients with OA and threefold higher than those in asymptomatic donor control samples (167.1 ng/ml (194.8) vs. 81.79 ng/ml (116.0) vs. 54.49 ng/ml (29.3), respectively; P < 0.05). The differences in COMP and osteocalcin were indistinguishable between the groups, as were the differences between active and inactive OA and RA.
Conclusions: Activation of selected biomarkers corresponds to the mechanisms that drive each disease. IL-11, LIF and OP-1 may be viewed as a cluster of biomarkers significant for OA, while profiling of IL-1, IL-6, IL-8, LIF and OP-1 may be more significant in RA. Larger, better-defined patient cohorts are necessary to develop a biomarker algorithm for prognostic use. PMID:21435227
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal-oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh
NASA Astrophysics Data System (ADS)
Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru
2017-11-01
We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), where a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution-adaptive mesh refinement, which works efficiently with the help of the dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement where a non-dimensional wall distance y+ near the sphere is used as a criterion for mesh refinement. The result showed that the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested cube-wise algorithm switching, where explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
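The cube-wise algorithm switching can be sketched as below; the per-cube methods and the CFL threshold are hypothetical stand-ins for the framework's actual interfaces:

```python
def advance_cube(cube, dt, cfl_limit=1.0):
    """Cube-wise algorithm switching: take a cheap explicit step where the
    local CFL number allows it, and an implicit step otherwise. The
    cube.max_wave_speed / cube.explicit_step / cube.implicit_step methods
    are hypothetical placeholders for the solver's real interfaces."""
    cfl = cube.max_wave_speed() * dt / cube.dx
    if cfl <= cfl_limit:
        cube.explicit_step(dt)
    else:
        cube.implicit_step(dt)

# Usage sketch: the same global dt, a different integrator per cube
# for cube in mesh.cubes:
#     advance_cube(cube, dt)
```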
Interactive visual exploration and refinement of cluster assignments.
Kern, Michael; Lex, Alexander; Gehlenborg, Nils; Johnson, Chris R
2017-09-12
With ever-increasing amounts of data produced in biology research, scientists are in need of efficient data analysis methods. Cluster analysis, combined with visualization of the results, is one such method that can be used to make sense of large data volumes. At the same time, cluster analysis is known to be imperfect and depends on the choice of algorithms, parameters, and distance measures. Most clustering algorithms don't properly account for ambiguity in the source data, as records are often assigned to discrete clusters, even if an assignment is unclear. While there are metrics and visualization techniques that allow analysts to compare clusterings or to judge cluster quality, there is no comprehensive method that allows analysts to evaluate, compare, and refine cluster assignments based on the source data, derived scores, and contextual data. In this paper, we introduce a method that explicitly visualizes the quality of cluster assignments, allows comparisons of clustering results and enables analysts to manually curate and refine cluster assignments. Our methods are applicable to matrix data clustered with partitional, hierarchical, and fuzzy clustering algorithms. Furthermore, we enable analysts to explore clustering results in context of other data, for example, to observe whether a clustering of genomic data results in a meaningful differentiation in phenotypes. Our methods are integrated into Caleydo StratomeX, a popular, web-based, disease subtype analysis tool. We show in a usage scenario that our approach can reveal ambiguities in cluster assignments and produce improved clusterings that better differentiate genotypes and phenotypes.
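One simple, standard proxy for the assignment quality such a tool visualizes is the per-record silhouette score: records scoring near or below zero are the ambiguous ones worth manual curation. A sketch of that idea only (StratomeX's own quality measures and interaction model are richer):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

def ambiguous_records(X, n_clusters=4, cutoff=0.1):
    """Cluster the data, score each record's assignment with its silhouette
    value, and return the indices of records whose assignment is weak
    enough to deserve manual review."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(X)
    scores = silhouette_samples(X, labels)
    return np.flatnonzero(scores < cutoff), labels, scores
```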
Efficient parallelization for AMR MHD multiphysics calculations; implementation in AstroBEAR
NASA Astrophysics Data System (ADS)
Carroll-Nellenback, Jonathan J.; Shroyer, Brandon; Frank, Adam; Ding, Chen
2013-03-01
Current adaptive mesh refinement (AMR) simulations require algorithms that are highly parallelized and manage memory efficiently. As compute engines grow larger, AMR simulations will require algorithms that achieve new levels of efficient parallelization and memory management. We have attempted to employ new techniques to achieve both of these goals. Patch or grid based AMR often employs ghost cells to decouple the hyperbolic advances of each grid on a given refinement level. This decoupling allows each grid to be advanced independently. In AstroBEAR we utilize this independence by threading the grid advances on each level with preference going to the finer level grids. This allows for global load balancing instead of level by level load balancing and allows for greater parallelization across both physical space and AMR level. Threading of level advances can also improve performance by interleaving communication with computation, especially in deep simulations with many levels of refinement. While we see improvements of up to 30% on deep simulations run on a few cores, the speedup is typically more modest (5-20%) for larger scale simulations. To improve memory management we have employed a distributed tree algorithm that requires processors to only store and communicate local sections of the AMR tree structure with neighboring processors. Using this distributed approach we are able to get reasonable scaling efficiency (>80%) out to 12288 cores and up to 8 levels of AMR - independent of the use of threading.
A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids
NASA Technical Reports Server (NTRS)
deFainchtein, Rosalinda
1996-01-01
This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR on unstructured mesh codes. It is meant to illustrate the minimum requirements for AMR on more than one dimension. For that purpose, it uses the same type of data structure that would be necessary on a two-dimensional AMR code (loosely following the algorithm described by Lohner).
A methodology for quadrilateral finite element mesh coarsening
Staten, Matthew L.; Benzley, Steven; Scott, Michael
2008-03-27
High fidelity finite element modeling of continuum mechanics problems often requires using all quadrilateral or all hexahedral meshes. The efficiency of such models is often dependent upon the ability to adapt a mesh to the physics of the phenomena. Adapting a mesh requires the ability to both refine and/or coarsen the mesh. The algorithms available to refine and coarsen triangular and tetrahedral meshes are very robust and efficient. However, the ability to locally and conformally refine or coarsen all quadrilateral and all hexahedral meshes presents many difficulties. Some research has been done on localized conformal refinement of quadrilateral and hexahedral meshes. However, little work has been done on localized conformal coarsening of quadrilateral and hexahedral meshes. A general method which provides both localized conformal coarsening and refinement for quadrilateral meshes is presented in this paper. This method is based on restructuring the mesh with simplex manipulations to the dual of the mesh. Finally, this method appears to be extensible to hexahedral meshes in three dimensions.
Etcheson, Jennifer I; Gwam, Chukwuweike U; George, Nicole E; Virani, Sana; Mont, Michael A; Delanois, Ronald E
2018-04-01
Patient perception of care, commonly measured with Press Ganey (PG) surveys, is an important metric used to determine hospital and provider reimbursement. However, post-operative pain following total hip arthroplasty (THA) may negatively affect patient satisfaction. As a result, over-administration of opioids may occur, even without marked evidence of pain. Therefore, this study evaluated whether opioid consumption in the immediate postoperative period bears any influence on satisfaction scores after THA. Specifically, this study assessed the correlation between post-operative opioid consumption and 7 PG domains: (1) Overall hospital rating; (2) Communication with nurses; (3) Responsiveness of hospital staff; (4) Communication with doctors; (5) Hospital environment; (6) Pain management; and (7) Communication about medicines. Our institutional PG database was reviewed for patients who received THA from 2011 to 2014. A total of 322 patients (mean age = 65 years; 61% female) were analyzed. Patients' opioid consumption was measured using a morphine milli-equivalent conversion algorithm. Bivariate correlation analysis assessed the association between opioid consumption and Press Ganey survey elements. Pearson's r assessed the strength of the association. No correlation was found between total opioid consumption and Overall hospital rating (r = 0.004; P = .710), Communication with nurses (r = 0.093; P = .425), Responsiveness of hospital staff (r = 0.104; P = .381), Communication with doctors (r = 0.009; P = .940), Hospital environment (r = 0.081; P = .485), and Pain management (r = 0.075; P = .536). However, there was a positive correlation between total opioid consumption and Communication about medicines (r = 0.262; P = .043). Our report demonstrates that PG patient satisfaction scores are not influenced by post-operative opioid use, with the exception of the PG domain Communication about medicines. These results suggest that opioid medications should be administered based solely on patient requirements without concern about patient satisfaction survey results.
1978-12-01
Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a_0 + a_1 t + a_2 t^2). The thinning programs are competitive in both execution...
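The basic thinning algorithm referred to (Lewis-Shedler) is short enough to state directly; the intensity parameters below simply illustrate the log-quadratic form mentioned:

```python
import numpy as np

def thin_poisson(intensity, lam_max, t_end, rng=None):
    """Basic thinning: simulate a homogeneous Poisson process at the
    majorizing rate lam_max, then accept each candidate time t with
    probability intensity(t) / lam_max."""
    if rng is None:
        rng = np.random.default_rng()
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_end:
            return np.array(events)
        if rng.random() < intensity(t) / lam_max:
            events.append(t)

# Example: log-quadratic intensity exp(a0 + a1*t + a2*t^2); with a2 < 0 the
# maximum rate sits at t = -a1 / (2*a2), which gives a valid majorizer.
a0, a1, a2 = 0.5, 0.1, -0.01
rate = lambda t: np.exp(a0 + a1 * t + a2 * t * t)
events = thin_poisson(rate, lam_max=rate(5.0), t_end=10.0)
```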
NASA Astrophysics Data System (ADS)
Wei, Hai-Rui; Liu, Ji-Zhen
2017-02-01
It is very important to seek an efficient and robust quantum algorithm demanding fewer quantum resources. We propose one-photon three-qubit original and refined Deutsch-Jozsa algorithms with polarization and two linear-momentum degrees of freedom (DOFs). Our schemes are constructed solely using linear optics. Compared to the traditional ones with one DOF, our schemes are more economical and robust because the necessary photons are reduced from three to one. Our linear-optic schemes work in a determinate way, and they are feasible with current experimental technology.
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
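The parallel-channel structure described can be sketched as a bank of scalar Kalman filters, each fixed at one parameter hypothesis and weighted by its measurement likelihood. This is a generic multiple-model illustration, not the flight algorithm itself:

```python
import numpy as np

def mmae_step(filters, z):
    """One measurement update for a bank of Kalman filters fixed at
    different parameter hypotheses. Each filter f is a dict with state x,
    covariance P, model parameters (a, h, q, r), and probability w."""
    for f in filters:
        a, h, q, r = f["a"], f["h"], f["q"], f["r"]
        x, P = a * f["x"], a * f["P"] * a + q            # time update
        s = h * P * h + r                                # innovation variance
        nu = z - h * x                                   # innovation
        k = P * h / s                                    # Kalman gain
        f["x"], f["P"] = x + k * nu, (1.0 - k * h) * P   # measurement update
        f["w"] *= np.exp(-0.5 * nu * nu / s) / np.sqrt(2.0 * np.pi * s)
    total = sum(f["w"] for f in filters)
    for f in filters:
        f["w"] /= total                                  # posterior model probs
    return max(filters, key=lambda f: f["w"])            # most likely channel

# Usage sketch: three channels at fixed parameter hypotheses a = 0.8/0.9/1.0
# filters = [dict(x=0.0, P=1.0, a=a, h=1.0, q=0.01, r=0.1, w=1.0 / 3)
#            for a in (0.80, 0.90, 1.00)]
```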
Advances in Patch-Based Adaptive Mesh Refinement Scalability
Gunney, Brian T.N.; Anderson, Robert W.
2015-12-18
Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
QuickProbs 2: Towards rapid construction of high-quality alignments of large protein families
Gudyś, Adam; Deorowicz, Sebastian
2017-01-01
The ever-increasing size of sequence databases, caused by the development of high-throughput sequencing, poses one of the greatest challenges yet to multiple alignment algorithms. As we show, well-established techniques employed for increasing alignment quality, i.e., refinement and consistency, are ineffective when large protein families are investigated. We present QuickProbs 2, an algorithm for multiple sequence alignment. Based on probabilistic models, equipped with novel column-oriented refinement and selective consistency, it offers outstanding accuracy. When analysing hundreds of sequences, QuickProbs 2 is noticeably better than ClustalΩ and MAFFT, the previous leaders for processing numerous protein families. In the case of smaller sets, for which consistency-based methods are the best performing, QuickProbs 2 is also superior to the competitors. Due to the low computational requirements of selective consistency and utilization of massively parallel architectures, the presented algorithm has execution times similar to ClustalΩ, and is orders of magnitude faster than full consistency approaches, like MSAProbs or PicXAA. All these make QuickProbs 2 an excellent tool for aligning families ranging from a few to hundreds of proteins. PMID:28139687
Subband Coding Methods for Seismic Data Compression
NASA Technical Reports Server (NTRS)
Kiely, A.; Pollara, F.
1995-01-01
This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
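The abstract does not specify the filter bank; the progressive-transmission idea can be sketched with a one-level Haar subband split (a hypothetical stand-in for the actual subband coder): the low band alone yields a coarse waveform, and the detail band refines it on request.

    import numpy as np

    def haar_analysis(x):
        """One-level Haar subband split (even-length signal assumed)."""
        lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse approximation band
        hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (refinement) band
        return lo, hi

    def haar_synthesis(lo, hi):
        x = np.empty(2 * len(lo))
        x[0::2] = (lo + hi) / np.sqrt(2)
        x[1::2] = (lo - hi) / np.sqrt(2)
        return x

    x = np.sin(np.linspace(0, 8 * np.pi, 64))
    lo, hi = haar_analysis(x)
    coarse = haar_synthesis(lo, np.zeros_like(hi))     # first transmission pass
    refined = haar_synthesis(lo, hi)                   # after a refinement request
    print(np.linalg.norm(coarse - x), np.linalg.norm(refined - x))  # error drops to ~0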
Efficacy determinants of subcutaneous microdose glucagon during closed-loop control.
Russell, Steven J; El-Khatib, Firas H; Nathan, David M; Damiano, Edward R
2010-11-01
During a previous clinical trial of a closed-loop blood glucose (BG) control system that administered insulin and microdose glucagon subcutaneously, glucagon was not uniformly effective in preventing hypoglycemia (BG<70 mg/dl). After a global adjustment of control algorithm parameters used to model insulin absorption and clearance to more closely match insulin pharmacokinetic (PK) parameters observed in the study cohort, administration of glucagon by the control system was more effective in preventing hypoglycemia. We evaluated the role of plasma insulin and plasma glucagon levels in determining whether glucagon was effective in preventing hypoglycemia. We identified and analyzed 36 episodes during which glucagon was given and categorized them as either successful or unsuccessful in preventing hypoglycemia. In 20 of the 36 episodes, glucagon administration prevented hypoglycemia. In the remaining 16, BG fell below 70 mg/dl (12 of the 16 occurred during experiments performed before PK parameters were adjusted). The (dimensionless) levels of plasma insulin (normalized relative to each subject's baseline insulin level) were significantly higher during episodes ending in hypoglycemia (5.2 versus 3.7 times the baseline insulin level, p=.01). The relative error in the control algorithm's online estimate of the instantaneous plasma insulin level was also higher during episodes ending in hypoglycemia (50 versus 30%, p=.003), as were the peak plasma glucagon levels (183 versus 116 pg/ml, p=.007, normal range 50-150 pg/ml) and mean plasma glucagon levels (142 versus 75 pg/ml, p=.02). Relative to mean plasma insulin levels, mean plasma glucagon levels tended to be 59% higher during episodes ending in hypoglycemia, although this result was not found to be statistically significant (p=.14). The rate of BG descent was also significantly greater during episodes ending in hypoglycemia (1.5 versus 1.0 mg/dl/min, p=.02). Microdose glucagon administration was relatively ineffective in preventing hypoglycemia when plasma insulin levels exceeded the controller's online estimate by >60%. After the algorithm PK parameters were globally adjusted, insulin dosing was more conservative and microdose glucagon administration was very effective in reducing hypoglycemia while maintaining normal plasma glucagon levels. Improvements in the accuracy of the controller's online estimate of plasma insulin levels could be achieved if ultrarapid-acting insulin formulations could be developed with faster absorption and less intra- and intersubject variability than currently available insulin analogs. © 2010 Diabetes Technology Society.
Rossetti, Paolo; Quirós, Carmen; Moscardó, Vanessa; Comas, Anna; Giménez, Marga; Ampudia-Blasco, F Javier; León, Fabián; Montaser, Eslam; Conget, Ignacio; Bondia, Jorge; Vehí, Josep
2017-06-01
Postprandial (PP) control remains a challenge for closed-loop (CL) systems. Few studies, with inconsistent results, have systematically investigated the PP period. To compare a new CL algorithm with current pump therapy (open loop [OL]) in PP glucose control in type 1 diabetes (T1D) subjects, a crossover randomized study was performed in two centers. Twenty T1D subjects (F/M 13/7, age 40.7 ± 10.4 years, disease duration 22.6 ± 9.9 years, and A1c 7.8% ± 0.7%) underwent an 8-h mixed meal test on four occasions. In two (CL1/CL2), after meal announcement, a bolus was given followed by an algorithm-driven basal infusion based on continuous glucose monitoring (CGM). Alternatively, in OL1/OL2, conventional pump therapy was used. Main outcome measures were as follows: glucose variability, estimated with the coefficient of variation (CV) of the area under the curve (AUC) of plasma glucose (PG) and CGM values, and from the analysis of the glucose time series; mean, maximum (Cmax), and time-to-Cmax glucose concentrations; and time in range (<70, 70-180, >180 mg/dL). CVs of the glucose AUCs were low and similar in all studies (around 10%). However, CL achieved greater reproducibility and better PG control in the PP period: CL1 = CL2
Eric Rowell; Carl Selelstad; Lee Vierling; Lloyd Queen; Wayne Sheppard
2006-01-01
The success of a local maximum (LM) tree detection algorithm for detecting individual trees from lidar data depends on stand conditions that are often highly variable. A laser height variance and percent canopy cover (PCC) classification is used to segment the landscape by stand condition prior to stem detection. We test the performance of the LM algorithm using canopy...
Automated knot detection with visual post-processing of Douglas-fir veneer images
C.L. Todoroki; Eini C. Lowell; Dennis Dykstra
2010-01-01
Knots on digital images of 51 full veneer sheets, obtained from nine peeler blocks crosscut from two 35-foot (10.7 m) long logs and one 18-foot (5.5 m) log from a single Douglas-fir tree, were detected using a two-phase algorithm. The algorithm was developed using one image, the Development Sheet, refined on five other images, the Training Sheets, and then applied to...
Zhang, Yue; Zou, Huanxin; Luo, Tiancheng; Qin, Xianxiang; Zhou, Shilin; Ji, Kefeng
2016-01-01
The superpixel segmentation algorithm, as a preprocessing technique, should show good performance in fast segmentation speed, accurate boundary adherence and homogeneous regularity. A fast superpixel segmentation algorithm by iterative edge refinement (IER) works well on optical images. However, it may generate poor superpixels for Polarimetric synthetic aperture radar (PolSAR) images due to the influence of strong speckle noise and many small-sized or slim regions. To solve these problems, we utilized a fast revised Wishart distance instead of Euclidean distance in the local relabeling of unstable pixels, and initialized the unstable pixels as all the pixels substituted for the initial grid edge pixels in the initialization step. Then, postprocessing with the dissimilarity measure is employed to remove the generated small isolated regions as well as to preserve strong point targets. Finally, the superiority of the proposed algorithm is validated by extensive experiments on four simulated and two real-world PolSAR images from the Experimental Synthetic Aperture Radar (ESAR) and Airborne Synthetic Aperture Radar (AirSAR) data sets. Compared with three state-of-the-art methods, the proposed method performs better on several commonly used evaluation measures, with fine boundary adherence, strong point-target preservation, and about nine times higher computational efficiency. PMID:27754385
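The exact fast revised Wishart distance used by the authors is not given in the abstract; a commonly used symmetrized revised Wishart dissimilarity between PolSAR coherency matrices, shown here purely as an illustrative stand-in, is:

    import numpy as np

    def sym_revised_wishart(t1, t2):
        """Symmetrized revised Wishart dissimilarity between two q x q Hermitian
        positive-definite coherency matrices; 0 when t1 == t2."""
        q = t1.shape[0]
        return 0.5 * (np.trace(np.linalg.solve(t2, t1)) +
                      np.trace(np.linalg.solve(t1, t2))).real - q

    t = np.eye(3)
    print(sym_revised_wishart(t, t), sym_revised_wishart(t, 2 * t))  # 0.0, 0.75

Unlike the Euclidean distance on pixel intensities, this measure respects the statistics of speckled polarimetric covariance data, which is why Wishart-type distances are preferred for relabeling in PolSAR segmentation.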
Harte, P.T.; Mack, Thomas J.
1992-01-01
Hydrogeologic data collected since 1990 were assessed and a ground-water-flow model was refined in this study of the Milford-Souhegan glacial-drift aquifer in Milford, New Hampshire. The hydrogeologic data collected were used to refine estimates of hydraulic conductivity and saturated thickness of the aquifer, which were previously calculated during 1988-90. In October 1990, water levels were measured at 124 wells and piezometers, and at 45 stream-seepage sites on the main stem of the Souhegan River and on small tributary streams overlying the aquifer, to improve understanding of ground-water-flow patterns and stream-seepage gains and losses. Refinement of the ground-water-flow model included a reduction in the number of active cells in layer 2 in the central part of the aquifer, a revision of simulated hydraulic conductivity in the model layers representing the aquifer, incorporation of a new block-centered finite-difference ground-water-flow model, and incorporation of a new solution algorithm and solver (a preconditioned conjugate-gradient algorithm). Refinements to the model resulted in decreases in the difference between calculated and measured heads at 22 wells. The distribution of gains and losses of stream seepage calculated in simulation with the refined model is similar to that calculated in the previous model simulation. The contributing area to the Savage well, under average pumping conditions, decreased by 0.021 square miles from the area calculated in the previous model simulation. The small difference in the contributing recharge area indicates that the additional data did not substantially alter the model simulation and that the conceptual framework for the previous model is accurate.
Maillot, N; Guenancia, C; Yameogo, N V; Gudjoncik, A; Garnier, F; Lorgis, L; Chagué, F; Cottin, Y
2018-02-01
To interpret the electrocardiogram (ECG) of athletes, the recommendations of the ESC and the Seattle criteria define type 1 peculiarities, those induced by training, and type 2, those not induced by training, to rule out cardiomyopathy. The specificity of the screening was improved by Sheikh, who defined the "Refined Criteria," which include a group of intermediate peculiarities. The aim of our study was to investigate the influence of static and dynamic components on the prevalence of different types of abnormalities. The ECGs of 1030 athletes performed during preparticipation screening were interpreted using these three classifications. Our work revealed 62/16%, 69/13%, and 71/7% of type 1 peculiarities and type 2 abnormalities for the ESC, Seattle, and Refined Criteria algorithms, respectively (P<.001). For type 2 abnormalities, three independent factors were found for the ESC and Seattle criteria: age, Afro-Caribbean origin, and the dynamic component with, for the latter, an OR [95% CI] of 2.35 [1.28-4.33] (P=.006) and 1.90 [1.03-3.51] (P=.041), respectively. In contrast, only the Afro-Caribbean origin was associated with type 2 abnormalities using the Refined Criteria: OR [95% CI] 2.67 [1.60-4.46] (P<.0001). The Refined Criteria classified more athletes in the type 1 category and fewer in the type 2 category compared with the ESC and Seattle algorithms. Contrary to previous studies, a high dynamic component was not associated with type 2 abnormalities when the Refined Criteria were used; only the Afro-Caribbean origin remained associated. Further research is necessary to better understand adaptations with regard to duration and thus improve the modern criteria for ECG screening in athletes. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Fast digital zooming system using directionally adaptive image interpolation and restoration.
Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki
2014-01-01
This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or the adaptive cubic-spline interpolation filter is then selectively used, according to the refined edge orientation, to remove jagged artifacts in slanted edge regions. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation, using the directionally adaptive truncated constrained least squares (TCLS) filter. Both the proposed steerable-filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real-time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with an FIR filter-based fast computational structure.
Systems and methods for predicting materials properties
Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano
2007-11-06
Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.
TLS from fundamentals to practice
Urzhumtsev, Alexandre; Afonine, Pavel V.; Adams, Paul D.
2014-01-01
The Translation-Libration-Screw-rotation (TLS) model of rigid-body harmonic displacements introduced in crystallography by Schomaker & Trueblood (1968) is now a routine tool in macromolecular studies and is a feature of most modern crystallographic structure refinement packages. In this review we consider a number of simple examples that illustrate important features of the TLS model. Based on these examples simplified formulae are given for several special cases that may occur in structure modeling and refinement. The derivation of general TLS formulae from basic principles is also provided. This manuscript describes the principles of TLS modeling, as well as some select algorithmic details for practical application. An extensive list of applications references as examples of TLS in macromolecular crystallography refinement is provided. PMID:25249713
NASA Astrophysics Data System (ADS)
Gallimore, P. J.; Griffiths, P. T.; Pope, F. D.; Reid, J. P.; Kalberer, M.
2017-04-01
The chemical composition of organic aerosols profoundly influences their atmospheric properties, but a detailed understanding of heterogeneous and in-particle reactivity is lacking. We present here a combined experimental and modeling study of the ozonolysis of oleic acid particles. An online mass spectrometry (MS) method, Extractive Electrospray Ionization (EESI), is used to follow the composition of the aerosol at a molecular level in real time; relative changes in the concentrations of both reactants and products are determined during aerosol aging. The results show evidence for multiple non-first-order reactions involving stabilized Criegee intermediates, including the formation of secondary ozonides and other oligomers. Offline liquid chromatography MS is used to confirm the online MS assignment of the monomeric and dimeric products. We explain the observed EESI-MS chemical composition changes, and chemical and physical data from previous studies, using a process-based aerosol chemistry simulation, the Pretty Good Aerosol Model (PG-AM). In particular, we extend previous studies of reactant loss by demonstrating success in reproducing the time dependence of product formation and the evolving particle size. This advance requires a comprehensive chemical scheme coupled to the partitioning of semivolatile products; relevant reaction and evaporation parameters have been refined using our new measurements in combination with PG-AM.
New Force Field Model for Propylene Glycol: Insight to Local Structure and Dynamics.
Ferreira, Elisabete S C; Voroshylova, Iuliia V; Koverga, Volodymyr A; Pereira, Carlos M; Cordeiro, M Natália D S
2017-12-07
In this work we developed a new force field model (FFM) for propylene glycol (PG) based on the OPLS all-atom potential. The OPLS potential was refined using quantum chemical calculations, taking into account the densities and self-diffusion coefficients. The validation of this new FFM was carried out based on a wide range of physicochemical properties, such as density, enthalpy of vaporization, self-diffusion coefficients, isothermal compressibility, surface tension, and shear viscosity. The molecular dynamics (MD) simulations were performed over a large range of temperatures (293.15-373.15 K). The comparison with other force field models, such as OPLS, CHARMM27, and GAFF, revealed a large improvement of the results, allowing a better agreement with experimental data. Specific structural properties (radial distribution functions, hydrogen bonding and spatial distribution functions) were then analyzed in order to support the adequacy of the proposed FFM. Pure propylene glycol forms a continuous phase, displaying no microstructures. It is shown that the developed FFM gives rise to suitable results not only for pure propylene glycol but also for mixtures by testing its behavior for a 50 mol % aqueous propylene glycol solution. Furthermore, it is demonstrated that the addition of water to the PG phase produces a homogeneous solution and that the hydration interactions prevail over the propylene glycol self-association interactions.
NASA Astrophysics Data System (ADS)
Miller, N. C.; Lizarralde, D.; McGuire, J.; Hole, J. A.
2006-12-01
We consider methodologies, including survey design and processing algorithms, which are best suited to imaging vertical reflectors in oceanic crust using marine seismic techniques. The ability to image the reflectivity structure of transform faults as a function of depth, for example, may provide new insights into what controls seismicity along these plate boundaries. Turning-wave migration has been used with success to image vertical faults on land. With synthetic datasets we find that this approach has unique difficulties in the deep ocean. The fault-reflected crustal refraction phase (Pg-r) typically used in pre-stack migrations is difficult to isolate in marine seismic data. An "imagable" Pg-r is only observed in a time window between the first arrivals and arrivals from the sediments and the thick, slow water layer at offsets beyond ~25 km. Ocean-bottom seismometers (OBSs), as opposed to a long surface streamer, must be used to acquire data suitable for crustal-scale vertical imaging. The critical distance for Moho reflections (PmP) in oceanic crust is also ~25 km, thus Pg-r and PmP-r are observed with very little separation, and the fault-reflected mantle refraction (Pn-r) arrives prior to Pg-r as the observation window opens with increased OBS-to-fault distance. This situation presents difficulties for "first-arrival"-based Kirchhoff migration approaches and suggests that wave-equation approaches, which in theory can image all three phases simultaneously, may be more suitable for vertical imaging in oceanic crust. We will present a comparison of these approaches as applied to a synthetic dataset generated from realistic, stochastic velocity models. We will assess their suitability, the migration artifacts unique to the deep ocean, and the ideal instrument layout for such an experiment.
Tomography of Pg and Sg Across the Western United States Using USArray Data
NASA Astrophysics Data System (ADS)
Steck, L.; Phillips, W. S.; Begnaud, M. L.; Stead, R.
2009-12-01
In this paper we explore the use of Pg and Sg for determining crustal structure in the western United States. Seismic data used in the study come from USArray, along with local and regional networks in the region. To invert the travel times for velocity structure we use the LSQR algorithm, assuming a great circle arc path between source and receiver. First-difference smoothing is used to regularize the model, and we calculate station and event terms. For Pg we have about 160,000 arrivals from 30,000 events reporting at 1,500 stations. If we trim data based on an epicentral ground truth level of 25 km or better, we have 53,000 arrivals, 5,000 events and 1,300 stations. Data density is such that grids of 0.5 deg or better are possible. Velocity results show good correlation with tectonic provinces. We find fast velocities beneath the Snake River Plain, coastal Washington State, and for the coast ranges of California south of Point Reyes. Low velocities are observed on the border between Idaho and Montana, and in the Basin and Range of eastern Nevada, southeastern California, and southern Arizona. For Sg we have 48,813 arrivals for 13,548 events at 1,052 stations, not filtering by ground truth level. Excellent coverage allows grids to 0.5 deg or lower. Prominent features of this model include high velocities in the Snake River Plain, Colorado Plateau, and the Cascades and Sierra Nevada. Low velocities are found in Southern California, the Basin and Range, and the Columbia Plateau. Root-mean-square residual reductions are 34% for Pg and 41% for Sg.
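A schematic of the inversion step described above, assuming straight rays have already been traced into per-cell path lengths (a toy 1-D model, not the authors' production setup): travel-time residuals t = Gs are inverted for slowness perturbations s with LSQR, with first-difference smoothing rows appended for regularization.

    import numpy as np
    from scipy.sparse import csr_matrix, vstack
    from scipy.sparse.linalg import lsqr

    n = 50                                                # slowness cells (toy model)
    rng = np.random.default_rng(1)
    hits = rng.random((200, n)) < 0.1                     # which cells each ray crosses
    G = csr_matrix(rng.uniform(0, 20, (200, n)) * hits)   # path lengths (km)
    s_true = np.zeros(n); s_true[20:30] = 0.05            # slow anomaly (s/km)
    t = G @ s_true + rng.normal(scale=0.05, size=200)     # noisy travel-time residuals

    D = csr_matrix(np.eye(n) - np.eye(n, k=1))[:-1]       # first-difference smoother
    lam = 5.0                                             # smoothing weight
    A = vstack([G, lam * D])
    b = np.concatenate([t, np.zeros(n - 1)])
    s_est = lsqr(A, b)[0]
    print(s_est[20:30].mean(), s_est[:10].mean())         # anomaly vs. background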
Learning Cue Phrase Patterns from Radiology Reports Using a Genetic Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, Robert M; Beckerman, Barbara G; Potok, Thomas E
2009-01-01
Various computer-assisted technologies have been developed to assist radiologists in detecting cancer; however, the algorithms still lack high degrees of sensitivity and specificity, and must undergo machine learning against a training set with known pathologies in order to further refine them toward higher validity. This work describes an approach to learning cue phrase patterns in radiology reports that utilizes a genetic algorithm (GA) as the learning method. The approach described here successfully learned cue phrase patterns for two distinct classes of radiology reports. These patterns can then be used as a basis for automatically categorizing, clustering, or retrieving relevant data for the user.
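A minimal sketch of the GA idea, with an invented token vocabulary and fitness function purely for illustration (the paper's actual pattern representation and fitness are not given in the abstract):

    import random

    VOCAB = ["mass", "calcification", "benign", "density", "asymmetry", "stable"]

    def fitness(pattern, positives, negatives):
        """Reward patterns whose cue words all occur in the target class only."""
        hits = sum(all(w in rep for w in pattern) for rep in positives)
        false_alarms = sum(all(w in rep for w in pattern) for rep in negatives)
        return hits - 2 * false_alarms

    def evolve(positives, negatives, pop_size=30, gens=40):
        pop = [random.sample(VOCAB, 2) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda p: fitness(p, positives, negatives), reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                child = list({random.choice(a), random.choice(b)})   # crossover
                if random.random() < 0.2:                            # mutation
                    child[random.randrange(len(child))] = random.choice(VOCAB)
                children.append(child)
            pop = survivors + children
        return pop[0]

    positives = ["irregular mass with calcification seen", "new focal density and asymmetry"]
    negatives = ["stable benign findings", "no suspicious density identified"]
    print(evolve(positives, negatives))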
Real-time image dehazing using local adaptive neighborhoods and dark-channel-prior
NASA Astrophysics Data System (ADS)
Valderrama, Jesus A.; Díaz-Ramírez, Víctor H.; Kober, Vitaly; Hernandez, Enrique
2015-09-01
A real-time algorithm for single-image dehazing is presented. The algorithm is based on calculating local neighborhoods of the hazed image inside a moving window. The local neighborhoods are constructed by computing rank-order statistics. Next, the dark-channel-prior approach is applied to the local neighborhoods to estimate the transmission function of the scene. With the suggested approach there is no need to apply a refining algorithm, such as soft matting, to the estimated transmission. To achieve high-rate signal processing, the proposed algorithm is implemented exploiting massive parallelism on a graphics processing unit (GPU). Computer simulations are carried out to test the performance of the proposed algorithm in terms of dehazing efficiency and speed of processing. These tests are performed using several synthetic and real images. The obtained results are analyzed and compared with those of existing dehazing algorithms.
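For reference, the core dark-channel-prior transmission estimate, here computed over plain square windows rather than the paper's rank-order neighborhoods (a simplification):

    import numpy as np
    from scipy.ndimage import minimum_filter

    def transmission_dark_channel(img, atmosphere, win=15, omega=0.95):
        """img: HxWx3 hazy image in [0,1]; atmosphere: length-3 airlight estimate.
        Returns the dark-channel-prior transmission map t(x)."""
        norm = img / np.asarray(atmosphere)[None, None, :]
        dark = minimum_filter(norm.min(axis=2), size=win)   # local min over window
        return 1.0 - omega * dark

    def dehaze(img, atmosphere, t0=0.1):
        t = np.clip(transmission_dark_channel(img, atmosphere), t0, 1.0)
        return (img - atmosphere) / t[..., None] + atmosphere

    img = np.clip(np.random.rand(64, 64, 3) * 0.4 + 0.5, 0, 1)   # synthetic hazy input
    out = dehaze(img, np.array([0.95, 0.96, 0.97]))

The per-pixel independence of the window minimum is what makes this estimate map so well onto a GPU.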
Fully automatic hp-adaptivity for acoustic and electromagnetic scattering in three dimensions
NASA Astrophysics Data System (ADS)
Kurtz, Jason Patrick
We present an algorithm for fully automatic hp-adaptivity for finite element approximations of elliptic and Maxwell boundary value problems in three dimensions. The algorithm automatically generates a sequence of coarse grids, and a corresponding sequence of fine grids, such that the energy norm of the error decreases exponentially with respect to the number of degrees of freedom in either sequence. At each step, we employ a discrete optimization algorithm to determine the refinements for the current coarse grid such that the projection-based interpolation error for the current fine grid solution decreases with an optimal rate with respect to the number of degrees of freedom added by the refinement. The refinements are restricted only by the requirement that the resulting mesh is at most 1-irregular, but they may be anisotropic in both element size h and order of approximation p. While we cannot prove that our method converges at all, we present numerical evidence of exponential convergence for a diverse suite of model problems from acoustic and electromagnetic scattering. In particular we show that our method is well suited to the automatic resolution of exterior problems truncated by the introduction of a perfectly matched layer. To enable and accelerate the solution of these problems on commodity hardware, we include a detailed account of three critical aspects of our implementation, namely an efficient implementation of sum factorization, several efficient interfaces to the direct multi-frontal solver MUMPS, and some fast direct solvers for the computation of a sequence of nested projections.
A View from Above Without Leaving the Ground
NASA Technical Reports Server (NTRS)
2004-01-01
In order to deliver accurate geospatial data and imagery to the remote sensing community, NASA is constantly developing new image-processing algorithms while refining existing ones for technical improvement. For 8 years, the NASA Regional Applications Center at Florida International University has served as a test bed for implementing and validating many of these algorithms, helping the Space Program to fulfill its strategic and educational goals in the area of remote sensing. The algorithms in return have helped the NASA Regional Applications Center develop comprehensive semantic database systems for data management, as well as new tools for disseminating geospatial information via the Internet.
NASA GPM GV Science Implementation
NASA Technical Reports Server (NTRS)
Petersen, W. A.
2009-01-01
Pre-launch algorithm development & post-launch product evaluation: The GPM GV paradigm moves beyond traditional direct validation/comparison activities by incorporating improved algorithm physics & model applications (end-to-end validation) in the validation process. Three approaches: 1) National Network (surface): Operational networks to identify and resolve first-order discrepancies (e.g., bias) between satellite and ground-based precipitation estimates. 2) Physical Process (vertical column): Cloud system and microphysical studies geared toward testing and refinement of physically-based retrieval algorithms. 3) Integrated (4-dimensional): Integration of satellite precipitation products into coupled prediction models to evaluate strengths/limitations of satellite precipitation products.
Parallel, stochastic measurement of molecular surface area.
Juba, Derek; Varshney, Amitabh
2008-08-01
Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
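A minimal sequential sketch of the stochastic idea (the paper's GPU kernel layout is omitted): sample points on each atom sphere, keep those not buried inside any other sphere, and let the estimate refine as the sample count grows.

    import numpy as np

    def mc_surface_area(centers, radii, n_samples=2000, rng=None):
        """Progressive stochastic estimate of the area of a union of spheres:
        sample points on each sphere, keep those not buried inside any other
        sphere. Increasing n_samples refines the running estimate."""
        rng = rng or np.random.default_rng()
        total = 0.0
        for i, (c, r) in enumerate(zip(centers, radii)):
            d = rng.normal(size=(n_samples, 3))
            pts = c + r * d / np.linalg.norm(d, axis=1, keepdims=True)
            exposed = np.ones(n_samples, dtype=bool)
            for j, (cj, rj) in enumerate(zip(centers, radii)):
                if j != i:
                    exposed &= np.linalg.norm(pts - cj, axis=1) >= rj
            total += 4.0 * np.pi * r * r * exposed.mean()
        return total

    # two overlapping unit spheres 1.0 apart: exact exposed area is 6*pi ~ 18.85
    print(mc_surface_area(np.array([[0., 0., 0.], [1., 0., 0.]]),
                          np.array([1.0, 1.0])))

The kept sample points lie on the molecular surface itself, which is what enables the point-based rendering mentioned in the abstract.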
1977-09-01
to state as successive input bits are brought into the encoder. We can more easily follow our progress on the equivalent lattice diagram where... [Fig. 12: Convolutional Encoder, State Diagram and Lattice] ...and can in fact be traced. The Viterbi algorithm can be simply described with the aid of this lattice. Note that the nodes of the lattice represent
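The fragment above walks the code lattice (trellis); a compact hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 convolutional code (generators 7, 5 octal — an assumed example, since the report's exact code is not recoverable here) illustrates the procedure:

    G = (0b111, 0b101)   # generator taps for the rate-1/2, K=3 code (7,5 octal)
    N_STATES = 4         # state = the two most recent input bits

    def encode_step(state, bit):
        """Shift one input bit into the register; return (next_state, output pair)."""
        reg = (bit << 2) | state
        out = tuple(bin(reg & g).count("1") & 1 for g in G)
        return reg >> 1, out

    def viterbi(received):
        """Hard-decision Viterbi decoding over the code lattice (trellis)."""
        INF = float("inf")
        metric = [0.0] + [INF] * (N_STATES - 1)      # encoder starts in state 0
        paths = [[] for _ in range(N_STATES)]
        for r in received:
            new_metric = [INF] * N_STATES
            new_paths = [[] for _ in range(N_STATES)]
            for s in range(N_STATES):
                if metric[s] == INF:
                    continue
                for bit in (0, 1):
                    ns, out = encode_step(s, bit)
                    d = (out[0] != r[0]) + (out[1] != r[1])   # Hamming branch metric
                    if metric[s] + d < new_metric[ns]:        # keep the survivor path
                        new_metric[ns] = metric[s] + d
                        new_paths[ns] = paths[s] + [bit]
            metric, paths = new_metric, new_paths
        best = min(range(N_STATES), key=lambda s: metric[s])
        return paths[best]

    bits = [1, 0, 0, 1, 1, 0, 0]
    state, tx = 0, []
    for b in bits:
        state, out = encode_step(state, b)
        tx.append(out)
    tx[2] = (tx[2][0] ^ 1, tx[2][1])      # inject a single channel-bit error
    print(viterbi(tx) == bits)            # True: the error is corrected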
Department of the Navy Supporting Data for FY1991 Budget Estimates Descriptive Summaries
1990-01-01
deployments of the F/A-18 aircraft. f. (U) Engineering and technical support for AAS-38 tracker and F/A-18 C/D WSSA. g. (U) Provided support to ATARS program...for preliminary testing of RECCE/ATARS common nose and associated air data computer (ADC) algorithms. h. (U) Initiated integration of full HARPOON and...to ATARS program for testing of flight control computer software.
Preliminary Design of an Autonomous Amphibious System
2016-09-01
changing vehicle dynamics will require innovative new autonomy algorithms. The developed software architecture, drive-by-wire kit, and supporting... [Contents fragments: Communications Architecture; Drive-by-Wire Design; Software Maturation Plans; Drive-by-Wire Planned Refinement]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dekker, A.G.; Hoogenboom, H.J.; Rijkeboer, M.
1997-06-01
Deriving thematic maps of water quality parameters from a remote sensing image requires a number of processing steps, such as calibration, atmospheric correction, air/water interface correction, and application of water quality algorithms. A prototype software environment has recently been developed that enables the user to perform and control these processing steps. Main parts of this environment are: (i) access to the MODTRAN 3 radiative transfer code for removing atmospheric and air-water interface influences, (ii) a tool for analyzing algorithms for estimating water quality, and (iii) a spectral database containing apparent and inherent optical properties and associated water quality parameters. The use of the software is illustrated by applying the implemented algorithms for estimating chlorophyll to data from a spectral library of Dutch inland waters with CHL ranging from 1 to 500 µg l-1. The algorithms currently implemented in the Toolkit software are recommended for optically simple waters, but for optically complex waters development of more advanced retrieval methods is required.
i3Drefine software for protein 3D structure refinement and its assessment in CASP10.
Bhattacharya, Debswapna; Cheng, Jianlin
2013-01-01
Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling, to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, a consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement remains a largely unsolved problem, with the majority of participating groups in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as 'MULTICOM-CONSTRUCT') was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software, systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, and compare it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/.
NASA Astrophysics Data System (ADS)
Hu, Zixi; Yao, Zhewei; Li, Jinglai
2017-03-01
Many scientific and engineering problems require Bayesian inference for unknowns of infinite dimension. In such problems, many standard Markov Chain Monte Carlo (MCMC) algorithms become arbitrarily slow under mesh refinement, which is referred to as being dimension dependent. To address this, a family of dimension-independent MCMC algorithms, known as the preconditioned Crank-Nicolson (pCN) methods, was proposed to sample infinite dimensional parameters. In this work we develop an adaptive version of the pCN algorithm, where the covariance operator of the proposal distribution is adjusted based on sampling history to improve the simulation efficiency. We show that the proposed algorithm satisfies an important ergodicity condition under some mild assumptions. Finally we provide numerical examples to demonstrate the performance of the proposed method.
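For reference, a minimal (non-adaptive) pCN sampler for a target proportional to exp(-Φ(u)) times a Gaussian prior N(0, C); the proposal leaves the prior invariant, which is what makes the acceptance rule mesh-independent. The paper's adaptive covariance tuning is omitted from this sketch.

    import numpy as np

    def pcn(phi, C_sqrt, n_steps, beta=0.2, rng=None):
        """Preconditioned Crank-Nicolson MCMC for exp(-phi(u)) x N(0, C).
        phi: negative log-likelihood; C_sqrt: a square root of the prior covariance."""
        rng = rng or np.random.default_rng()
        d = C_sqrt.shape[0]
        u = C_sqrt @ rng.normal(size=d)                  # start from a prior draw
        phi_u, samples = phi(u), []
        for _ in range(n_steps):
            xi = C_sqrt @ rng.normal(size=d)
            v = np.sqrt(1.0 - beta**2) * u + beta * xi   # proposal preserves the prior
            phi_v = phi(v)
            if np.log(rng.random()) < phi_u - phi_v:     # accept w.p. min(1, e^{phi_u - phi_v})
                u, phi_u = v, phi_v
            samples.append(u.copy())
        return np.array(samples)

    # toy usage: 10-dimensional Gaussian prior with a quadratic misfit
    chain = pcn(lambda u: 0.5 * np.sum((u - 1.0)**2), np.eye(10), 2000)

Note that the acceptance probability involves only the likelihood potential Φ, not the prior density, so the rule remains well defined as the discretization dimension grows.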
Collaborative Localization and Location Verification in WSNs
Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang
2015-01-01
Localization is one of the most important technologies in wireless sensor networks (WSNs). A lightweight distributed node localization scheme is proposed by considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming to solve the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948
Spatiotemporal Local-Remote Senor Fusion (ST-LRSF) for Cooperative Vehicle Positioning.
Jeong, Han-You; Nguyen, Hoa-Hung; Bhawiyuga, Adhitya
2018-04-04
Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in the intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of a vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighbor vehicles via vehicle-to-everything communications. Given both estimates of vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle state, proposes a spatiotemporal dissimilarity metric between two reference vehicle states, and presents a greedy algorithm to compute a minimal weighted matching (MWM) between them. Given the outcome of MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy for many different scenarios of cooperative vehicle positioning.
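A sketch of the greedy matching step described above, assuming the spatiotemporal dissimilarity has already been evaluated into a cost matrix (the metric itself and the gating value are placeholders, not the paper's definitions):

    import numpy as np

    def greedy_min_weight_matching(cost, gate=np.inf):
        """cost[i, j]: spatiotemporal dissimilarity between local track i and
        remote report j. Pair up cheapest-first, each index used at most once."""
        pairs, used_i, used_j = [], set(), set()
        for i, j in zip(*np.unravel_index(np.argsort(cost, axis=None), cost.shape)):
            if i not in used_i and j not in used_j and cost[i, j] <= gate:
                pairs.append((int(i), int(j)))
                used_i.add(i); used_j.add(j)
        return pairs

    cost = np.array([[0.2, 1.5],
                     [1.1, 0.3]])
    print(greedy_min_weight_matching(cost, gate=1.0))   # [(0, 0), (1, 1)]

Greedy matching is suboptimal in general but runs in near-linear time after the sort, which suits the per-frame latency budget of cooperative positioning.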
NASA Astrophysics Data System (ADS)
Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander
2012-02-01
Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
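The pseudo-spectral update mentioned above advances the chain propagator of the modified diffusion equation, dq/ds = ∇²q − w·q, by operator splitting with FFTs on a uniform periodic grid; one step might look like the following sketch (illustrative only, not PolySwift++ code). Its reliance on a uniform grid is exactly what breaks down under adaptive mesh refinement.

    import numpy as np

    def pseudo_spectral_step(q, w, ds, L):
        """Advance dq/ds = lap(q) - w*q one step by Strang splitting on a
        periodic grid of box size L (equal resolution along every axis)."""
        n = q.shape[0]
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
        k2 = sum(np.meshgrid(*([k**2] * q.ndim), indexing="ij"))
        q = np.exp(-0.5 * ds * w) * q                               # half potential step
        q = np.fft.ifftn(np.exp(-ds * k2) * np.fft.fftn(q)).real    # exact diffusion step
        return np.exp(-0.5 * ds * w) * q                            # half potential step

    q = np.ones((32, 32))                 # initial propagator q(r, 0) = 1
    w = np.zeros((32, 32))                # toy (zero) exchange field
    q = pseudo_spectral_step(q, w, 0.01, 1.0)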
NASA Technical Reports Server (NTRS)
Stramski, Dariusz; Stramska, Malgorzata; Starr, David OC. (Technical Monitor)
2002-01-01
The overall goal of this project was to validate and refine ocean color algorithms at high latitudes in the north polar region of the Atlantic. The specific objectives were defined as follows: (1) to identify and quantify errors in the satellite-derived water-leaving radiances and chlorophyll concentration; (2) to develop understanding of these errors; and (3) to improve in-water ocean color algorithms for retrieving chlorophyll concentration in the investigated region.
NASA Technical Reports Server (NTRS)
Baker, A. J.
1974-01-01
The finite-element method is used to establish a numerical solution algorithm for the Navier-Stokes equations for two-dimensional flows of a viscous compressible fluid. Numerical experiments confirm the advection property for the finite-element equivalent of the nonlinear convection term for both unidirectional and recirculating flowfields. For linear functionals, the algorithm demonstrates good accuracy using coarse discretizations and h squared convergence with discretization refinement.
Quantitative Structure Retention Relationships of Polychlorinated Dibenzodioxins and Dibenzofurans
1991-08-01
be a projection onto the X-Y plane. The algorithm for this calculation can be found in Stouch and Jurs (22), but was further refined by Rohrbaugh and...through-space distances. [Descriptor table fragments: WPSA2, weighted positive charged surface area; MOMH2, second major moment of inertia with hydrogens attached; CSTR3, sum...] ...of the models. The robust regression analysis method calculates a regression model using a least median squares algorithm, which is not as susceptible
Motion compensation for ultra wide band SAR
NASA Technical Reports Server (NTRS)
Madsen, S.
2001-01-01
This paper describes an algorithm that combines wavenumber domain processing with a procedure that enables motion compensation to be applied as a function of target range and azimuth angle. First, data are processed with nominal motion compensation applied, partially focusing the image, then the motion compensation of individual subpatches is refined. The results show that the proposed algorithm is effective in compensating for deviations from a straight flight path, from both a performance and a computational efficiency point of view.
NASA Astrophysics Data System (ADS)
Pioldi, Fabio; Ferrari, Rosalba; Rizzi, Egidio
2016-02-01
The present paper deals with the seismic modal dynamic identification of frame structures by a refined Frequency Domain Decomposition (rFDD) algorithm, autonomously formulated and implemented within MATLAB. First, the output-only identification technique is outlined analytically and then employed to characterize all modal properties. Synthetic response signals generated prior to the dynamic identification are adopted as input channels, in view of assessing a necessary condition for the procedure's efficiency. Initially, the algorithm is verified on canonical input from random excitation. Then, modal identification has been attempted successfully on given seismic input, taken as base excitation, including both strong motion data and single and multiple input ground motions. Rather than investigating the role of seismic response signals in the Time Domain, as in previous attempts, this paper considers the identification analysis in the Frequency Domain. Results turn out to be very consistent with the target values, with quite limited errors in the modal estimates, including for the damping ratios, which range from values on the order of 1% to 10%. Both seismic excitation and high damping values, which become critical also in the case of well-spaced modes, violate traditional FDD assumptions: this demonstrates the consistency of the developed algorithm. Through original strategies and arrangements, the paper shows that a comprehensive rFDD modal dynamic identification of frames under seismic input is feasible, also at concomitant high damping.
NASA Astrophysics Data System (ADS)
Nanayakkara, Nuwan D.; Samarabandu, Jagath; Fenster, Aaron
2006-04-01
Estimation of prostate location and volume is essential in determining a dose plan for ultrasound-guided brachytherapy, a common prostate cancer treatment. However, manual segmentation is difficult, time consuming and prone to variability. In this paper, we present a semi-automatic discrete dynamic contour (DDC) model based image segmentation algorithm, which effectively combines a multi-resolution model refinement procedure together with the domain knowledge of the image class. The segmentation begins on a low-resolution image by defining a closed DDC model by the user. This contour model is then deformed progressively towards higher resolution images. We use a combination of a domain knowledge based fuzzy inference system (FIS) and a set of adaptive region based operators to enhance the edges of interest and to govern the model refinement using a DDC model. The automatic vertex relocation process, embedded into the algorithm, relocates deviated contour points back onto the actual prostate boundary, eliminating the need of user interaction after initialization. The accuracy of the prostate boundary produced by the proposed algorithm was evaluated by comparing it with a manually outlined contour by an expert observer. We used this algorithm to segment the prostate boundary in 114 2D transrectal ultrasound (TRUS) images of six patients scheduled for brachytherapy. The mean distance between the contours produced by the proposed algorithm and the manual outlines was 2.70 ± 0.51 pixels (0.54 ± 0.10 mm). We also showed that the algorithm is insensitive to variations of the initial model and parameter values, thus increasing the accuracy and reproducibility of the resulting boundaries in the presence of noise and artefacts.
SMAP validation of soil moisture products
USDA-ARS?s Scientific Manuscript database
The Soil Moisture Active Passive (SMAP) satellite will be launched by the National Aeronautics and Space Administration in October 2014. SMAP will also incorporate a rigorous calibration and validation program that will support algorithm refinement and provide users with information on the accuracy ...
Redundant Coding in Visual Search Displays: Effects of Shape and Colour.
1997-02-01
results for refining color selection algorithms and for color coding in situations where the gamut of available colors is limited. In a secondary set of analyses, we note large performance differences as a function of target shape.
46 CFR 52.01-100 - Openings and compensation (modifies PG-32 through PG-39, PG-42 through PG-55).
Code of Federal Regulations, 2013 CFR
2013-10-01
46 CFR Part 52 (Shipping; Coast Guard, Department of Homeland Security; Marine Engineering; Power Boilers; General Requirements), § 52.01-100 Openings and compensation (modifies PG-32 through PG-39, PG-42...
Fierro, S; Viñoles, C; Olivera-Muzante, J
2016-04-01
To determine estrous, ovarian and reproductive responses after different prostaglandin (PG)-based protocols, ewes were assigned to groups PG10, PG12, PG14 or PG16 (two PG injections administered 10, 12, 14 or 16 days apart, respectively). Experiment I (n=132) was conducted to evaluate the estrous response, ovulation rate (OR), conception and fertility. Experiment II (n=24) was conducted to evaluate ovarian follicle growth, steroid concentrations and the interval from the second PG injection to estrus (PG-estrus) and ovulation (PG-ovulation). Estrous response was lower with the PG16 (P<0.05) treatment, and the extent of estrous synchrony was greater with the PG10 and PG12 treatments. Ovarian follicle growth and the intervals for the variables PG-estrus, PG-ovulation and OR were similar among groups (P>0.05). From 8 to 4 days before estrus, progesterone (P4) concentrations were greater for the PG14 and PG16 than for the PG10 and PG12 (P<0.05) groups. There were more days on which concentrations of P4 were above 3.18 nmol/L with the PG14 and PG16 than with the PG10 and PG12 (P<0.05) treatments. Use of the PG14 and PG16 treatments resulted in greater estradiol (E2) at estrus and 12 h later than use of the PG10 and PG12 treatments. A positive correlation was observed between the duration of the luteal phase and maximum E2 concentrations, and between the duration of the luteal phase and the number of days with E2 concentrations above 10 pmol/L. Conception and fertility were greater with use of the PG14 compared with the PG10 and PG12 (P<0.05) treatments. The administration of two PG injections 10, 12, 14 or 16 days apart resulted in different durations of the luteal phase that were positively associated with E2 concentrations and the reproductive outcome. The shorter luteal phases were associated with greater synchrony in the time of estrus. Copyright © 2016 Elsevier B.V. All rights reserved.
Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods
NASA Astrophysics Data System (ADS)
Koreň, Milan; Mokroš, Martin; Bucha, Tomáš
2017-12-01
This study compares the accuracies of diameter at breast height (DBH) estimation by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
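For context, the standard algebraic least-squares (Kåsa) circle fit that such methods are commonly compared against; the points below stand in for a breast-height slice of the stem point cloud. This is a generic formulation, not the authors' exact optimal-circle implementation.

    import numpy as np

    def kasa_circle_fit(x, y):
        """Algebraic least-squares circle fit: (x-a)^2 + (y-b)^2 = r^2 rearranges
        to the linear system [2x 2y 1][a b c]^T = x^2 + y^2, with c = r^2 - a^2 - b^2."""
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
        a, b, c = sol
        r = np.sqrt(c + a**2 + b**2)
        return a, b, 2 * r                  # center and estimated DBH (diameter)

    # noisy three-quarter arc, as seen from a single scan position
    rng = np.random.default_rng(2)
    th = rng.uniform(0, 1.5 * np.pi, 300)
    x = 0.21 * np.cos(th) + rng.normal(scale=0.003, size=300)
    y = 0.21 * np.sin(th) + rng.normal(scale=0.003, size=300)
    print(kasa_circle_fit(x, y))            # expect a diameter near 0.42 m

The single-scan case is harder precisely because only an arc of the cross-section is visible, which is where refining methods earn their keep.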
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002). B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).
Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method
Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagarajan
2012-01-01
The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine the low resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested the constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we have demonstrated the use of a constraint free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. In the eight proteins with three decoys for each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a “Freeze and Thaw” clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and possibly applicable for high throughput protein structure refinement. PMID:22260550
Temperature - Emissivity Separation Assessment in a Sub-Urban Scenario
NASA Astrophysics Data System (ADS)
Moscadelli, M.; Diani, M.; Corsini, G.
2017-10-01
In this paper, a methodology is presented that aims at evaluating the effectiveness of different TES strategies. The methodology takes into account the specific material of interest in the monitored scenario, sensor characteristics, and errors in the atmospheric compensation step. It is proposed as a way to predict and analyse algorithm performance during the planning of a remote sensing mission aimed at discovering specific materials of interest in the monitored scenario. As a case study, the proposed methodology is applied to a real airborne data set of a suburban scenario. To address the TES problem, three state-of-the-art algorithms and a recently proposed one are investigated: the Temperature-Emissivity Separation '98 (TES-98) algorithm, the Stepwise Refining TES (SRTES) algorithm, the Linear piecewise TES (LTES) algorithm, and the Optimized Smoothing TES (OSTES) algorithm. Finally, the accuracies obtained with real data and those predicted by means of the proposed methodology are compared and discussed.
NASA Astrophysics Data System (ADS)
Yang, Dongxu; Zhang, Huifang; Liu, Yi; Chen, Baozhang; Cai, Zhaonan; Lü, Daren
2017-08-01
Monitoring atmospheric carbon dioxide (CO2) from space-borne state-of-the-art hyperspectral instruments can provide a high precision global dataset to improve carbon flux estimation and reduce the uncertainty of climate projection. Here, we introduce a carbon flux inversion system for estimating carbon flux with satellite measurements under the support of "The Strategic Priority Research Program of the Chinese Academy of Sciences—Climate Change: Carbon Budget and Relevant Issues". The carbon flux inversion system is composed of two separate parts: the Institute of Atmospheric Physics Carbon Dioxide Retrieval Algorithm for Satellite Remote Sensing (IAPCAS), and CarbonTracker-China (CT-China), developed at the Chinese Academy of Sciences. The Greenhouse gases Observing SATellite (GOSAT) measurements are used in the carbon flux inversion experiment. To improve the quality of the IAPCAS-GOSAT retrieval, we have developed a post-screening and bias correction method, resulting in 25%-30% of the data remaining after quality control. Based on these data, the seasonal variation of XCO2 (column-averaged CO2 dry-air mole fraction) is studied, and a strong relation with vegetation cover and population is identified. Then, the IAPCAS-GOSAT XCO2 product is used in carbon flux estimation by CT-China. The net ecosystem CO2 exchange is -0.34 Pg C yr-1 (±0.08 Pg C yr-1), with a large error reduction of 84%, which is a significant improvement on the error reduction when compared with in situ-only inversion.
Automatic computation of 2D cardiac measurements from B-mode echocardiography
NASA Astrophysics Data System (ADS)
Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin
2012-03-01
We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies that learn expert knowledge from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points for the measurements by utilizing the structure of the left ventricle, including the mitral valve and aortic valve. It employs a pseudo anatomic M-mode image, generated by accumulating line images in the 2D parasternal long-axis view over time, to refine the measurement landmark points. Experimental results with a large volume of data show that the algorithm runs fast and is robust, with accuracy comparable to that of experts.
[Research on non-rigid registration of multi-modal medical image based on Demons algorithm].
Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang
2014-02-01
Non-rigid medical image registration is a popular research topic in medical imaging and has important clinical value. In this paper we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional data optimization problem. Finally, we used multi-scale hierarchical refinement to handle large-deformation registration. The experimental results showed that the proposed algorithm had good effects for large-deformation and multi-modal three-dimensional medical image registration.
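For orientation, the classic single-modal demons force that the improved energy function builds on can be sketched in a few lines. This is the baseline Thirion-style update with invented test images, not the gray-conservation/structure-tensor energy or the L-BFGS optimization described above:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def demons_update(fixed, warped_moving, u_y, u_x, sigma=1.5):
        # One classic demons step: the intensity difference drives a
        # displacement along the fixed-image gradient, then the field is
        # regularized by Gaussian smoothing.
        gy, gx = np.gradient(fixed)
        diff = warped_moving - fixed
        denom = gx ** 2 + gy ** 2 + diff ** 2
        denom[denom == 0] = 1.0
        u_y = gaussian_filter(u_y - diff * gy / denom, sigma)
        u_x = gaussian_filter(u_x - diff * gx / denom, sigma)
        return u_y, u_x

    fixed = np.zeros((64, 64)); fixed[20:40, 20:40] = 1.0
    moving = np.roll(fixed, 2, axis=1)      # toy 2-pixel shift
    u_y, u_x = demons_update(fixed, moving,
                             np.zeros_like(fixed), np.zeros_like(fixed))
    print(abs(u_x).max())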
Capkun, Gorana; Lahoz, Raquel; Verdun, Elisabetta; Song, Xue; Chen, Weston; Korn, Jonathan R; Dahlke, Frank; Freitas, Rita; Fraeman, Kathy; Simeone, Jason; Johnson, Barbara H; Nordstrom, Beth
2015-05-01
Administrative claims databases provide a wealth of data for assessing the effect of treatments in clinical practice. Our aim was to propose methodology for real-world studies in multiple sclerosis (MS) using these databases. In three large US administrative claims databases: MarketScan, PharMetrics Plus and Department of Defense (DoD), patients with MS were selected using an algorithm identified in the published literature and refined for accuracy. Algorithms for detecting newly diagnosed ('incident') MS cases were also refined and tested. Methodology based on resource and treatment use was developed to differentiate between relapses with and without hospitalization. When various patient selection criteria were applied to the MarketScan database, an algorithm requiring two MS diagnoses at least 30 days apart was identified as the preferred method of selecting patient cohorts. Attempts to detect incident MS cases were confounded by the limited continuous enrollment of patients in these databases. Relapse detection algorithms identified similar proportions of patients in the MarketScan and PharMetrics Plus databases experiencing relapses with (2% in both databases) and without (15-20%) hospitalization in the 1 year follow-up period, providing findings in the range of those in the published literature. Additional validation of the algorithms proposed here would increase their credibility. The methods suggested in this study offer a good foundation for performing real-world research in MS using administrative claims databases, potentially allowing evidence from different studies to be compared and combined more systematically than in current research practice.
NASA Astrophysics Data System (ADS)
Rybakin, B.; Bogatencov, P.; Secrieru, G.; Iliuha, N.
2013-10-01
The paper deals with a parallel algorithm for calculations on multiprocessor computers and GPU accelerators. Results are presented for the interaction of shock waves with a low-density bubble and for the problem of gas flow under gravity. The algorithm combines the ability to capture shock waves at high resolution, the second-order accuracy of TVD schemes, and the low numerical diffusion of the advection scheme. Many complex problems of continuum mechanics are numerically solved on structured or unstructured grids. To improve the accuracy of the calculations, it is necessary to choose a sufficiently fine grid (with a small cell size). This has the drawback of substantially increasing the computation time. Therefore, for the calculation of complex problems it is reasonable to use the method of Adaptive Mesh Refinement. That is, the grid refinement is performed only in the areas of interest, where, e.g., shock waves are generated, or where complex geometry or other such features exist. Thus, the computing time is greatly reduced. In addition, execution of the application on the resulting sequence of nested, successively finer grids can be parallelized. The proposed algorithm is based on the AMR method. Utilization of the AMR method can significantly improve the resolution of the difference grid in areas of high interest and, at the same time, accelerate the calculation of multi-dimensional problems. Parallel algorithms for the analyzed difference models were implemented for calculations on graphics processors using CUDA technology [1].
Structure and atomic correlations in molecular systems probed by XAS reverse Monte Carlo refinement
NASA Astrophysics Data System (ADS)
Di Cicco, Andrea; Iesari, Fabio; Trapananti, Angela; D'Angelo, Paola; Filipponi, Adriano
2018-03-01
The Reverse Monte Carlo (RMC) algorithm for structure refinement has been applied to x-ray absorption spectroscopy (XAS) multiple-edge data sets for six gas phase molecular systems (SnI2, CdI2, BBr3, GaI3, GeBr4, GeI4). Sets of thousands of molecular replicas were involved in the refinement process, driven by the XAS data and constrained by available electron diffraction results. The equilibrated configurations were analysed to determine the average three-dimensional structure and obtain reliable bond and bond-angle distributions. Detectable deviations from Gaussian models were found in some cases. This work shows that a RMC refinement of XAS data is able to provide geometrical models for molecular structures compatible with present experimental evidence. The validation of this approach on simple molecular systems is particularly important in view of its possible simple extension to more complex and extended systems including metal-organic complexes, biomolecules, or nanocrystalline systems.
Vanishing Point Extraction and Refinement for Robust Camera Calibration
Tsai, Fuan
2017-01-01
This paper describes a flexible camera calibration method using refined vanishing points without prior information. Vanishing points are estimated from human-made features like parallel lines and repeated patterns. With the vanishing points extracted from the three mutually orthogonal directions, the interior and exterior orientation parameters can be further calculated using collinearity condition equations. A vanishing point refinement process is proposed to reduce the uncertainty caused by vanishing point localization errors. The fine-tuning algorithm is based on the divergence of grouped feature points projected onto the reference plane, minimizing the standard deviation of each of the grouped collinear points with an O(1) computational complexity. This paper also presents an automated vanishing point estimation approach based on the cascaded Hough transform. The experimental results indicate that the vanishing point refinement process can significantly improve camera calibration parameters, and the root mean square error (RMSE) of the constructed 3D model can be reduced by about 30%. PMID:29280966
Object-based change detection method using refined Markov random field
NASA Astrophysics Data System (ADS)
Peng, Daifeng; Zhang, Yongjun
2017-01-01
In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is implemented to measure the distance among different histogram distributions. Meanwhile, object heterogeneity is calculated by combining spectral and textural histogram distances using adaptive weights. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.
An adaptive interpolation scheme for molecular potential energy surfaces
NASA Astrophysics Data System (ADS)
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
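A minimal sketch of the adaptive idea (not the authors' code): fit a polyharmonic spline interpolant, estimate the local error at candidate points, and add nodes only where the estimate exceeds a tolerance. The 1D model function, tolerance, and use of SciPy's RBFInterpolator are illustrative assumptions.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def f(x):                            # stand-in potential, 1D for clarity
        return np.exp(-x ** 2) * np.cos(3 * x)

    nodes = np.linspace(-3.0, 3.0, 9)    # coarse initial sampling
    for _ in range(6):                   # refinement sweeps
        interp = RBFInterpolator(nodes[:, None], f(nodes),
                                 kernel='thin_plate_spline')
        mids = 0.5 * (nodes[:-1] + nodes[1:])           # candidate points
        err = np.abs(interp(mids[:, None]) - f(mids))   # local error estimate
        new = mids[err > 1e-3]           # refine only where the error is large
        if new.size == 0:
            break
        nodes = np.sort(np.concatenate([nodes, new]))
    print(len(nodes), 'nodes after adaptive refinement')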
Bayesian ensemble refinement by replica simulations and reweighting.
Hummer, Gerhard; Köfinger, Jürgen
2015-12-28
We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
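The linear scaling rule above can be made concrete in a few lines; this sketches only the replica-coupled restraint energy (values and spring constant are invented), not the authors' full Bayesian machinery.

    import numpy as np

    def replica_restraint_energy(s_replicas, s_exp, k, n_replicas):
        # The restraint acts on the replica-averaged observable; scaling the
        # spring constant linearly with n_replicas is the condition stated
        # in the abstract for convergence to the optimal Bayesian ensemble.
        s_avg = s_replicas.mean()
        return 0.5 * k * n_replicas * (s_avg - s_exp) ** 2

    print(replica_restraint_energy(np.array([1.1, 0.9, 1.3]), 1.0, 10.0, 3))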
Parallel-Vector Algorithm For Rapid Structural Analysis
NASA Technical Reports Server (NTRS)
Agarwal, Tarun R.; Nguyen, Duc T.; Storaasli, Olaf O.
1993-01-01
New algorithm developed to overcome deficiency of skyline storage scheme by use of variable-band storage scheme. Exploits both parallel and vector capabilities of modern high-performance computers. Gives engineers and designers opportunity to include more design variables and constraints during optimization of structures. Enables use of more refined finite-element meshes to obtain improved understanding of complex behaviors of aerospace structures leading to better, safer designs. Not only attractive for current supercomputers but also for next generation of shared-memory supercomputers.
Self-Avoiding Walks Over Adaptive Triangular Grids
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1999-01-01
Space-filling curves are a popular approach, based on a geometric embedding, for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.
NASA Technical Reports Server (NTRS)
Kitzis, J. L.; Kitzis, S. N.
1979-01-01
The brightness temperature data produced by the SMMR Antenna Pattern Correction algorithm are evaluated. The evaluation consists of: (1) a direct comparison of the outputs of the interim, cross, and nominal APC modes; (2) a refinement of the previously determined cos beta estimates; and (3) a comparison of the world brightness temperature (T sub B) map with actual SMMR measurements.
Arbitrary-level hanging nodes for adaptive hp-FEM approximations in 3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavel Kus; Pavel Solin; David Andrs
2014-11-01
In this paper we discuss constrained approximation with arbitrary-level hanging nodes in adaptive higher-order finite element methods (hp-FEM) for three-dimensional problems. This technique enables the use of highly irregular meshes and greatly simplifies the design of adaptive algorithms, as it prevents refinements from propagating recursively through the finite element mesh. The technique makes it possible to design efficient adaptive algorithms for purely hexahedral meshes. We present a detailed mathematical description of the method and illustrate it with numerical examples.
Two Improved Algorithms for Envelope and Wavefront Reduction
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1997-01-01
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
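For orientation, a brief sketch of the envelope bookkeeping, using SciPy's Reverse Cuthill-McKee as the baseline reordering (the improved Sloan and hybrid spectral algorithms described above are not available in SciPy; the matrix here is random and illustrative):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    def envelope_size(a):
        # Sum over rows of (i - first nonzero column index in row i).
        a = sp.csr_matrix(a)
        total = 0
        for i in range(a.shape[0]):
            cols = a.indices[a.indptr[i]:a.indptr[i + 1]]
            if cols.size:
                total += i - cols.min()
        return total

    a = sp.random(200, 200, density=0.02,
                  random_state=np.random.default_rng(0))
    a = sp.csr_matrix(((a + a.T) != 0).astype(int)) \
        + sp.identity(200, dtype=int, format='csr')   # symmetric pattern
    perm = reverse_cuthill_mckee(a, symmetric_mode=True)
    b = a[perm, :][:, perm]
    print('envelope before:', envelope_size(a), 'after:', envelope_size(b))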
PHYTOPLANKTON AND BIOMASS DISTRIBUTION AT POTENTIAL OTEC SITES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, P.W.; Horne, A.J.
1979-06-01
Net or large phytoplankton species composition and most phytoplankton abundance were measured at three OTEC sites. In the Gulf of Mexico and Hawaii, diatoms dominated, while the blue-green alga Trichodesmium was most common at Puerto Rico. The species ratio of diatoms to dinoflagellates was approximately 1:1. The species diversity varied from site to site, Hawaii > Puerto Rico > Gulf of Mexico. Chlorophyll a, which is a measure of the pigment of all algae size ranges, showed a subsurface peak of 0.14-0.4 µg per liter at 75 to 125 m. Occasional surface peaks up to 0.4 µg per liter occurred. Further refinement of collection techniques is needed to delineate the subtle environmental effects expected from OTEC plant discharges.
An Adaptive Mesh Algorithm: Mesh Structure and Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scannapieco, Anthony J.
2016-06-21
The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and to coarsen zones in regions where the physics is modally sparse.
Millán, Claudia; Sammito, Massimo Domenico; McCoy, Airlie J; Nascimento, Andrey F Ziem; Petrillo, Giovanna; Oeffner, Robert D; Domínguez-Gil, Teresa; Hermoso, Juan A; Read, Randy J; Usón, Isabel
2018-04-01
Macromolecular structures can be solved by molecular replacement provided that suitable search models are available. Models from distant homologues may deviate too much from the target structure to succeed, notwithstanding an overall similar fold or even their featuring areas of very close geometry. Successful methods to make the most of such templates usually rely on the degree of conservation to select and improve search models. ARCIMBOLDO_SHREDDER uses fragments derived from distant homologues in a brute-force approach driven by the experimental data, instead of by sequence similarity. The new algorithms implemented in ARCIMBOLDO_SHREDDER are described in detail, illustrating its characteristic aspects in the solution of new and test structures. In an advance from the previously published algorithm, which was based on omitting or extracting contiguous polypeptide spans, model generation now uses three-dimensional volumes respecting structural units. The optimal fragment size is estimated from the expected log-likelihood gain (LLG) values computed assuming that a substructure can be found with a level of accuracy near that required for successful extension of the structure, typically below 0.6 Å root-mean-square deviation (r.m.s.d.) from the target. Better sampling is attempted through model trimming or decomposition into rigid groups and optimization through Phaser's gyre refinement. Also, after model translation, packing filtering and refinement, models are either disassembled into predetermined rigid groups and refined (gimble refinement) or Phaser's LLG-guided pruning is used to trim the model of residues that are not contributing signal to the LLG at the target r.m.s.d. value. Phase combination among consistent partial solutions is performed in reciprocal space with ALIXE. Finally, density modification and main-chain autotracing in SHELXE serve to expand to the full structure and identify successful solutions. The performance on test data and the solution of new structures are described.
Navigation Algorithms for the SeaWiFS Mission
NASA Technical Reports Server (NTRS)
Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Patt, Frederick S.; McClain, Charles R. (Technical Monitor)
2002-01-01
The navigation algorithms for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) were designed to meet the requirement of 1-pixel accuracy: a standard deviation (sigma) of 2. The objective has been to extract the best possible accuracy from the spacecraft telemetry and avoid the need for costly manual renavigation or geometric rectification. The requirement is addressed by postprocessing of both the Global Positioning System (GPS) receiver and Attitude Control System (ACS) data in the spacecraft telemetry stream. The navigation algorithms described are separated into four areas: orbit processing, attitude sensor processing, attitude determination, and final navigation processing. There has been substantial modification during the mission of the attitude determination and attitude sensor processing algorithms. For the former, the basic approach was completely changed during the first year of the mission, from a single-frame deterministic method to a Kalman smoother. This was done for several reasons: a) to improve the overall accuracy of the attitude determination, particularly near the sub-solar point; b) to reduce discontinuities; c) to support the single-ACS-string spacecraft operation that was started after the first mission year, which causes gaps in attitude sensor coverage; and d) to handle data quality problems (which became evident after launch) in the direct-broadcast data. The changes to the attitude sensor processing algorithms primarily involved the development of a model for the Earth horizon height, also needed for single-string operation; the incorporation of improved sensor calibration data; and improved data quality checking and smoothing to handle the data quality issues. The attitude sensor alignments have also been revised multiple times, generally in conjunction with the other changes. The orbit and final navigation processing algorithms have remained largely unchanged during the mission, aside from refinements to data quality checking. Although further improvements are certainly possible, future evolution of the algorithms is expected to be limited to refinements of the methods presented here, and no substantial changes are anticipated.
DOT National Transportation Integrated Search
2008-10-01
The FHWA has strongly encouraged transportation departments to display travel times on their Dynamic Message Signs (DMS). The Oregon : Department of Transportation (ODOT) currently displays travel time estimates on three DMSs in the Portland metropol...
46 CFR 52.01-95 - Design (modifies PG-16 through PG-31 and PG-100).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Design (modifies PG-16 through PG-31 and PG-100). 52.01-95 Section 52.01-95 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-95 Design (modifies PG-16 through PG-31 and PG-100). (a) Requirements. Boilers required to be designe...
46 CFR 52.01-95 - Design (modifies PG-16 through PG-31 and PG-100).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Design (modifies PG-16 through PG-31 and PG-100). 52.01-95 Section 52.01-95 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-95 Design (modifies PG-16 through PG-31 and PG-100). (a) Requirements. Boilers required to be designe...
NASA Technical Reports Server (NTRS)
Kahn, Ralph A.; Gaitley, Barbara J.; Martonchik, John V.; Diner, David J.; Crean, Kathleen A.; Holben, Brent
2005-01-01
Performance of the Multiangle Imaging Spectroradiometer (MISR) early postlaunch aerosol optical thickness (AOT) retrieval algorithm is assessed quantitatively over land and ocean by comparison with a 2-year measurement record of globally distributed AERONET Sun photometers. There are sufficient coincident observations to stratify the data set by season and expected aerosol type. In addition to reporting uncertainty envelopes, we identify trends and outliers, and investigate their likely causes, with the aim of refining algorithm performance. Overall, about 2/3 of the MISR-retrieved AOT values fall within [0.05 or 20% x AOT] of Aerosol Robotic Network (AERONET) values. More than a third are within [0.03 or 10% x AOT]. Correlation coefficients are highest for maritime stations (approx. 0.9), and lowest for dusty sites (more than approx. 0.7). Retrieved spectral slopes closely match Sun photometer values for biomass burning and continental aerosol types. Detailed comparisons suggest that adding to the algorithm climatology more absorbing spherical particles, more realistic dust analogs, and a richer selection of multimodal aerosol mixtures would reduce the remaining discrepancies for MISR retrievals over land; in addition, refining instrument low-light-level calibration could reduce or eliminate a small but systematic offset in maritime AOT values. On the basis of cases for which current particle models are representative, a second-generation MISR aerosol retrieval algorithm incorporating these improvements could provide AOT accuracy unprecedented for a spaceborne technique.
A novel algorithm for validating peptide identification from a shotgun proteomics search engine.
Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J
2013-03-01
Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to peptides by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data-refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.
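A toy sketch of the SVM-based refining step: train a linear SVM on a couple of SEQUEST scoring attributes and keep matches on the positive side of the decision function. Feature names, toy values, and the use of scikit-learn are assumptions; the paper's actual attributes and thresholds differ.

    import numpy as np
    from sklearn.svm import SVC

    # Columns: XCorr, deltaCn (illustrative attributes); labels: 1 = target
    # hit believed correct, 0 = decoy hit used to model incorrect matches.
    x = np.array([[3.2, 0.35], [2.8, 0.30], [1.1, 0.05], [0.9, 0.02]])
    y = np.array([1, 1, 0, 0])

    clf = SVC(kernel='linear').fit(x, y)
    scores = clf.decision_function(x)
    keep = scores > 0.0        # cutoff would be tuned to a fixed FDR
    print(keep)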
NASA Technical Reports Server (NTRS)
Chen, Wei-Ting; Kahn, Ralph A.; Nelson, David; Yau, Kevin; Seinfeld, John H.
2008-01-01
The treatment of biomass burning (BB) carbonaceous particles in the Multiangle Imaging SpectroRadiometer (MISR) Standard Aerosol Retrieval Algorithm is assessed, and algorithm refinements are suggested, based on a theoretical sensitivity analysis and comparisons with near-coincident AERONET measurements at representative BB sites. Over the natural ranges of BB aerosol microphysical and optical properties observed in past field campaigns, patterns of retrieved Aerosol Optical Depth (AOD), particle size, and single scattering albedo (SSA) are evaluated. On the basis of the theoretical analysis, assuming total column AOD of 0.2, over a dark, uniform surface, MISR can distinguish two to three groups in each of size and SSA, except when the assumed atmospheric particles are significantly absorbing (mid-visible SSA approx. 0.84), or of medium sizes (mean radius approx. 0.13 µm); sensitivity to absorbing, medium-large size particles increases considerably when the assumed column AOD is raised to 0.5. MISR Research Aerosol Retrievals confirm the theoretical results, based on coincident AERONET inversions under BB-dominated conditions. When BB is externally mixed with dust in the atmosphere, dust optical model and surface reflection uncertainties, along with spatial variability, contribute to differences between the Research Retrievals and AERONET. These results suggest specific refinements to the MISR Standard Aerosol Algorithm complement of component particles and mixtures. They also highlight the importance for satellite aerosol retrievals of surface reflectance characterization, with accuracies that can be difficult to achieve with coupled surface-aerosol algorithms in some higher AOD situations.
Robust Kalman filter design for predictive wind shear detection
NASA Technical Reports Server (NTRS)
Stratton, Alexander D.; Stengel, Robert F.
1991-01-01
Severe, low-altitude wind shear is a threat to aviation safety. Airborne sensors under development measure the radial component of wind along a line directly in front of an aircraft. In this paper, optimal estimation theory is used to define a detection algorithm to warn of hazardous wind shear from these sensors. To achieve robustness, a wind shear detection algorithm must distinguish threatening wind shear from less hazardous gustiness, despite variations in wind shear structure. This paper presents statistical analysis methods to refine wind shear detection algorithm robustness. Computational methods predict the ability to warn of severe wind shear and avoid false warning. Comparative capability of the detection algorithm as a function of its design parameters is determined, identifying designs that provide robust detection of severe wind shear.
Electro-optic tracking R&D for defense surveillance
NASA Astrophysics Data System (ADS)
Sutherland, Stuart; Woodruff, Chris J.
1995-09-01
Two aspects of work on automatic target detection and tracking for electro-optic (EO) surveillance are described. Firstly, a detection and tracking algorithm test-bed developed by DSTO and running on a PC under Windows NT is being used to assess candidate algorithms for unresolved and minimally resolved target detection. The structure of this test-bed is described and examples are given of its user interfaces and outputs. Secondly, a development by Australian industry under a Defence-funded contract, of a reconfigurable generic track processor (GTP) is outlined. The GTP will include reconfigurable image processing stages and target tracking algorithms. It will be used to demonstrate to the Australian Defence Force automatic detection and tracking capabilities, and to serve as a hardware base for real time algorithm refinement.
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for rejecting false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology, as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
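As a software reference for the rejection step, here is a compact NumPy sketch of RANSAC fitting a projective model (homography) while discarding false matches; this is floating point and illustrative, not the paper's fixed-point hardware pipeline.

    import numpy as np

    def fit_homography(src, dst):
        # Direct linear transform from 4 or more point pairs.
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows))
        return vt[-1].reshape(3, 3)

    def ransac_homography(src, dst, iters=500, tol=2.0, seed=0):
        rng = np.random.default_rng(seed)
        best = np.zeros(len(src), bool)
        for _ in range(iters):
            idx = rng.choice(len(src), 4, replace=False)
            h = fit_homography(src[idx], dst[idx])
            p = np.c_[src, np.ones(len(src))] @ h.T
            proj = p[:, :2] / p[:, 2:3]
            inliers = np.linalg.norm(proj - dst, axis=1) < tol
            if inliers.sum() > best.sum():
                best = inliers
        return fit_homography(src[best], dst[best]), best

    src = np.array([[0, 0], [4, 0], [4, 4], [0, 4], [2, 2], [1, 3]], float)
    dst = src + np.array([2.0, 3.0])
    dst[-1] += 7.0                      # one gross false match
    h, inliers = ransac_homography(src, dst)
    print(inliers)                      # the corrupted pair is flagged False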
Spatiotemporal Local-Remote Sensor Fusion (ST-LRSF) for Cooperative Vehicle Positioning
Bhawiyuga, Adhitya
2018-01-01
Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in the intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of a vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighbor vehicles via vehicle-to-everything communications. Given both estimates of vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle state, proposes a spatiotemporal dissimilarity metric between two reference vehicle states, and presents a greedy algorithm to compute a minimal weighted matching (MWM) between them. Given the outcome of MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy for many different scenarios of cooperative vehicle positioning. PMID:29617341
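A small sketch of the greedy minimum-weight matching step, assuming the pairwise spatiotemporal dissimilarities have already been collected in a cost matrix (the matrix below is invented):

    import numpy as np

    def greedy_min_weight_matching(cost):
        # Sort all (local i, remote j) pairs by dissimilarity and take the
        # cheapest pair whose endpoints are both still unmatched.
        pairs = sorted((cost[i, j], i, j)
                       for i in range(cost.shape[0])
                       for j in range(cost.shape[1]))
        used_i, used_j, match = set(), set(), []
        for c, i, j in pairs:
            if i not in used_i and j not in used_j:
                match.append((i, j, c))
                used_i.add(i)
                used_j.add(j)
        return match

    cost = np.array([[0.2, 1.5], [1.1, 0.3]])
    print(greedy_min_weight_matching(cost))   # [(0, 0, 0.2), (1, 1, 0.3)]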
i3Drefine Software for Protein 3D Structure Refinement and Its Assessment in CASP10
Bhattacharya, Debswapna; Cheng, Jianlin
2013-01-01
Protein structure refinement refers to the process of improving the quality of protein structures during structure modeling to bring them closer to their native states. Structure refinement has been drawing increasing attention in the community-wide Critical Assessment of techniques for Protein Structure prediction (CASP) experiments since its addition in the 8th CASP experiment. During the 9th and the recently concluded 10th CASP experiments, consistent growth in the number of refinement targets and participating groups has been witnessed. Yet protein structure refinement remains a largely unsolved problem, with the majority of groups participating in the CASP refinement category failing to consistently improve the quality of structures issued for refinement. To address this need, we developed a completely automated and computationally efficient protein 3D structure refinement method, i3Drefine, based on an iterative and highly convergent energy minimization algorithm with a powerful all-atom composite physics- and knowledge-based force field and a hydrogen bonding (HB) network optimization technique. In the recent community-wide blind experiment, CASP10, i3Drefine (as ‘MULTICOM-CONSTRUCT’) was ranked as the best method in the server section as per the official assessment of the CASP10 experiment. Here we provide the community with free access to the i3Drefine software and systematically analyse the performance of i3Drefine in strict blind mode on the refinement targets issued in the CASP10 refinement category, comparing it with other state-of-the-art refinement methods participating in CASP10. Our analysis demonstrates that i3Drefine was the only fully automated server participating in CASP10 exhibiting consistent improvement over the initial structures in both global and local structural quality metrics. An executable version of i3Drefine is freely available at http://protein.rnet.missouri.edu/i3drefine/. PMID:23894517
An Automatic Registration Algorithm for 3D Maxillofacial Model
NASA Astrophysics Data System (ADS)
Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng
2016-09-01
3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including facial surface models and skull models. Our proposed registration algorithm achieves good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, and has potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
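Steps (1)-(2) require feature libraries, but the ICP refinement in step (3) fits in a few lines; this NumPy/SciPy sketch (synthetic points, small rotation) shows the closest-point loop with a Kabsch/SVD alignment, not the authors' implementation.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, dst, iters=30):
        tree = cKDTree(dst)
        r, t = np.eye(3), np.zeros(3)
        for _ in range(iters):
            cur = src @ r.T + t
            _, idx = tree.query(cur)          # closest-point correspondences
            m = dst[idx]
            mu_s, mu_d = cur.mean(0), m.mean(0)
            h = (cur - mu_s).T @ (m - mu_d)   # cross-covariance
            u, _, vt = np.linalg.svd(h)
            d = np.sign(np.linalg.det(vt.T @ u.T))
            r_step = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
            r, t = r_step @ r, r_step @ t + (mu_d - r_step @ mu_s)
        return r, t

    pts = np.random.default_rng(3).normal(size=(200, 3))
    c, s = np.cos(0.2), np.sin(0.2)
    r_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    r, t = icp(pts, pts @ r_true.T + np.array([0.5, 0.0, -0.2]))
    print(np.abs(r - r_true).max())           # should be near zero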
Genetic Algorithm Calibration of Probabilistic Cellular Automata for Modeling Mining Permit Activity
Louis, S.J.; Raines, G.L.
2003-01-01
We use a genetic algorithm to calibrate a spatially and temporally resolved cellular automaton to model mining activity on public land in Idaho and western Montana. The genetic algorithm searches through a space of transition rule parameters of a two-dimensional cellular automaton model to find rule parameters that fit observed mining activity data. Previous work by one of the authors in calibrating the cellular automaton took weeks; the genetic algorithm takes a day and produces rules leading to about the same (or better) fit to observed data. These preliminary results indicate that genetic algorithms are a viable tool for calibrating cellular automata for this application. Experience gained during the calibration of this cellular automaton suggests that mineral resource information is a critical factor in the quality of the results. With automated calibration, further refinements of how the mineral-resource information is provided to the cellular automaton will probably improve our model.
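A toy sketch of the calibration loop (the cellular automaton itself is reduced to a placeholder; the fitness function, bounds, and operators are invented, not the authors' model):

    import numpy as np
    rng = np.random.default_rng(1)

    def run_ca(params):
        # Placeholder for the cellular automaton: the real model would
        # evolve a grid of cells under the transition-rule parameters.
        return params.sum()

    def fitness(params, observed):
        return -abs(run_ca(params) - observed)

    observed = 1.7
    pop = rng.uniform(0, 1, size=(20, 4))       # 20 candidate rule vectors
    for _ in range(50):
        fit = np.array([fitness(p, observed) for p in pop])
        parents = pop[np.argsort(fit)[-10:]]    # keep the better half
        children = parents[rng.integers(0, 10, 10)].copy()
        children += rng.normal(0, 0.05, children.shape)   # mutation
        pop = np.vstack([parents, children])
    best = max(pop, key=lambda p: fitness(p, observed))
    print(best, fitness(best, observed))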
A finite element method to correct deformable image registration errors in low-contrast regions
NASA Astrophysics Data System (ADS)
Zhong, Hualiang; Kim, Jinkoo; Li, Haisen; Nurushev, Teamour; Movsas, Benjamin; Chetty, Indrin J.
2012-06-01
Image-guided adaptive radiotherapy requires deformable image registration to map radiation dose back and forth between images. The purpose of this study is to develop a novel method to improve the accuracy of an intensity-based image registration algorithm in low-contrast regions. A computational framework has been developed in this study to improve the quality of the ‘demons’ registration. For each voxel in the registration's target image, the standard deviation of image intensity in a neighborhood of this voxel was calculated. A mask for high-contrast regions was generated based on their standard deviations. In the masked regions, a tetrahedral mesh was refined recursively so that a sufficient number of tetrahedral nodes in these regions could be selected as driving nodes. An elastic system driven by the displacements of the selected nodes was formulated using a finite element method (FEM) and implemented on the refined mesh. The displacements of these driving nodes were generated with the ‘demons’ algorithm. The solution of the system was derived using a conjugate gradient method and interpolated to generate a displacement vector field for the registered images. The FEM correction method was compared with the ‘demons’ algorithm on the computed tomography (CT) images of lung and prostate patients. The performance of the FEM correction relative to the ‘demons’ registration was analyzed based on the physical properties of their deformation maps, and quantitatively evaluated through a benchmark model developed specifically for this study. Compared to the benchmark model, the ‘demons’ registration has a maximum error of 1.2 cm, which can be corrected by the FEM to 0.4 cm, and the average error of the ‘demons’ registration is reduced from 0.17 to 0.11 cm. For the CT images of lung and prostate patients, the deformation maps generated by the ‘demons’ algorithm were found to be unrealistic at several places. In these places, the displacement differences between the ‘demons’ registrations and their FEM corrections were in the range of 0.4 to 1.1 cm. The mesh refinement and FEM simulation were implemented in a single-threaded application which requires about 45 min of computation time on a 2.6 GHz computer. This study has demonstrated that the FEM can be integrated with intensity-based image registration algorithms to improve their registration accuracy, especially in low-contrast regions.
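The FEM machinery is beyond a few lines, but the masking step described above (flagging voxels whose neighborhood intensity standard deviation is high) is easy to sketch; the window size, threshold, and random volume are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def high_contrast_mask(img, size=5, thresh=25.0):
        # Local variance via the identity var = E[x^2] - E[x]^2 over a
        # size^3 neighborhood; driving nodes would be picked inside the mask.
        mean = uniform_filter(img.astype(float), size)
        mean_sq = uniform_filter(img.astype(float) ** 2, size)
        local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
        return local_std > thresh

    ct = np.random.default_rng(2).normal(0.0, 30.0, (64, 64, 64))
    print(high_contrast_mask(ct).mean())   # fraction of flagged voxels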
Local Surface Reconstruction from MER images using Stereo Workstation
NASA Astrophysics Data System (ADS)
Shin, Dongjoe; Muller, Jan-Peter
2010-05-01
The authors present a semi-automatic workflow that reconstructs the 3D shape of the martian surface from local stereo images delivered by Pancam or Navcam on systems such as the NASA Mars Exploration Rover (MER) Mission and, in the future, the ESA-NASA ExoMars rover PanCam. The process is initiated with manually selected tiepoints on a stereo workstation, which is then followed by tiepoint refinement, stereo-matching using region growing, and Levenberg-Marquardt Algorithm (LMA)-based bundle adjustment processing. The stereo workstation, which is being developed by UCL in collaboration with colleagues at the Jet Propulsion Laboratory (JPL) within the EU FP7 ProVisG project, includes a set of practical GUI-based tools that enable an operator to define a visually correct tiepoint via a stereo display. To achieve platform and graphics hardware independence, the stereo application has been implemented using JPL's JADIS graphics library, which is written in JAVA, and the remaining processing blocks used in the reconstruction workflow have also been developed as a JAVA package to increase code re-usability, portability and compatibility. Although initial tiepoints from the stereo workstation are reasonably acceptable as true correspondences, it is often necessary to employ an optional validity check and/or quality-enhancing process. To meet this requirement, the workflow has been designed to include a tiepoint refinement process based on the Adaptive Least Square Correlation (ALSC) matching algorithm, so that the initial tiepoints can be further enhanced to sub-pixel precision or rejected if they fail to pass the ALSC matching threshold. Apart from the accuracy of reconstruction, the other criterion for assessing the quality of reconstruction is its density (or completeness), which is not attained in the refinement process. Thus, we re-implemented a stereo region growing process, which is a core matching algorithm within the UCL-HRSC reconstruction workflow. This algorithm's performance is reasonable even for close-range imagery, so long as the stereo pair does not have too large a baseline displacement. For post-processing, a Bundle Adjustment (BA) is used to optimise the initial calibration parameters, which bootstrap the reconstruction results. Amongst the many options for non-linear optimisation, the LMA has been adopted due to its stability, so that the BA searches for the best calibration parameters whilst iteratively minimising the re-projection errors of the initial reconstruction points. For evaluation, the result of the proposed method is compared with the reconstruction from a disparity map provided by JPL using their operational processing system. Visual and quantitative comparisons will be presented, as well as updated camera parameters. As future work, we will investigate a method to speed up the stereo region growing process and look into the possibility of extending the use of the stereo workstation to orbital image processing. Such an interactive stereo workstation can also be used to digitize points and line features, as well as to assess the accuracy of stereo-processed results produced from other stereo matching algorithms available within the consortium and elsewhere. It can also provide "ground truth", when suitably refined, for stereo matching algorithms, as well as visual cues as to why these matching algorithms sometimes fail, helping to mitigate this in the future.
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 218814 "PRoVisG".
Integrated segmentation of cellular structures
NASA Astrophysics Data System (ADS)
Ajemba, Peter; Al-Kofahi, Yousef; Scott, Richard; Donovan, Michael; Fernandez, Gerardo
2011-03-01
Automatic segmentation of cellular structures is an essential step in image cytology and histology. Despite substantial progress, better automation and improvements in accuracy and adaptability to novel applications are needed. In applications utilizing multi-channel immuno-fluorescence images, challenges include misclassification of epithelial and stromal nuclei, irregular nuclei and cytoplasm boundaries, and over and under-segmentation of clustered nuclei. Variations in image acquisition conditions and artifacts from nuclei and cytoplasm images often confound existing algorithms in practice. In this paper, we present a robust and accurate algorithm for jointly segmenting cell nuclei and cytoplasm using a combination of ideas to reduce the aforementioned problems. First, an adaptive process that includes top-hat filtering, Eigenvalues-of-Hessian blob detection and distance transforms is used to estimate the inverse illumination field and correct for intensity non-uniformity in the nuclei channel. Next, a minimum-error-thresholding based binarization process and seed-detection combining Laplacian-of-Gaussian filtering constrained by a distance-map-based scale selection is used to identify candidate seeds for nuclei segmentation. The initial segmentation using a local maximum clustering algorithm is refined using a minimum-error-thresholding technique. Final refinements include an artifact removal process specifically targeted at lumens and other problematic structures and a systemic decision process to reclassify nuclei objects near the cytoplasm boundary as epithelial or stromal. Segmentation results were evaluated using 48 realistic phantom images with known ground-truth. The overall segmentation accuracy exceeds 94%. The algorithm was further tested on 981 images of actual prostate cancer tissue. The artifact removal process worked in 90% of cases. The algorithm has now been deployed in a high-volume histology analysis application.
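A sketch of the seed-detection flavor of the pipeline using scikit-image stand-ins (synthetic image; white top-hat for illumination artifacts, Otsu in place of the minimum-error threshold, and LoG blobs for nuclei seeds; all parameters are invented and the paper's actual pipeline is more elaborate):

    import numpy as np
    from skimage import filters, feature, morphology

    rng = np.random.default_rng(0)
    img = rng.poisson(5, (128, 128)).astype(float)   # noisy background
    yy, xx = np.mgrid[:128, :128]
    for cy, cx in [(40, 40), (80, 90)]:              # two synthetic nuclei
        img += 40.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 30.0)

    flat = morphology.white_tophat(img, morphology.disk(15))  # de-shade
    binary = flat > filters.threshold_otsu(flat)     # stand-in threshold
    seeds = feature.blob_log(flat, min_sigma=3, max_sigma=12,
                             num_sigma=5, threshold=2.0)
    print(binary.sum(), len(seeds))                  # mask size, seed count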
Enhancements in Deriving Smoke Emission Coefficients from Fire Radiative Power Measurements
NASA Technical Reports Server (NTRS)
Ellison, Luke; Ichoku, Charles
2011-01-01
Smoke emissions have long been quantified after the fact by simple multiplication of burned area, biomass density, fraction of above-ground biomass, and burn efficiency. A new algorithm has been suggested, as described in Ichoku & Kaufman (2005), for calculating smoke emissions directly from fire radiative power (FRP) measurements, such that the latency and uncertainty associated with the previously listed variables are avoided. Application of this new, simpler and more direct algorithm is automatic, based only on a fire's FRP measurement and a predetermined coefficient of smoke emission for a given location. Attaining accurate coefficients of smoke emission is therefore critical to the success of this algorithm. In the aforementioned paper, an initial effort was made to derive coefficients of smoke emission for different large regions of interest using calculations of smoke emission rates from MODIS FRP and aerosol optical depth (AOD) measurements. Further work resulted in a first draft of a 1° x 1° resolution map of these coefficients. This poster will present the work done to refine this algorithm toward the first production of global smoke emission coefficients. Main updates in the algorithm include: 1) inclusion of wind vectors to help refine several parameters, 2) definition of new methods for calculating the fire-emitted AOD fractions, and 3) calculation of smoke emission rates on a per-pixel basis with aggregation to grid cells, instead of doing so later in the process. In addition to a presentation of the methodology used to derive this product, maps displaying preliminary results as well as an outline of the future application of such a product to specific research opportunities will be shown.
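The core of the direct method is one multiplication; a tiny sketch with invented numbers (the Ce value and FRP readings are placeholders, and a real run would use the gridded coefficient for each fire's location):

    import numpy as np

    ce = 0.02                              # emission coefficient, kg/MJ (assumed)
    frp_mw = np.array([12.0, 48.5, 7.3])   # FRP of detected fire pixels, MW
    rate = ce * frp_mw                     # kg/s per pixel, since MW = MJ/s
    print(rate.sum(), 'kg/s total smoke emission rate')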
Clinical update on optimal prandial insulin dosing using a refined run-to-run control algorithm.
Zisser, Howard; Palerm, Cesar C; Bevier, Wendy C; Doyle, Francis J; Jovanovic, Lois
2009-05-01
This article provides a clinical update using a novel run-to-run algorithm to optimize prandial insulin dosing based on sparse glucose measurements from the previous day's meals. The objective was to use a refined run-to-run algorithm to calculate prandial insulin-to-carbohydrate ratios (I:CHO) for meals of variable carbohydrate content in subjects with type 1 diabetes (T1DM). The open-label, nonrandomized study took place over a 6-week period in a nonprofit research center. Nine subjects with T1DM using continuous subcutaneous insulin infusion participated. Basal insulin rates were optimized using continuous glucose monitoring, with a target fasting blood glucose of 90 mg/dl. Subjects monitored blood glucose concentration at the beginning of the meal and at 60 and 120 minutes after the start of the meal. They were instructed to start meals with blood glucose levels between 70 and 130 mg/dl. Subjects were contacted daily to collect data for the previous 24-hour period and to give them the physician-approved, algorithm-derived I:CHO ratios for the next 24 hours. Subjects calculated the amount of the insulin bolus for each meal based on the corresponding I:CHO and their estimate of the meal's carbohydrate content. One- and 2-hour postprandial glucose concentrations served as the main outcome measures. The mean 1-hour postprandial blood glucose level was 104 +/- 19 mg/dl. The 2-hour postprandial levels (96.5 +/- 18 mg/dl) approached the preprandial levels (90.1 +/- 13 mg/dl). Run-to-run algorithms are able to improve postprandial blood glucose levels in subjects with T1DM. 2009 Diabetes Technology Society.
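The abstract does not give the update law itself; as a purely hypothetical illustration, a run-to-run scheme of this kind adjusts each meal's I:CHO ratio from the previous day's postprandial error (the gain, target, and floor below are invented, and nothing here is dosing advice):

    def update_icho(icho_g_per_unit, postprandial_mgdl,
                    target=100.0, gain=0.003):
        # Glucose above target -> smaller I:CHO (more insulin per gram of
        # carbohydrate); below target -> larger I:CHO (less insulin).
        error = postprandial_mgdl - target
        return max(5.0, icho_g_per_unit * (1.0 - gain * error))

    print(update_icho(12.0, 160.0))   # a high reading tightens the ratio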
Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin
2018-05-01
To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of each of the VDR algorithm rules individually and in combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm has improved the positive predictive value by 6.1% and the specificity by 1.4%, with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long-term-condition register constructed from both laboratory results and administrative data. Copyright © 2018 Elsevier B.V. All rights reserved.
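The validation arithmetic is standard; a compact sketch (the 2x2 counts below are invented for illustration, as the paper reports rates rather than this table):

    def validation_metrics(tp, fp, fn, tn):
        return {'sensitivity': tp / (tp + fn),
                'specificity': tn / (tn + fp),
                'ppv': tp / (tp + fp),
                'npv': tn / (tn + fn)}

    print(validation_metrics(tp=82000, fp=26000, fn=10000, tn=950000))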
NASA Astrophysics Data System (ADS)
Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.
2017-12-01
A set of standardized models and algorithms for the geometric normalization and georeferencing of images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in an optimal projection. Problems of high-precision ground calibration of the imaging equipment using reference objects, as well as issues of in-flight calibration and refinement of geometric models using absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies was performed in the calibration of sensors for spacecraft of the Electro-L series and during simulation of the prospective Arktika system.
An adaptive interpolation scheme for molecular potential energy surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kowalewski, Markus, E-mail: mkowalew@uci.edu; Larsson, Elisabeth; Heryudono, Alfa
The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement makes it possible to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
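A minimal sketch of the adaptive idea, assuming SciPy's thin-plate (polyharmonic) spline interpolant and a leave-half-out disagreement as a stand-in for the paper's local error estimate; the model function is a 2-D toy, not an electronic-structure surface.

```python
# Minimal sketch of greedy adaptive sampling with polyharmonic (thin-plate
# spline) RBF interpolation; the error estimator and test function are
# illustrative stand-ins for the paper's method.
import numpy as np
from scipy.interpolate import RBFInterpolator

def f(x):                                     # "expensive" model function (2-D toy)
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (20, 2))             # initial coarse samples
vals = f(pts)
cands = rng.uniform(-1, 1, (2000, 2))         # candidate refinement nodes

for _ in range(30):                           # adaptive refinement loop
    full = RBFInterpolator(pts, vals, kernel="thin_plate_spline")
    half = RBFInterpolator(pts[::2], vals[::2], kernel="thin_plate_spline")
    err = np.abs(full(cands) - half(cands))   # local error estimate
    worst = cands[np.argmax(err)]             # node with largest estimate
    pts = np.vstack([pts, worst])             # sample f only where needed
    vals = np.append(vals, f(worst[None, :]))

print("max estimated error:", err.max(), "with", len(pts), "samples")
```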
Miller, Michelle; Thomas, Jolene; Suen, Jenni; Ong, De Sheng; Sharma, Yogesh
2018-05-01
Undernourished patients discharged from the hospital require follow-up; however, attendance at return visits is low. Teleconsultations may allow remote follow-up of undernourished patients; however, no valid method to remotely perform physical examination, a critical component of assessing nutritional status, exists. This study aims to compare agreement between photographs taken by trained dietitians and in-person physical examinations conducted by trained dietitians to rate the overall physical examination section of the scored Patient Generated Subjective Global Assessment (PG-SGA). Nested cross-sectional study. Adults aged ≥60 years, admitted to the general medicine unit at Flinders Medical Centre between March 2015 and March 2016, were eligible. All components of the PG-SGA and photographs of muscle and fat sites were collected from 192 participants either in the hospital or at their place of residence after discharge. Validity of photograph-based physical examination was determined by collecting photographic and PG-SGA data from each participant at one encounter by trained dietitians. A dietitian blinded to data collection later assessed de-identified photographs on a computer. Percentage agreement, weighted kappa agreement, sensitivity, and specificity between the photographs and in-person physical examinations were calculated. All data collected were included in the analysis. Overall, the photograph-based physical examination rating achieved a percentage agreement of 75.8% against the in-person assessment, with a weighted kappa agreement of 0.526 (95% CI: 0.416, 0.637; P<0.05) and a sensitivity-specificity pair of 66.9% (95% CI: 57.8%, 75.0%) and 92.4% (95% CI: 82.5%, 97.2%). Photograph-based physical examination by trained dietitians achieved a nearly acceptable percentage agreement, moderate weighted kappa, and fair sensitivity-specificity pair. Methodological refinement before field testing with other personnel may improve the agreement and accuracy of photograph-based physical examination. Copyright © 2018 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
Supporting Mathematical Discussions: The Roles of Comparison and Cognitive Load
ERIC Educational Resources Information Center
Richland, Lindsey E.; Begolli, Kreshnik Nasi; Simms, Nina; Frausel, Rebecca R.; Lyons, Emily A.
2016-01-01
Mathematical discussions in which students compare alternative solutions to a problem can be powerful modes for students to engage and refine their misconceptions into conceptual understanding, as well as to develop understanding of the mathematics underlying common algorithms. At the same time, these discussions are challenging to lead…
Quasinormal modes of Reissner-Nordstrom black holes
NASA Technical Reports Server (NTRS)
Leaver, Edward W.
1990-01-01
A matrix-eigenvalue algorithm is presented for accurately computing the quasinormal frequencies and modes of charged static black holes. The method is then refined through the introduction of a continued-fraction step. The approach should generalize to a variety of nonseparable wave equations, including the Kerr-Newman case of charged rotating black holes.
Global soil-climate-biome diagram: linking soil properties to climate and biota
NASA Astrophysics Data System (ADS)
Zhao, X.; Yang, Y.; Fang, J.
2017-12-01
As a critical component of the Earth system, soils interact strongly with both climate and biota and provide fundamental ecosystem services that maintain food, climate, and human security. Despite significant progress in digital soil mapping techniques and the rapidly growing quantity of observed soil information, quantitative linkages between soil properties, climate and biota at the global scale remain unclear. By compiling a large global soil database, we mapped seven major soil properties (bulk density [BD]; sand, silt and clay fractions; soil pH; soil organic carbon [SOC] density [SOCD]; and soil total nitrogen [STN] density [STND]) based on machine learning algorithms (regional random forest [RF] model) and quantitatively assessed the linkage between soil properties, climate and biota at the global scale. Our results demonstrated a global soil-climate-biome diagram, which improves our understanding of the strong correspondence between soils, climate and biomes. Soil pH decreased with greater mean annual precipitation (MAP) and lower mean annual temperature (MAT), and the critical MAP for the transition from alkaline to acidic soil pH decreased with decreasing MAT. Specifically, the critical MAP ranged from 400-500 mm when the MAT exceeded 10 °C but could decrease to 50-100 mm when the MAT was approximately 0 °C. SOCD and STND were tightly linked; both increased in accordance with lower MAT and higher MAP across terrestrial biomes. Global stocks of SOC and STN were estimated to be 788 ± 39.4 Pg (10^15 g, or billion tons) and 63 ± 3.3 Pg in the upper 30-cm soil layer, respectively, but these values increased to 1654 ± 94.5 Pg and 133 ± 7.8 Pg in the upper 100-cm soil layer, respectively. These results reveal quantitative linkages between soil properties, climate and biota at the global scale, suggesting co-evolution of the soil, climate and biota under conditions of global environmental change.
Scheduling time-critical graphics on multiple processors
NASA Technical Reports Server (NTRS)
Meyer, Tom W.; Hughes, John F.
1995-01-01
This paper describes an algorithm for the scheduling of time-critical rendering and computation tasks on single- and multiple-processor architectures, with minimal pipelining. It was developed to manage scientific visualization scenes consisting of hundreds of objects, each of which can be computed and displayed at thousands of possible resolution levels. The algorithm generates the time-critical schedule using progressive-refinement techniques; it always returns a feasible schedule and, when allowed to run to completion, produces a near-optimal schedule which takes advantage of almost the entire multiple-processor system.
Empirical comparison of heuristic load distribution in point-to-point multicomputer networks
NASA Technical Reports Server (NTRS)
Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.
1990-01-01
The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-05-17
PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited for flows with localized resolution requirements and robust to discontinuities. User-controllable refinement criteria have the potential to result in extremely small numerical dissipation and dispersion, making this code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed-memory parallelism. PeleC algorithms are implemented to express shared-memory parallelism.
NASA Technical Reports Server (NTRS)
Hargrove, A.
1982-01-01
Optimal digital control of nonlinear multivariable constrained systems was studied. The optimal controller in the form of an algorithm was improved and refined by reducing running time and storage requirements. A particularly difficult system of nine nonlinear state variable equations was chosen as a test problem for analyzing and improving the controller. Lengthy analysis, modeling, computing and optimization were accomplished. A remote interactive teletype terminal was installed. Analysis requiring computer usage of short duration was accomplished using Tuskegee's VAX 11/750 system.
Filament capturing with the multimaterial moment-of-fluid method*
Jemison, Matthew; Sussman, Mark; Shashkov, Mikhail
2015-01-15
A novel method for capturing two-dimensional, thin, under-resolved material configurations, known as “filaments,” is presented in the context of interface reconstruction. This technique uses a partitioning procedure to detect disconnected regions of material in the advective preimage of a cell (indicative of a filament) and makes use of the existing functionality of the Multimaterial Moment-of-Fluid interface reconstruction method to accurately capture the under-resolved feature, while exactly conserving volume. An algorithm for Adaptive Mesh Refinement in the presence of filaments is developed so that refinement is introduced only near the tips of filaments and where the Moment-of-Fluid reconstruction error is still large. Comparison to the standard Moment-of-Fluid method is made. As a result, it is demonstrated that using filament capturing at a given resolution yields gains in accuracy comparable to introducing an additional level of mesh refinement at significantly lower cost.
Bell-Curve Genetic Algorithm for Mixed Continuous and Discrete Optimization Problems
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.; Griffith, Michelle; Sykes, Ruth; Sobieszczanski-Sobieski, Jaroslaw
2002-01-01
In this manuscript we have examined an extension of BCB that encompasses a mix of continuous and quasi-discrete, as well as truly-discrete applications. We began by testing two refinements to the discrete version of BCB. The testing of midpoint versus fitness (Tables 1 and 2) proved inconclusive. The testing of discrete normal tails versus standard mutation was conclusive and demonstrated that the discrete normal tails are better. Next, we implemented these refinements in a combined continuous and discrete BCB and compared the performance of two discrete distances on the hub problem. Here we found that when "order does matter" it pays to take it into account.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ito, Gen; Kobayashi, Takeshi; Takeda, Yoshie
Highlights: • Proteoglycan from salmon nasal cartilage (SNC-PG) promoted wound healing in fibroblast monolayers. • SNC-PG stimulated both cell proliferation and cell migration. • Interaction between chondroitin sulfate-units and CD44 is responsible for the effect. - Abstract: Proteoglycans (PGs) are involved in various cellular functions including cell growth, adhesion, and differentiation; however, their physiological roles are not fully understood. In this study, we examined the effect of PG purified from salmon nasal cartilage (SNC-PG) on wound closure using tissue-cultured cell monolayers, an in vitro wound-healing assay. The results indicated that SNC-PG significantly promoted wound closure in NIH/3T3 cell monolayers by stimulating both cell proliferation and cell migration. SNC-PG was effective in concentrations from 0.1 to 10 μg/ml, but showed much less effect at higher concentrations (100–1000 μg/ml). The effect of SNC-PG was abolished by chondroitinase ABC, indicating that chondroitin sulfates (CSs), a major component of glycosaminoglycans (GAGs) in SNC-PG, are crucial for the SNC-PG effect. Furthermore, chondroitin 6-sulfate (C-6-S), a major CS of SNC-PG GAGs, could partially reproduce the SNC-PG effect and partially inhibit the binding of SNC-PG to cells, suggesting that SNC-PG exerts its effect through an interaction between the GAGs in SNC-PG and the cell surface. Neutralization by anti-CD44 antibodies or CD44 knockdown abolished SNC-PG binding to the cells and the SNC-PG effect on wound closure. These results suggest that interactions between CS-rich GAG-chains of SNC-PG and CD44 on the cell surface are responsible for the SNC-PG effect on wound closure.
Weck, Melanie N; Brenner, Hermann
2008-08-15
Helicobacter pylori is a major risk factor for chronic atrophic gastritis (CAG). A large variety of definitions of CAG have been used in epidemiologic studies in the past. The aim of this work was to systematically review and summarize estimates of the association between H. pylori infection and CAG according to the various definitions of CAG. Articles on the association between H. pylori infection and CAG published until July 2007 were identified. Separate meta-analyses were carried out for studies defining CAG based on gastroscopy with biopsy, serum pepsinogen I (PG I) only, the pepsinogen I/pepsinogen II ratio (PG I/PG II ratio) only, or a combination of PG I and the PG I/PG II ratio. Numbers of identified studies and summary odds ratios (OR) (95% confidence intervals) were as follows: gastroscopy with biopsy: n = 34, OR = 6.4 (4.0-10.1); PG I only: n = 13, OR = 0.9 (0.7-1.2); PG I/PG II ratio: n = 8, OR = 7.2 (3.1-16.8); combination of PG I and the PG I/PG II ratio: n = 20, OR = 5.7 (4.4-7.5). Studies with CAG definitions based on gastroscopy with biopsy or the PG I/PG II ratio (alone or in combination with PG I) yield similarly strong associations of H. pylori with CAG. The association is missed entirely in studies where CAG is defined by PG I only. (c) 2008 Wiley-Liss, Inc.
Implementation of an algorithm for cylindrical object identification using range data
NASA Technical Reports Server (NTRS)
Bozeman, Sylvia T.; Martin, Benjamin J.
1989-01-01
One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
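The arc-location step can be illustrated with a standard algebraic circle fit, in which the circle equation is rearranged into an over-determined linear system solved by least squares; the synthetic arc below stands in for a range-data slice, and the Sobel/Hough stages are omitted.

```python
# Minimal sketch of fitting a circle to arc points via the normal
# equations of an over-determined linear system (Kasa-style fit).
import numpy as np

def fit_circle(x, y):
    # (x - a)^2 + (y - b)^2 = r^2  rearranges to the linear model
    # x^2 + y^2 = 2*a*x + 2*b*y + c   with   c = r^2 - a^2 - b^2
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)

rng = np.random.default_rng(0)
theta = np.linspace(0.2, 1.3, 40)                   # a partial arc only
x = 3.0 + 1.5 * np.cos(theta) + 0.01 * rng.standard_normal(40)
y = -1.0 + 1.5 * np.sin(theta) + 0.01 * rng.standard_normal(40)
print(fit_circle(x, y))    # ~ center (3.0, -1.0), radius 1.5
```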
Motion Planning and Synthesis of Human-Like Characters in Constrained Environments
NASA Astrophysics Data System (ADS)
Zhang, Liangjun; Pan, Jia; Manocha, Dinesh
We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40 DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.
An Algorithm for Converting Static Earth Sensor Measurements into Earth Observation Vectors
NASA Technical Reports Server (NTRS)
Harman, R.; Hashmall, Joseph A.; Sedlak, Joseph
2004-01-01
An algorithm has been developed that converts penetration angles reported by Static Earth Sensors (SESs) into Earth observation vectors. This algorithm allows compensation for variation in the horizon height including that caused by Earth oblateness. It also allows pitch and roll to be computed using any number (greater than 1) of simultaneous sensor penetration angles simplifying processing during periods of Sun and Moon interference. The algorithm computes body frame unit vectors through each SES cluster. It also computes GCI vectors from the spacecraft to the position on the Earth's limb where each cluster detects the Earth's limb. These body frame vectors are used as sensor observation vectors and the GCI vectors are used as reference vectors in an attitude solution. The attitude, with the unobservable yaw discarded, is iteratively refined to provide the Earth observation vector solution.
NASA Astrophysics Data System (ADS)
Ebrahimi, Mehdi; Jahangirian, Alireza
2017-12-01
An efficient strategy is presented for global shape optimization of wing sections with a parallel genetic algorithm. Several computational techniques are applied to increase the convergence rate and the efficiency of the method. A variable fidelity computational evaluation method is applied in which the expensive Navier-Stokes flow solver is complemented by an inexpensive multi-layer perceptron neural network for the objective function evaluations. A population dispersion method that consists of two phases, of exploration and refinement, is developed to improve the convergence rate and the robustness of the genetic algorithm. Owing to the nature of the optimization problem, a parallel framework based on the master/slave approach is used. The outcomes indicate that the method is able to find the global optimum with significantly lower computational time in comparison to the conventional genetic algorithm.
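A minimal sketch of the variable-fidelity evaluation idea, assuming scikit-learn's MLPRegressor as the inexpensive network and a toy objective in place of the Navier-Stokes solver; the screening fraction and network size are illustrative.

```python
# Minimal sketch of surrogate screening: an MLP trained on previously
# evaluated designs ranks candidates so the "expensive" solver is called
# only on the most promising ones. Toy objective, not a CFD evaluation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_objective(x):          # placeholder for the flow solver
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (60, 5))       # designs evaluated at high fidelity
y = expensive_objective(X)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0).fit(X, y)

offspring = rng.uniform(0, 1, (200, 5))          # GA-generated candidates
screened = offspring[np.argsort(surrogate.predict(offspring))[:10]]
true_vals = expensive_objective(screened)        # only 10 solver calls
print("best screened design:", screened[np.argmin(true_vals)])
```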
Alecu, M; Geleriu, L; Coman, G; Gălăţescu, L
1998-01-01
Serological levels of interleukin-1 (IL-1), interleukin-2 (IL-2), interleukin-6 (IL-6) and tumour necrosis factor (TNF) alpha were investigated in 26 patients with scleroderma, divided into three lots by the extension and the progress of the disease. Determinations were performed by ELISA in attack and in remission (after treatment with prednisone). Normal values: IL-1 (0-5 pg/ml), IL-2 (0-5 pg/ml), IL-6 (5-15 pg/ml), TNF (0-16 pg/ml). Lot A. Results obtained at the first determination showed that IL-1 was elevated in 4 cases (10-15 pg/ml), IL-2 in 5 cases (10-32 pg/ml), IL-6 in 5 cases (15-42 pg/ml) and TNF in 4 cases (18-34 pg/ml). In the second determination IL-1 was increased in 1 case (8 pg/ml), IL-2 in 1 case (9 pg/ml), IL-6 in 2 cases (12 pg/ml) and TNF was normal. Lot B. In the first determination IL-1 was elevated in 5 cases (8-12 pg/ml), IL-2 in 5 cases (10-15 pg/ml), IL-6 in 7 cases (16-20 pg/ml) and TNF was raised in 3 cases (18-25 pg/ml). At the second determination IL-1 showed normal values in all the cases, IL-2 was raised in 2 cases (10 pg/ml), IL-6 in 2 cases (12-15 pg/ml), TNF in 1 case (20 pg/ml). Lot C. In the first determination there were raised values in 4 cases for IL-1 (6-8 pg/ml), 3 cases for IL-2 (10-18 pg/ml), 5 cases for IL-6 (18-20 pg/ml), 2 cases for TNF (20 pg/ml). At the second determination IL-2 was elevated in 1 case (10 pg/ml), IL-6 in 1 case (15 pg/ml). We consider that in scleroderma there is a disturbance of the investigated cytokines due to the activation and involvement of the secretory cells in the pathogenesis of the disease. The increase of the serological levels of IL-1, IL-2, IL-6 and TNF depends on the extension of the lesions and on the clinical and biological activity periods of the disease. The absence of an increase in serological levels does not exclude their activity at the lesional site.
Yang, Yanqiu; He, Fupo; Ye, Jiandong
2016-12-01
In this study, phosphate-based glass (PG) was used as a sintering aid for freeze-cast porous biphasic calcium phosphate (BCP) ceramic, which was sintered at a lower temperature (1000°C). The phase composition, pore structure, compressive strength, and cytocompatibility of the calcium phosphate composite ceramics (PG-BCP) were evaluated. The results indicated that the PG additive reacted with calcium phosphate during the sintering process, forming β-Ca2P2O7; the sodium and magnesium ions from PG partially substituted the calcium sites of β-calcium phosphate in BCP. The PG-BCP showed good cytocompatibility. The pore width of the porous PG-BCP ceramics was around 50 μm, regardless of the amount of PG sintering aid. As the content of PG increased from 0 wt.% to 15 wt.%, the compressive strength of PG-BCP increased from 0.02 MPa to 0.28 MPa. When the PG additive was 17.5 wt.%, the compressive strength of PG-BCP dramatically increased to 5.66 MPa. Addition of 15 wt.% PG was the critical point for the properties of PG-BCP. PG is considered an effective sintering aid for freeze-cast porous bioceramics. Copyright © 2016 Elsevier B.V. All rights reserved.
Ruminal and intermediary metabolism of propylene glycol in lactating Holstein cows.
Kristensen, N B; Raun, B M L
2007-10-01
Four lactating Holstein cows fitted with ruminal cannulas and permanent indwelling catheters in the mesenteric artery, mesenteric vein, hepatic portal vein, and hepatic vein were used in a cross-over design to study the metabolism of propylene glycol (PG). Each cow received 2 treatments: control (no infusion) and infusion of 650 g of PG into the rumen at the time of the morning feeding. Propylene glycol was infused on the day of sampling only. Samples of arterial, portal, and hepatic blood as well as ruminal fluid were obtained at 0.5 h before feeding and at 0.5, 1.5, 2.5, 3.5, 5, 7, 9, and 11 h after feeding. Infusion of PG did not affect ruminal pH or the total concentration of ruminal volatile fatty acids, but did decrease the molar proportion of ruminal acetate. The ruminal concentrations of PG, propanol, and propanal as well as the molar proportion of propionate increased with PG infusion. The plasma concentrations of PG, ethanol, propanol, propanal, glucose, L-lactate, propionate, and insulin increased with PG and the plasma concentrations of acetate and beta-hydroxybutyrate decreased. The net portal flux of PG, propanol, and propanal increased with PG. The hepatic uptake of PG was equivalent to 19% of the intraruminal dose. When cows were dosed with PG, the hepatic extraction of PG was between 0 and 10% depending on the plasma concentration of PG, explaining the slow decrease in arterial PG. The increased net hepatic flux of L-lactate with PG could account for the entire hepatic uptake of PG, which suggests that the primary hepatic pathway for PG is oxidation to L-lactate. The hepatic uptake of propanol increased with PG, but no effects of PG on the net hepatic and net splanchnic flux of glucose were observed. Despite no effect of PG on net portal flux and net hepatic flux of propionate, the net splanchnic flux of propionate increased and the data suggest that propionate produced from hepatic metabolism of propanol is partly released to the blood. The data suggest that PG affects metabolism of the cows by 2 modes of action: 1) increased supply of L-lactate and propionate to gluconeogenesis and 2) insulin resistance of peripheral tissues induced by increased concentrations of PG and propanol as well as a decreased ratio of ketogenic to glucogenic metabolites in arterial blood plasma.
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
Advanced Structural Analyses by Third Generation Synchrotron Radiation Powder Diffraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakata, M.; Aoyagi, S.; Ogura, T.
2007-01-19
Since the advent of 3rd generation Synchrotron Radiation (SR) sources such as SPring-8, the capabilities of SR powder diffraction have increased greatly, not only in accurate structure refinement but also in ab initio structure determination. In this study, advanced structural analyses by 3rd generation SR powder diffraction based on the Large Debye-Scherrer camera installed at BL02B2, SPring-8 are described. Because of the high angular resolution and high counting statistics of powder data collected at BL02B2, SPring-8, ab initio structure determination can cope with molecular crystals with 65 atoms including H atoms. For the structure refinements, it is found that a kind of Maximum Entropy Method in which several atoms are omitted in the phase calculation becomes very important for refining structural details of fairly large molecules in a crystal. It should be emphasized that unless the structure obtained by a Genetic Algorithm (GA) or some other ab initio structure determination method using real-space structural knowledge is refined very precisely, it is not possible to tell whether that structure is correct or not. In order to determine and/or refine crystal structures of rather complicated molecules, we cannot overemphasize the importance of the 3rd generation SR sources.
Sorting signed permutations by inversions in O(nlogn) time.
Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E
2010-03-01
The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
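The data structures behind the O(nlogn) bound are too involved for a short sketch, but the elementary operation being optimized is easy to state: a signed inversion reverses a segment of the permutation and flips its signs.

```python
# The elementary move counted by inversion distance: reverse a segment
# of a signed permutation and negate its entries. This is only the
# basic operation, not the paper's O(nlogn) sorting algorithm.

def reverse(perm, i, j):
    """Apply the inversion rho(i, j) to signed permutation perm
    (0-based, inclusive indices): reverse perm[i..j] and negate it."""
    return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]

p = [+3, -1, +2, -4]
p = reverse(p, 1, 2)     # acts on the segment (-1, +2)
print(p)                 # [3, -2, 1, -4]
```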
Observations and asteroseismic analysis of the rapidly pulsating hot B subdwarf PG 0911+456
NASA Astrophysics Data System (ADS)
Randall, S. K.; Green, E. M.; Van Grootel, V.; Fontaine, G.; Charpinet, S.; Lesser, M.; Brassard, P.; Sugimoto, T.; Chayer, P.; Fay, A.; Wroblewski, P.; Daniel, M.; Story, S.; Fitzgerald, T.
2007-12-01
Aims: The principal aim of this project is to determine the structural parameters of the rapidly pulsating subdwarf B star PG 0911+456 from asteroseismology. Our work forms part of an ongoing programme to constrain the internal characteristics of hot B subdwarfs with the long-term goal of differentiating between the various formation scenarios proposed for these objects. So far, a detailed asteroseismic interpretation has been carried out for 6 such pulsators, with apparent success. First comparisons with evolutionary theory look promising, however it is clear that more targets are needed for meaningful statistics to be derived. Methods: The observational pulsation periods of PG 0911+456 were extracted from rapid time-series photometry using standard Fourier analysis techniques. Supplemented by spectroscopic estimates of the star's mean atmospheric parameters, they were used as a basis for the “forward modelling” approach in asteroseismology. The latter culminates in the identification of one or more “optimal” models that can accurately reproduce the observed period spectrum. This naturally leads to an identification of the oscillations detected in terms of degree ℓ and radial order k, and infers the structural parameters of the target. Results: The high S/N low- and medium-resolution spectroscopy obtained led to a refinement of the atmospheric parameters for PG 0911+456, the derived values being T_eff = 31 940 ± 220 K, log g = 5.767 ± 0.029, and log He/H = -2.548 ± 0.058. From the photometry it was possible to extract 7 independent pulsation periods in the 150-200 s range with amplitudes between 0.05 and 0.8% of the star's mean brightness. There was no indication of fine frequency splitting over the 68-day time baseline, suggesting a very slow rotation rate. An asteroseismic search of parameter space identified several models that matched the observed properties of PG 0911+456 well, one of which was isolated as the “optimal” model on the basis of spectroscopic and mode identification considerations. All the observed pulsations are identified with low-order acoustic modes with degree indices ℓ = 0, 1, 2 and 4, and match the computed periods with a dispersion of only 0.26%, typical of the asteroseismological studies carried out to date for this type of star. The inferred structural parameters of PG 0911+456 are T_eff = 31 940 ± 220 K (from spectroscopy), log g = 5.777 ± 0.002, M*/M⊙ = 0.39 ± 0.01, log(M_env/M*) = -4.69 ± 0.07, R/R⊙ = 0.133 ± 0.001 and L/L⊙ = 16.4 ± 0.8. We also derive the absolute magnitude MV = 4.82 ± 0.04 and a distance d = 930.3 ± 27.4 pc. This study made extensive use of the computing facilities offered by the Calcul en Midi-Pyrénées (CALMIP) project and the Centre Informatique National de l'Enseignement Supérieur (CINES), France. Some of the spectroscopic observations reported here were obtained at the MMT Observatory, a joint facility of the University of Arizona and the Smithsonian Institution.
Timm, Kerstin N; Hartl, Johannes; Keller, Markus A; Hu, De-En; Kettunen, Mikko I; Rodrigues, Tiago B; Ralser, Markus; Brindle, Kevin M
2015-12-01
A resonance at ∼181 ppm in the ¹³C spectra of tumors injected with hyperpolarized [U-²H, U-¹³C]glucose was assigned to 6-phosphogluconate (6PG), as in previous studies in yeast, whereas in breast cancer cells in vitro this resonance was assigned to 3-phosphoglycerate (3PG). These peak assignments were investigated here using measurements of 6PG and 3PG ¹³C-labeling by liquid chromatography tandem mass spectrometry (LC-MS/MS). METHODS: Tumor-bearing mice were injected with ¹³C₆ glucose and the ¹³C-labeled and total 6PG and 3PG concentrations were measured. ¹³C MR spectra of glucose-6-phosphate dehydrogenase deficient (zwf1Δ) and wild-type yeast were acquired following addition of hyperpolarized [U-²H, U-¹³C]glucose and again ¹³C-labeled and total 6PG and 3PG were measured by LC-MS/MS. RESULTS: Tumor ¹³C-6PG was more abundant than ¹³C-2PG/3PG and the resonance at ∼181 ppm matched more closely that of 6PG. ¹³C MR spectra of wild-type and zwf1Δ yeast cells showed a resonance at ∼181 ppm after labeling with hyperpolarized [U-²H, U-¹³C]glucose; however, there was no 6PG in zwf1Δ cells. In the wild-type cells 3PG was approximately four-fold more abundant than 6PG. CONCLUSION: The resonance at ∼181 ppm in ¹³C MR spectra following injection of hyperpolarized [U-²H, U-¹³C]glucose originates predominantly from 6PG in EL4 tumors and 3PG in yeast cells. © 2014 Wiley Periodicals, Inc.
Paddock, Ethan; Looker, Helen C; Piaggi, Paolo; Knowler, William C; Krakoff, Jonathan; Chang, Douglas C
2018-06-01
We compared the ability of 1- and 2-h plasma glucose concentrations (1h-PG and 2h-PG, respectively), derived from a 75-g oral glucose tolerance test (OGTT), to predict retinopathy. 1h-PG and 2h-PG concentrations, measured in a longitudinal study of an American Indian community in the southwestern U.S., a population at high risk for type 2 diabetes, were analyzed to assess the usefulness of the 1h-PG to identify risk of diabetic retinopathy (DR). Cross-sectional (n = 2,895) and longitudinal (n = 1,703) cohorts were assessed for the prevalence and incidence of DR, respectively, in relation to deciles of 1h-PG and 2h-PG concentrations. Areas under the receiver operating characteristic (ROC) curves for 1h-PG and 2h-PG were compared with regard to predicting DR, as assessed by direct ophthalmoscopy. Prevalence and incidence of DR, based on direct ophthalmoscopy, changed in a similar manner across the distributions of 1h-PG and 2h-PG concentrations. ROC analysis showed that 1h-PG and 2h-PG were of similar value in identifying prevalent and incident DR using direct ophthalmoscopy. 1h-PG cut points of 230 and 173 mg/dL were comparable to 2h-PG cut points of 200 mg/dL (type 2 diabetes) and 140 mg/dL (impaired glucose tolerance), respectively. 1h-PG is a useful predictor of retinopathy risk, has a predictive value similar to that of 2h-PG, and may be considered as an alternative glucose time point during an OGTT. © 2018 by the American Diabetes Association.
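The ROC comparison reduces to computing an area under the curve for each glucose time point; a minimal sketch with simulated stand-in data (not the cohort's) follows.

```python
# Minimal sketch of comparing 1-h and 2-h plasma glucose as predictors
# of retinopathy via ROC AUC; all arrays below are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
retinopathy = rng.random(n) < 0.1                   # simulated outcomes
# simulate glucose readings that run higher in cases
pg_1h = rng.normal(160, 40, n) + 40 * retinopathy
pg_2h = rng.normal(130, 35, n) + 35 * retinopathy

print("AUC 1h-PG:", roc_auc_score(retinopathy, pg_1h))
print("AUC 2h-PG:", roc_auc_score(retinopathy, pg_2h))
```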
Carbon Emissions from Deforestation in the Brazilian Amazon Region
NASA Technical Reports Server (NTRS)
Potter, C.; Klooster, S.; Genovese, V.
2009-01-01
A simulation model based on satellite observations of monthly vegetation greenness from the Moderate Resolution Imaging Spectroradiometer (MODIS) was used to estimate monthly carbon fluxes in terrestrial ecosystems of Brazilian Amazon and Cerrado regions over the period 2000-2002. The NASA-CASA (Carnegie Ames Stanford Approach) model estimates of annual forest production were used for the first time as the basis to generate a prediction for the standing pool of carbon in above-ground biomass (AGB; gC/sq m) for forested areas of the Brazilian Amazon region. Plot-level measurements of the residence time of carbon in wood in Amazon forest from Malhi et al. (2006) were interpolated by inverse distance weighting algorithms and used with CASA to generate a new regional map of AGB. Data from the Brazilian PRODES (Estimativa do Desflorestamento da Amazonia) project were used to map deforested areas. Results show that net primary production (NPP) sinks for carbon varied between 4.25 Pg C/yr (1 Pg = 10^15 g) and 4.34 Pg C/yr for the region and were highest across the eastern and northern Amazon areas, whereas deforestation sources of CO2 flux from decomposition of residual woody debris were higher and less seasonal in the central Amazon than in the eastern and southern areas. Increased woody debris from past deforestation events was predicted to alter the net ecosystem carbon balance of the Amazon region to generate annual CO2 source fluxes at least two times higher than previously predicted by CASA modeling studies. Variations in climate, land cover, and forest burning were predicted to release carbon at rates of 0.5 to 1 Pg C/yr from the Brazilian Amazon. When direct deforestation emissions of CO2 from forest burning of between 0.2 and 0.6 Pg C/yr in the Legal Amazon are overlooked in regional budgets, the year-to-year variations in this net biome flux may appear to be large, whereas our model results imply that net biome fluxes were actually relatively consistent from year to year during the period 2000-2002. This is the first study to use MODIS data to model all carbon pools (wood, leaf, root) dynamically in simulations of Amazon forest deforestation from clearing and burning of all kinds.
Preparation of Conductive Polymer Graphite (PG) Composites
NASA Astrophysics Data System (ADS)
Munirah Abdullah, Nur; Saddam Kamarudin, M.; Rus, Anika Zafiah M.; Abdullah, M. F. L.
2017-08-01
The preparation of conductive polymer graphite (PG) composite thin films is described. The thickness of the PG composites produced by the slip casting method was set to approximately ~0.1 mm. Optical microscopy (OM) and Fourier transform infrared spectroscopy (FTIR) were used to characterize the structure-property relationships of the PG composites. They show that the graphite is homogeneously dispersed in the polymer matrix composites. The electrical characteristics of the PG composites were measured at room temperature and the electrical conductivity (σ) was derived from the measured resistivity (Ω). A conductivity of 10^3 S/m demonstrates that certain graphite weight loadings (PG20, PG25 and PG30) give rise to electron pathways in the PG composites.
NASA Technical Reports Server (NTRS)
Velden, Christopher
1995-01-01
The research objectives in this proposal were part of a continuing program at UW-CIMSS to develop and refine an automated geostationary satellite winds processing system which can be utilized in both research and operational environments. The majority of the originally proposed tasks were successfully accomplished, and in some cases the progress exceeded the original goals. Much of the research and development supported by this grant resulted in upgrades and modifications to the existing automated satellite winds tracking algorithm. These modifications were put to the test through case study demonstrations and numerical model impact studies. After being successfully demonstrated, the modifications and upgrades were implemented into the NESDIS algorithms in Washington DC, and have become part of the operational support. A major focus of the research supported under this grant was the continued development of water vapor tracked winds from geostationary observations. The fully automated UW-CIMSS tracking algorithm has been tuned to provide complete upper-tropospheric coverage from this data source, with data set quality close to that of operational cloud motion winds. Multispectral water vapor observations were collected and processed from several different geostationary satellites. The tracking and quality control algorithms were tuned and refined based on ground-truth comparisons and case studies involving impact on numerical model analyses and forecasts. The results have shown the water vapor motion winds are of good quality, complement the cloud motion wind data, and can have a positive impact in NWP on many meteorological scales.
Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths
NASA Astrophysics Data System (ADS)
Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.
2018-04-01
We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
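A minimal sketch of a tree walk with a purely geometric multipole acceptance criterion, the simplest kind of MAC the paper evaluates; this is a generic Barnes-Hut-style monopole walk over a toy binary spatial tree, not the FLASH OctTree implementation.

```python
# Minimal sketch of a gravity tree walk: a node's monopole is accepted
# when node_size / distance < theta, otherwise its children are opened.
import numpy as np

class Node:
    """Toy spatial tree node holding a monopole (total mass + center of mass)."""
    def __init__(self, points, masses):
        self.com = np.average(points, axis=0, weights=masses)
        self.mass = masses.sum()
        self.size = float(np.max(points.max(axis=0) - points.min(axis=0))) + 1e-12
        self.children = []
        if len(points) > 1:                      # split along the widest axis
            axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
            order = np.argsort(points[:, axis])
            half = len(points) // 2
            for idx in (order[:half], order[half:]):
                self.children.append(Node(points[idx], masses[idx]))

def accel(node, x, theta=0.5, eps=1e-3):
    """Gravitational acceleration at x (G = 1), opening nodes that fail the MAC."""
    d = node.com - x
    r = np.linalg.norm(d) + eps                  # softened distance
    if not node.children or node.size / r < theta:
        return node.mass * d / r**3              # accept: use the monopole
    return sum(accel(c, x, theta, eps) for c in node.children)

rng = np.random.default_rng(2)
pts, m = rng.random((500, 3)), rng.random(500)
print(accel(Node(pts, m), np.array([0.5, 0.5, 0.5])))
```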
Compact modalities for forward-error correction
NASA Astrophysics Data System (ADS)
Fang, Dejian
2013-10-01
Hash tables [1] must work. In fact, few leading analysts would disagree with the refinement of thin clients. In our research, we disprove not only that the infamous read-write algorithm for the exploration of object-oriented languages by W. White et al. is NP-complete, but that the same is true for the lookaside buffer.
Reasoning abstractly about resources
NASA Technical Reports Server (NTRS)
Clement, B.; Barrett, A.
2001-01-01
This paper describes a way to schedule high-level activities before distributing them across multiple rovers in order to coordinate the resultant use of shared resources regardless of how each rover decides how to perform its activities. We present an algorithm for summarizing the metric resource requirements of an abstract activity based on the resource usages of its potential refinements.
Point Cloud Based Approach to Stem Width Extraction of Sorghum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Jihui; Zakhor, Avideh
A revolution in the field of genomics has produced vast amounts of data and furthered our understanding of the genotype-phenotype map, but is currently constrained by manually intensive or limited phenotype data collection. We propose an algorithm to estimate stem width, a key characteristic used for biomass potential evaluation, from 3D point cloud data collected by a robot equipped with a depth sensor in a single pass in a standard field. The algorithm applies a two-step alignment to register point clouds in different frames, a Frangi filter to identify stem-like objects in the point cloud, and an orientation-based filter to segment out and refine individual stems for width estimation. Individually detected stems which are split due to occlusions are merged and then registered with previously found stems in previous camera frames in order to track temporally. We then refine the estimates to produce an accurate histogram of width estimates per plot. Since the plants in each plot are genetically identical, distributions of the stem width per plot can be useful in identifying genetically superior sorghum for biofuels.
Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions
NASA Astrophysics Data System (ADS)
Moore, Peter K.
2007-06-01
In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore, (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ. The first depends on the rate of convergence of the nonlinear solver and the second implements backtracking in ɛ. Finally a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
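A minimal sketch of continuation in the small parameter, assuming a 1-D model problem, finite differences, and SciPy's fsolve in place of the Href/NITSOL machinery; the ɛ schedule here is fixed rather than adaptive.

```python
# Minimal sketch of continuation in eps for a singularly perturbed
# reaction-diffusion model, -eps*u'' + u^3 - u = 0 on (0, 1) with
# u(0) = 0, u(1) = 1; the schedule and solver are illustrative.
import numpy as np
from scipy.optimize import fsolve

n, h = 200, 1.0 / 201

def residual(u, eps):
    upad = np.concatenate([[0.0], u, [1.0]])        # Dirichlet BCs
    lap = (upad[2:] - 2 * upad[1:-1] + upad[:-2]) / h**2
    return -eps * lap + u**3 - u

u = np.linspace(0, 1, n)                            # initial guess
for eps in [1.0, 0.3, 0.1, 0.03, 0.01]:             # march eps down
    u = fsolve(residual, u, args=(eps,))            # warm-started solve
    print(f"eps={eps:5.2f}  max|residual|={np.abs(residual(u, eps)).max():.2e}")
```

Each converged solution serves as the initial guess for the next, smaller ɛ, which is what keeps the nonlinear solves tractable as boundary layers sharpen.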
Formulation and Implementation of Inflow/Outflow Boundary Conditions to Simulate Propulsive Effects
NASA Technical Reports Server (NTRS)
Rodriguez, David L.; Aftosmis, Michael J.; Nemec, Marian
2018-01-01
Boundary conditions appropriate for simulating flow entering or exiting the computational domain to mimic propulsion effects have been implemented in an adaptive Cartesian simulation package. A robust iterative algorithm to control mass flow rate through an outflow boundary surface is presented, along with a formulation to explicitly specify mass flow rate through an inflow boundary surface. The boundary conditions have been applied within a mesh adaptation framework based on the method of adjoint-weighted residuals. This allows for proper adaptive mesh refinement when modeling propulsion systems. The new boundary conditions are demonstrated on several notional propulsion systems operating in flow regimes ranging from low subsonic to hypersonic. The examples show that the prescribed boundary state is more properly imposed as the mesh is refined. The mass-flowrate steering algorithm is shown to be an efficient approach in each example. To demonstrate the boundary conditions on a realistic complex aircraft geometry, two of the new boundary conditions are also applied to a modern low-boom supersonic demonstrator design with multiple flow inlets and outlets.
Classification-Assisted Memetic Algorithms for Equality-Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Handoko, Stephanus Daniel; Kwoh, Chee Keong; Ong, Yew Soon
Regression has successfully been incorporated into memetic algorithms (MA) to build surrogate models for the objective or constraint landscape of optimization problems. This helps to alleviate the need for expensive fitness function evaluations by performing local refinements on the approximated landscape. Classification can alternatively be used to assist MA in the choice of individuals that would experience refinements. Support-vector-assisted MA were recently proposed to alleviate the need for function evaluations in inequality-constrained optimization problems by distinguishing regions of feasible solutions from those of infeasible ones based on some past solutions, such that search efforts can be focused on some potential regions only. For problems having equality constraints, however, the feasible space would obviously be extremely small. It is thus extremely difficult for the global search component of the MA to produce feasible solutions. Hence, the classification of feasible and infeasible space would become ineffective. In this paper, a novel strategy to overcome such a limitation is proposed, particularly for problems having one and only one equality constraint. The raw constraint value of an individual, instead of its feasibility class, is utilized in this work.
Xu, Dong; Zhang, Jian; Roy, Ambrish; Zhang, Yang
2011-01-01
I-TASSER is an automated pipeline for protein tertiary structure prediction using multiple threading alignments and iterative structure assembly simulations. In CASP9 experiments, two new algorithms, QUARK and FG-MD, were added to the I-TASSER pipeline for improving the structural modeling accuracy. QUARK is a de novo structure prediction algorithm used for structure modeling of proteins that lack detectable template structures. For distantly homologous targets, QUARK models are found useful as a reference structure for selecting good threading alignments and guiding the I-TASSER structure assembly simulations. FG-MD is an atomic-level structural refinement program that uses structural fragments collected from the PDB structures to guide molecular dynamics simulation and improve the local structure of predicted model, including hydrogen-bonding networks, torsion angles and steric clashes. Despite considerable progress in both the template-based and template-free structure modeling, significant improvements on protein target classification, domain parsing, model selection, and ab initio folding of beta-proteins are still needed to further improve the I-TASSER pipeline. PMID:22069036
A seismic data compression system using subband coding
NASA Technical Reports Server (NTRS)
Kiely, A. B.; Pollara, F.
1995-01-01
This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
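A minimal sketch of the three stages on a toy trace, assuming a one-level Haar filter bank for the decorrelation stage, a uniform quantizer for the distortion-controlling stage, and an entropy count as a proxy for the arithmetic coder.

```python
# Minimal sketch of subband compression of one seismic-like trace:
# Haar analysis/synthesis, uniform quantization of the detail band,
# and an entropy estimate standing in for the arithmetic coder.
import numpy as np

x = np.cumsum(np.random.default_rng(3).normal(size=1024))  # toy trace

# analysis: scaled averages (lowpass) and differences (highpass)
lo = (x[0::2] + x[1::2]) / np.sqrt(2)
hi = (x[0::2] - x[1::2]) / np.sqrt(2)

step = 0.5                                   # quantizer step: the rate knob
q_hi = np.round(hi / step).astype(int)       # lossy stage (detail band only)

# synthesis from lowpass + dequantized highpass
hi_hat = q_hi * step
rec = np.empty_like(x)
rec[0::2] = (lo + hi_hat) / np.sqrt(2)
rec[1::2] = (lo - hi_hat) / np.sqrt(2)

print("max error:", np.abs(x - rec).max())   # bounded by the quantizer step
vals, counts = np.unique(q_hi, return_counts=True)
p = counts / counts.sum()
print("detail-band entropy (bits/sample):", -(p * np.log2(p)).sum())
```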
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
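A minimal sketch of the weight-map refinement step, assuming the standard gray-scale guided filter (He et al.) implemented with box means; the initial weight map and window parameters are illustrative, and the gradient/well-exposedness measures are omitted.

```python
# Minimal sketch of guided-filter refinement of a fusion weight map,
# with the source exposure itself as the guidance image.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    size = 2 * radius + 1
    mean = lambda img: uniform_filter(img, size)       # box mean
    mI, mP = mean(guide), mean(src)
    corr_IP, corr_II = mean(guide * src), mean(guide * guide)
    a = (corr_IP - mI * mP) / (corr_II - mI * mI + eps)  # local slope
    b = mP - a * mI                                      # local offset
    return mean(a) * guide + mean(b)                     # smoothed output

rng = np.random.default_rng(4)
image = rng.random((64, 64))                 # source exposure (guidance)
raw_weight = (image > 0.5).astype(float)     # crude initial weight map
refined = guided_filter(image, raw_weight)   # edges follow the guidance
print(refined.min(), refined.max())
```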
Modeling NIF experimental designs with adaptive mesh refinement and Lagrangian hydrodynamics
NASA Astrophysics Data System (ADS)
Koniges, A. E.; Anderson, R. W.; Wang, P.; Gunney, B. T. N.; Becker, R.; Eder, D. C.; MacGowan, B. J.; Schneider, M. B.
2006-06-01
Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.
Array distribution in data-parallel programs
NASA Technical Reports Server (NTRS)
Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert; Sheffler, Thomas J.
1994-01-01
We consider distribution at compile time of the array data in a distributed-memory implementation of a data-parallel program written in a language like Fortran 90. We allow dynamic redistribution of data and define a heuristic algorithmic framework that chooses distribution parameters to minimize an estimate of program completion time. We represent the program as an alignment-distribution graph. We propose a divide-and-conquer algorithm for distribution that initially assigns a common distribution to each node of the graph and successively refines this assignment, taking computation, realignment, and redistribution costs into account. We explain how to estimate the effect of distribution on computation cost and how to choose a candidate set of distributions. We present the results of an implementation of our algorithms on several test problems.
Instantaneous Coastline Extraction from LIDAR Point Cloud and High Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Li, Y.; Zhoing, L.; Lai, Z.; Gan, Z.
2018-04-01
A new method is proposed in this paper for instantaneous waterline extraction, which combines point cloud geometry features and image spectral characteristics of the coastal zone. The proposed method consists of the following steps: a Mean Shift algorithm segments the coastal zone of high-resolution remote sensing images into small regions containing semantic information; region features are extracted by integrating LiDAR data and the surface area of the image regions; initial waterlines are extracted by an α-shape algorithm; a region-growing algorithm refines the coastline, with a growth rule integrating the intensity and topography of the LiDAR data; and the coastline is smoothed. Experiments are conducted to demonstrate the efficiency of the proposed method.
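The α-shape step can be sketched compactly from a Delaunay triangulation; the code below uses one common convention (keep triangles whose circumradius is below 1/α, then take edges owned by exactly one kept triangle) and is a generic illustration rather than the paper's implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of the alpha-shape of a 2-D point set.

    Keeps Delaunay triangles whose circumradius is below 1/alpha;
    edges belonging to exactly one kept triangle form the boundary
    polyline (here, the initial waterline before refinement).
    """
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        # Twice the signed area via the 2-D cross product.
        area = 0.5 * abs((b - a)[0] * (c - a)[1] - (b - a)[1] * (c - a)[0])
        if area == 0.0:
            continue
        if la * lb * lc / (4.0 * area) < 1.0 / alpha:  # circumradius test
            for e in ((ia, ib), (ib, ic), (ia, ic)):
                e = tuple(sorted(e))
                edge_count[e] = edge_count.get(e, 0) + 1
    return {e for e, n in edge_count.items() if n == 1}
```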
Processing Digital Imagery to Enhance Perceptions of Realism
NASA Technical Reports Server (NTRS)
Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur
2003-01-01
Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
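The core of MSRCR is widely documented; the sketch below follows the commonly cited form (log-ratio of the image to Gaussian-blurred surrounds at several scales, weighted by a color-restoration factor). The scale set and the α, β constants are conventional literature choices, not necessarily NASA's production values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(img, sigmas=(15, 80, 250), alpha=125.0, beta=46.0):
    """Multi-scale retinex with color restoration (illustrative form).

    img: float RGB image with non-negative values.
    """
    img = img.astype(np.float64) + 1.0            # avoid log(0)
    # Multi-scale retinex: average of single-scale log-ratios.
    msr = np.zeros_like(img)
    for s in sigmas:
        blur = np.stack([gaussian_filter(img[..., c], s)
                         for c in range(3)], axis=-1)
        msr += np.log(img) - np.log(blur)
    msr /= len(sigmas)
    # Color restoration weights each channel by its share of the
    # total intensity, recovering saturation lost by the MSR step.
    crf = beta * (np.log(alpha * img)
                  - np.log(img.sum(axis=-1, keepdims=True)))
    out = msr * crf
    # Rescale to a displayable [0, 1] range.
    return (out - out.min()) / (out.max() - out.min() + 1e-12)
```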
Xiu, Hao; Nuruzzaman, Mohammed; Guo, Xiangqian; Cao, Hongzhe; Huang, Jingjia; Chen, Xianghui; Wu, Kunlu; Zhang, Ru; Huang, Yuzhao; Luo, Junli; Luo, Zhiyong
2016-03-04
Despite the importance of WRKY genes in plant physiological processes, little is known about their roles in Panax ginseng C.A. Meyer. Forty-eight unigenes of this species were previously reported as WRKY transcripts using next-generation sequencing (NGS) technology. Subsequently, one gene that encodes the PgWRKY1 protein belonging to subgroup II-d was cloned and functionally characterized. In this study, eight WRKY genes from the NGS-based transcriptome sequencing dataset, designated PgWRKY2-9, have been cloned and characterized. The genes encoding WRKY proteins were assigned to WRKY Group II (one subgroup II-c, four subgroup II-d, and three subgroup II-e) based on phylogenetic analysis. The cDNAs of the cloned PgWRKYs encode putative proteins ranging from 194 to 358 amino acid residues, each of which includes one WRKYGQK sequence motif and one C₂H₂-type zinc-finger motif. Quantitative real-time PCR (qRT-PCR) analysis demonstrated that the eight analyzed PgWRKY genes were expressed at different levels in various organs including leaves, roots, adventitious roots, stems, and seeds. Importantly, the transcription responses of these PgWRKYs to methyl jasmonate (MeJA) showed that PgWRKY2, PgWRKY3, PgWRKY4, PgWRKY5, PgWRKY6, and PgWRKY7 were downregulated by MeJA treatment, while PgWRKY8 and PgWRKY9 were upregulated to varying degrees. Moreover, expression of the PgWRKY genes was increased or decreased by salicylic acid (SA), abscisic acid (ABA), and NaCl treatments. The results suggest that the PgWRKYs may be multiple-stress-inducible genes responding to both salt and hormones.
Dou, Y.; Rutanhira, H.; Chen, X.; Mishra, A.; Wang, C.; Fletcher, H.M.
2018-01-01
In Porphyromonas gingivalis, the protein PG1660, composed of 174 amino acids, is annotated as an extracytoplasmic function (ECF) sigma factor (RpoE homologue-σ24). Because PG1660 can modulate several virulence factors and responds to environmental signals in P. gingivalis, its genetic properties were evaluated. PG1660 is co-transcribed with its downstream gene PG1659, and the transcription start site was identified as an adenine residue 54 nucleotides upstream of the ATG translation start codon. In addition to binding its own promoter, PG1660 was demonstrated to function as a sigma factor in an in vitro transcription assay using purified rPG1660, the RNAP core enzyme from Escherichia coli, and PG1660 promoter DNA as template. Transcriptome analyses of a P. gingivalis PG1660-defective isogenic mutant revealed that under oxidative stress conditions 176 genes, including genes involved in secondary metabolism, were downregulated more than two-fold compared with the parental strain. The rPG1660 protein also showed the ability to bind to the promoters of the most highly downregulated genes in the PG1660-deficient mutant. As the ECF sigma factor PG0162 has 29% identity with PG1660 and can modulate its expression, the cross-talk between their regulatory networks was explored. The expression profile of the PG0162/PG1660-deficient mutant (P. gingivalis FLL356) revealed that the type IX secretion system genes and several virulence genes were downregulated under hydrogen peroxide stress conditions. Taken together, we have confirmed that PG1660 can function as a sigma factor and plays an important regulatory role in the oxidative stress and virulence regulatory network of P. gingivalis. PMID:29059500
Iron Abundance in the Prototype PG 1159 Star, GW Vir Pulsator PG 1159-035, and Related Objects
NASA Technical Reports Server (NTRS)
Werner, K.; Rauch, T.; Kruk, J. W.; Kurucz, R. L.
2011-01-01
We performed an iron abundance determination of the hot, hydrogen-deficient post-AGB star PG 1159-035, which is the prototype of the PG 1159 spectral class and of the GW Vir pulsators, and of two related objects (PG 1520+525, PG 1144+005), based on the first detection of Fe VIII lines in stellar photospheres. In another PG 1159 star, PG 1424+535, we detect Fe VII lines. In all four stars, all with T_eff = 110,000-150,000 K, we find a solar iron abundance. This result agrees with our recent abundance analysis of the hottest PG 1159 stars (T_eff = 150,000-200,000 K) that exhibit Fe X lines. On the whole, we find that the PG 1159 stars are not significantly iron deficient, in contrast to previous notions.
He, Fupo; Zhang, Jing; Yang, Fanwen; Zhu, Jixiang; Tian, Xiumei; Chen, Xiaoming
2015-05-01
The robust calcium carbonate composite ceramics (CC/PG) can be acquired by fast sintering calcium carbonate at a low temperature (650 °C) using a biocompatible, degradable phosphate-based glass (PG) as sintering agent. In the present study, the in vitro degradation and cell response of CC/PG were assessed and compared with 4 synthetic bone substitute materials: calcium carbonate ceramic (CC), PG, hydroxyapatite (HA) and β-tricalcium phosphate (β-TCP) ceramics. The degradation rates in decreasing order were as follows: PG, CC, CC/PG, β-TCP, and HA. The proliferation of rat bone mesenchymal stem cells (rMSCs) cultured on the CC/PG was comparable with that on CC and PG, but inferior to HA and β-TCP. The alkaline phosphatase (ALP) activity of rMSCs on CC/PG was lower than on PG, comparable with β-TCP, but higher than on HA. The rMSCs on CC/PG and PG showed enhanced gene expression of specific osteogenic markers. Compared to HA and β-TCP, the rMSCs on the CC/PG expressed relatively lower levels of collagen I and runt-related transcription factor 2, but showed more considerable expression of osteopontin. Although CC, PG, HA, and β-TCP each perform impressively in specific aspects, they have intrinsic drawbacks in either degradation rate or mechanical strength. Based on its considerable compressive strength, moderate degradation rate, good cell response, and freedom from obvious shortcomings, the CC/PG is promising as another choice for bone substitute materials. Copyright © 2015 Elsevier B.V. All rights reserved.
Zonnevijlle, Erik D H; Perez-Abadia, Gustavo; Stremel, Richard W; Maldonado, Claudio J; Kon, Moshe; Barker, John H
2003-11-01
Muscle tissue transplantation applied to regain or dynamically assist contractile functions is known as 'dynamic myoplasty'. Success rates of clinical applications are unpredictable, because of lack of endurance, ischemic lesions, abundant scar formation and inadequate performance of tasks due to lack of refined control. Electrical stimulation is used to control dynamic myoplasties and should be improved to reduce some of these drawbacks. Sequential segmental neuromuscular stimulation improves the endurance and closed-loop control offers refinement in rate of contraction of the muscle, while function-controlling stimulator algorithms present the possibility of performing more complex tasks. An acute feasibility study was performed in anaesthetised dogs combining these techniques. Electrically stimulated gracilis-based neo-sphincters were compared to native sphincters with regard to their ability to maintain continence. Measurements were made during fast bladder pressure changes, static high bladder pressure and slow filling of the bladder, mimicking among others posture changes, lifting heavy objects and diuresis. In general, neo-sphincter and native sphincter performance showed no significant difference during these measurements. However, during high bladder pressures reaching 40 cm H(2)O the neo-sphincters maintained positive pressure gradients, whereas most native sphincters relaxed. During slow filling of the bladder the neo-sphincters maintained a controlled positive pressure gradient for a prolonged time without any form of training. Furthermore, the accuracy of these maintained pressure gradients proved to be within the limits set up by the native sphincters. Refinements using more complicated self-learning function-controlling algorithms proved to be effective also and are briefly discussed. In conclusion, a combination of sequential stimulation, closed-loop control and function-controlling algorithms proved feasible in this dynamic graciloplasty-model. Neo-sphincters were created, which would probably provide an acceptable performance, when the stimulation system could be implanted and further tested. Sizing this technique down to implantable proportions seems to be justified and will enable exploration of the possible benefits.
Purification and characterization of tomato polygalacturonase converter.
Pressey, R
1984-10-15
Extracts of ripe tomatoes contain two forms of polygalacturonase (PG I and PG II). A heat-stable component that binds PG II to produce PG I has been isolated from tomato fruit. This component has been named polygalacturonase converter (PG converter). The PG converter has been purified by gel filtration, ion-exchange chromatography and chromatofocusing. It appears to be a protein with a relative molecular mass of 102000. It was readily inactivated by papain and pronase. The converter was labile at alkaline conditions, and treatment of PG I at pH 11 released free PG II. A similar factor with a lower molecular mass was extracted from tomato foliage.
Data Assimilation Experiments Using Quality Controlled AIRS Version 5 Temperature Soundings
NASA Technical Reports Server (NTRS)
Susskind, Joel
2009-01-01
The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains a number of significant improvements over Version 4. Two very significant improvements are described briefly below. 1) The AIRS Science Team Radiative Transfer Algorithm (RTA) has been upgraded to accurately account for effects of non-local thermodynamic equilibrium on the AIRS observations. This allows for use of AIRS observations in the entire 4.3 micron CO2 absorption band in the retrieval algorithm during both day and night. Following theoretical considerations, tropospheric temperature profile information is obtained almost exclusively from clear column radiances in the 4.3 micron CO2 band in the AIRS Version 5 temperature profile retrieval step. These clear column radiances are a derived product indicative of the radiances AIRS channels would have seen if the field of view were completely clear. Clear column radiances for all channels are determined using tropospheric sounding 15 micron CO2 observations. This approach allows for the generation of accurate values of clear column radiances and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as for channel-by-channel clear column radiances. These error estimates are used for quality control of the retrieved products. Based on error estimate thresholds, each temperature profile is assigned a characteristic pressure, pg, down to which the profile is characterized as good for data assimilation purposes. We have conducted forecast impact experiments assimilating AIRS quality controlled temperature profiles using the NASA GEOS-5 data assimilation system, consisting of the NCEP GSI analysis coupled with the NASA FVGCM, at a spatial resolution of 0.5 deg by 0.5 deg. Assimilation of quality controlled AIRS temperature profiles down to pg resulted in significantly improved forecast skill compared to that obtained from experiments in which all data used operationally by NCEP, except for AIRS data, are assimilated. These forecasts were also significantly better than those obtained when AIRS radiances (rather than temperature profiles) are assimilated, which is the way AIRS data is used operationally by NCEP and ECMWF.
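The quality-control idea of a characteristic pressure pg can be illustrated with a few lines of code; the threshold logic below is a schematic assumption for exposition, not the operational Version 5 criteria.

```python
import numpy as np

def characteristic_pressure(pressure, temp_error, threshold):
    """Deepest pressure level down to which a retrieved temperature
    profile is accepted for assimilation (illustrative logic).

    pressure:   (L,) levels ordered top of atmosphere -> surface.
    temp_error: (L,) case-by-case error estimates for T(p).
    threshold:  scalar or (L,) acceptance threshold per level.
    """
    bad = np.asarray(temp_error) > threshold
    if not bad.any():
        return pressure[-1]          # whole profile usable
    first_bad = int(np.argmax(bad))  # first level failing QC
    if first_bad == 0:
        return None                  # reject the entire profile
    return pressure[first_bad - 1]   # deepest good level = pg
```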
An efficient data structure for three-dimensional vertex-based finite volume method
NASA Astrophysics Data System (ADS)
Akkurt, Semih; Sahin, Mehmet
2017-11-01
A vertex-based three-dimensional finite volume algorithm has been developed using an edge-based data structure. The mesh data structure of the given algorithm is similar to ones that exist in the literature. However, the data structures are redesigned and simplified in order to fit the requirements of the vertex-based finite volume method. In order to increase cache efficiency, the data access patterns for the vertex-based finite volume method are investigated, and these data are packed/allocated in a way that keeps them close to each other in memory. The present data structure is not limited to tetrahedra; arbitrary polyhedra are also supported in the mesh without any additional effort. Furthermore, the present data structure supports adaptive refinement and coarsening. For the implicit and parallel implementation of the FVM algorithm, the PETSc and MPI libraries are employed. The performance and accuracy of the present algorithm are tested on classical benchmark problems by comparing CPU times with open-source algorithms.
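The appeal of an edge-based structure is that each flux is computed once per edge and scattered to both endpoint vertices. A minimal sketch of that sweep follows, with a toy diffusive flux standing in for the paper's actual scheme; array names are ours.

```python
import numpy as np

def residual(u, edge_nodes, edge_coeff):
    """Vertex-based finite-volume residual via one sweep over edges.

    u:          (V,) solution value at each vertex.
    edge_nodes: (E, 2) vertex indices of each edge.
    edge_coeff: (E,) precomputed geometric weight per edge.
    Each edge contributes equal-and-opposite fluxes to its two
    vertices, so every flux is evaluated exactly once.
    """
    res = np.zeros_like(u)
    i, j = edge_nodes[:, 0], edge_nodes[:, 1]
    flux = edge_coeff * (u[j] - u[i])   # toy diffusive flux per edge
    np.add.at(res, i, flux)             # flux into vertex i
    np.add.at(res, j, -flux)            # equal and opposite into j
    return res
```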
A Partitioning Algorithm for Block-Diagonal Matrices With Overlap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy Antoine Atenekeng Kahou; Laura Grigori; Masha Sosonkina
2008-02-02
We present a graph partitioning algorithm that aims at partitioning a sparse matrix into a block-diagonal form, such that any two consecutive blocks overlap. We denote this form of the matrix as the overlapped block-diagonal matrix. The partitioned matrix is suitable for applying the explicit formulation of the Multiplicative Schwarz preconditioner (EFMS) described in [3]. The graph partitioning algorithm partitions the graph of the input matrix into K partitions, such that every partition Ω_i has at most two neighbors Ω_{i-1} and Ω_{i+1}. First, an ordering algorithm, such as the reverse Cuthill-McKee algorithm, that reduces the matrix profile is performed. An initial overlapped block-diagonal partition is obtained from the profile of the matrix. An iterative strategy is then used to further refine the partitioning by allowing nodes to be transferred between neighboring partitions. Experiments are performed on matrices arising from real-world applications to show the feasibility and usefulness of this approach.
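The initial partition can be sketched directly from an RCM ordering; the snippet below cuts the reordered index set into K contiguous blocks extended by a fixed overlap, and deliberately omits the paper's iterative node-transfer refinement.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def overlapped_blocks(A: csr_matrix, k: int, overlap: int):
    """Initial overlapped block-diagonal partition (sketch).

    Orders A with reverse Cuthill-McKee to compress its profile,
    then cuts the ordering into k contiguous blocks, extending each
    block by `overlap` rows into its successor so that consecutive
    blocks share unknowns.
    """
    perm = reverse_cuthill_mckee(A, symmetric_mode=True)
    n = A.shape[0]
    bounds = np.linspace(0, n, k + 1, dtype=int)
    blocks = []
    for b in range(k):
        lo = bounds[b]
        hi = min(bounds[b + 1] + overlap, n)  # overlap with next block
        blocks.append(perm[lo:hi])
    return blocks
```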
AWARE - The Automated EUV Wave Analysis and REduction algorithm
NASA Astrophysics Data System (ADS)
Ireland, J.; Inglis, A. R.; Shih, A. Y.; Christe, S.; Mumford, S.; Hayes, L. A.; Thompson, B. J.
2016-10-01
Extreme ultraviolet (EUV) waves are large-scale propagating disturbances observed in the solar corona, frequently associated with coronal mass ejections and flares. Since their discovery, over two hundred papers discussing their properties, causes and physics have been published. However, their fundamental nature and the physics of their interactions with other solar phenomena are still not understood. To further the understanding of EUV waves, and their relation to other solar phenomena, we have constructed the Automated Wave Analysis and REduction (AWARE) algorithm for the detection of EUV waves over the full Sun. The AWARE algorithm is based on a novel image processing approach to isolating the bright wavefront of the EUV wave as it propagates across the corona. AWARE detects the presence of a wavefront, and measures the distance, velocity and acceleration of that wavefront across the Sun. Results from AWARE are compared to results from other algorithms for some well-known EUV wave events. Suggestions are also given for further refinements to the basic algorithm presented here.
Multiscale Simulations of Magnetic Island Coalescence
NASA Technical Reports Server (NTRS)
Dorelli, John C.
2010-01-01
We describe a new interactive parallel Adaptive Mesh Refinement (AMR) framework written in the Python programming language. This new framework, PyAMR, hides the details of parallel AMR data structures and algorithms (e.g., domain decomposition, grid partition, and inter-process communication), allowing the user to focus on the development of algorithms for advancing the solution of a system of partial differential equations on a single uniform mesh. We demonstrate the use of PyAMR by simulating the pairwise coalescence of magnetic islands using the resistive Hall MHD equations. Techniques for coupling different physics models on different levels of the AMR grid hierarchy are discussed.
Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling
NASA Astrophysics Data System (ADS)
Rastigejev, Y.
2011-12-01
Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics at typically used grid resolutions. These numerical difficulties have to be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for the numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding and removing finer levels of resolution in locations of fine-scale development and in locations of smooth solution behavior, respectively. The algorithm is based on the mathematically well-established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with appropriate threshold criteria to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present Global Chemical Transport Models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of high numerical plume dilution caused by numerical diffusion combined with the non-uniformity of the atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order of magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm applied to the traveling plume problem accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform numerical grids.
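The refinement criterion can be illustrated with wavelet detail coefficients acting as local error estimates; the 1-D sketch below uses PyWavelets and a simple magnitude threshold, which is a schematic stand-in for the paper's multilevel criteria.

```python
import numpy as np
import pywt

def refinement_flags(field, wavelet="db2", level=3, tol=1e-3):
    """Flag sample locations needing finer resolution (WAMR idea).

    Where any wavelet detail coefficient covering a location exceeds
    `tol`, the grid should be refined there; elsewhere it may be
    coarsened. 1-D for clarity.
    """
    coeffs = pywt.wavedec(field, wavelet, level=level)
    flags = np.zeros(len(field), dtype=bool)
    for detail in coeffs[1:]:            # detail bands, coarse to fine
        # Each coefficient at this band covers a block of samples.
        stride = int(np.ceil(len(field) / len(detail)))
        for k, d in enumerate(detail):
            if abs(d) > tol:
                flags[k * stride:(k + 1) * stride] = True
    return flags
```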
A 2D range Hausdorff approach to 3D facial recognition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin
2004-11-01
This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N^2) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
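One standard way to get a linear-time directed Hausdorff distance on a 2D grid is to precompute a distance transform of the template once and then take a max over probe pixels; the sketch below does exactly that on validity masks, a simplification of the paper's range-image matching (which also compares range values and refines alignment).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def directed_hausdorff_range(probe_valid, template_valid):
    """Directed Hausdorff between two registered 2D range-image masks.

    distance_transform_edt on the template's complement yields, for
    every pixel, the distance to the nearest template pixel, so the
    max over probe pixels costs O(N) after one transform.
    """
    dist_to_template = distance_transform_edt(~template_valid)
    return dist_to_template[probe_valid].max()
```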
A Voxel-Based Filtering Algorithm for Mobile LIDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are first partitioned in the xy-plane into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed in terms of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
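The voxelization that both steps rest on is a one-liner in NumPy; the helper below (names ours) returns the occupied voxels on which upward growing and curvature tests would then operate.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Group LiDAR points into 3-D voxels (first stage of the filter).

    Returns per-point integer voxel indices, the unique occupied
    voxels, and an inverse map from points to occupied voxels.
    """
    idx = np.floor(points / voxel_size).astype(np.int64)   # (N, 3)
    voxels, inverse = np.unique(idx, axis=0, return_inverse=True)
    return idx, voxels, inverse

# Example with 10 cm voxels:
# idx, voxels, inverse = voxelize(xyz, 0.1)
# points falling in voxel v: xyz[inverse == v]
```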
Remote Sensing of Particulate Organic Carbon Pools in the High-Latitude Oceans
NASA Technical Reports Server (NTRS)
Stramski, Dariusz; Stramska, Malgorzata
2005-01-01
The general goal of this project was to characterize spatial distributions at basin scales and variability on monthly to interannual timescales of particulate organic carbon (POC) in the high-latitude oceans. The primary objectives were: (1) To collect in situ data in the north polar waters of the Atlantic and in the Southern Ocean, necessary for the derivation of POC ocean color algorithms for these regions. (2) To derive regional POC algorithms and refine existing regional chlorophyll (Chl) algorithms, to develop understanding of processes that control bio-optical relationships underlying ocean color algorithms for POC and Chl, and to explain bio-optical differentiation between the examined polar regions and within the regions. (3) To determine basin-scale spatial patterns and temporal variability on monthly to interannual scales in satellite-derived estimates of POC and Chl pools in the investigated regions for the period of time covered by SeaWiFS and MODIS missions.
A novel orthoimage mosaic method using the weighted A* algorithm for UAV imagery
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhou, Shunping; Xiong, Xiaodong; Zhu, Junfeng
2017-12-01
A weighted A* algorithm is proposed to select optimal seam-lines in orthoimage mosaicking for UAV (unmanned aerial vehicle) imagery. The whole workflow includes four steps: the initial seam-line network is first generated by a standard Voronoi diagram algorithm; an edge diagram is then detected based on DSM (digital surface model) data; the vertices (conjunction nodes) of the initial network are relocated, since some of them fall on high objects (buildings, trees and other artificial structures); and the initial seam-lines are finally refined using the weighted A* algorithm, based on the edge diagram and the relocated vertices. The method was tested with two real UAV datasets. Preliminary results show that the proposed method produces acceptable mosaic images in both urban and mountainous areas, and is better than the results of state-of-the-art methods on these datasets.
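Weighted A* itself is standard: the priority becomes f = g + w·h with w > 1, trading strict optimality for speed. The sketch below runs it on a 2-D cost grid, where the cost would encode the edge diagram (cheap along detected edges, expensive across buildings); the grid connectivity and heuristic are our simplifications.

```python
import heapq
import itertools
import numpy as np

def weighted_astar(cost, start, goal, w=1.5):
    """Weighted A* over a 2-D cost grid (seam-line selection sketch).

    4-connected grid, Manhattan heuristic; returns the path from
    start to goal as a list of (row, col) tuples.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tick = itertools.count()              # tie-breaker for the heap
    openq = [(w * h(start), next(tick), 0.0, start, None)]
    came, gbest = {}, {start: 0.0}
    while openq:
        _, _, g, cur, parent = heapq.heappop(openq)
        if cur in came:
            continue
        came[cur] = parent
        if cur == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < cost.shape[0]
                    and 0 <= nxt[1] < cost.shape[1]):
                continue
            ng = g + cost[nxt]
            if ng < gbest.get(nxt, np.inf):
                gbest[nxt] = ng
                heapq.heappush(openq, (ng + w * h(nxt), next(tick),
                                       ng, nxt, cur))
    path, p = [], goal                    # walk back to recover seam
    while p is not None:
        path.append(p)
        p = came.get(p)
    return path[::-1]
```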
Validation of circulating BNP level >1000 pg/ml in all-cause mortality: A retrospective study.
Sakamoto, Daisuke; Sakamoto, Shigeru; Kanda, Tsugiyasu
2015-08-01
To determine the primary diseases and prognoses of patients with highly elevated levels of B-type natriuretic peptide (BNP; >1000 pg/ml), with or without heart failure. Medical records and echocardiograms of patients with BNP levels that fell within one of three predetermined categories (>1000 pg/ml, 200-1000 pg/ml and <200 pg/ml) were retrospectively reviewed. There were no significant between-group differences in duration of hospitalization. Patients with BNP levels >1000 pg/ml (n = 103) or 200-1000 pg/ml (n = 100) had significantly worse 3-year survival than those with BNP levels <200 pg/ml (n = 100). The majority of patients (64/103) in the BNP >1000 pg/ml group had heart failure. The main cause of death in patients with other causes of BNP levels >1000 pg/ml (39/103) was community acquired pneumonia. A BNP level >1000 pg/ml has clinical importance in primary care medicine and hospital settings. © The Author(s) 2015.
46 CFR 52.01-135 - Inspection and tests (modifies PG-90 through PG-100).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Inspection and tests (modifies PG-90 through PG-100). 52.01-135 Section 52.01-135 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-135 Inspection and tests (modifies PG-90 through PG-100). (a) Requirements. Inspection and test...
46 CFR 52.01-105 - Piping, valves and fittings (modifies PG-58 and PG-59).
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Piping, valves and fittings (modifies PG-58 and PG-59). 52.01-105 Section 52.01-105 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-105 Piping, valves and fittings (modifies PG-58 and PG-59). (a) Boiler external piping within...
46 CFR 52.01-140 - Certification by stamping (modifies PG-104 through PG-113).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Certification by stamping (modifies PG-104 through PG-113). 52.01-140 Section 52.01-140 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-140 Certification by stamping (modifies PG-104 through PG-113). (a) All boilers built in...
46 CFR 52.01-135 - Inspection and tests (modifies PG-90 through PG-100).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Inspection and tests (modifies PG-90 through PG-100). 52.01-135 Section 52.01-135 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-135 Inspection and tests (modifies PG-90 through PG-100). (a) Requirements. Inspection and test...
46 CFR 52.01-140 - Certification by stamping (modifies PG-104 through PG-113).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Certification by stamping (modifies PG-104 through PG-113). 52.01-140 Section 52.01-140 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-140 Certification by stamping (modifies PG-104 through PG-113). (a) All boilers built in...
46 CFR 52.01-145 - Manufacturers' data report forms (modifies PG-112 and PG-113).
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Manufacturers' data report forms (modifies PG-112 and PG-113). 52.01-145 Section 52.01-145 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-145 Manufacturers' data report forms (modifies PG-112 and PG-113). The manufacturers'...
46 CFR 52.01-145 - Manufacturers' data report forms (modifies PG-112 and PG-113).
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Manufacturers' data report forms (modifies PG-112 and PG-113). 52.01-145 Section 52.01-145 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-145 Manufacturers' data report forms (modifies PG-112 and PG-113). The manufacturers'...
46 CFR 52.01-145 - Manufacturers' data report forms (modifies PG-112 and PG-113).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Manufacturers' data report forms (modifies PG-112 and PG-113). 52.01-145 Section 52.01-145 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-145 Manufacturers' data report forms (modifies PG-112 and PG-113). The manufacturers'...
46 CFR 52.01-105 - Piping, valves and fittings (modifies PG-58 and PG-59).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Piping, valves and fittings (modifies PG-58 and PG-59). 52.01-105 Section 52.01-105 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-105 Piping, valves and fittings (modifies PG-58 and PG-59). (a) Boiler external piping within...
46 CFR 52.01-140 - Certification by stamping (modifies PG-104 through PG-113).
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Certification by stamping (modifies PG-104 through PG-113). 52.01-140 Section 52.01-140 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-140 Certification by stamping (modifies PG-104 through PG-113). (a) All boilers built in...
46 CFR 52.01-140 - Certification by stamping (modifies PG-104 through PG-113).
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Certification by stamping (modifies PG-104 through PG-113). 52.01-140 Section 52.01-140 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-140 Certification by stamping (modifies PG-104 through PG-113). (a) All boilers built in...
46 CFR 52.01-135 - Inspection and tests (modifies PG-90 through PG-100).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Inspection and tests (modifies PG-90 through PG-100). 52.01-135 Section 52.01-135 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-135 Inspection and tests (modifies PG-90 through PG-100). (a) Requirements. Inspection and test...
46 CFR 52.01-145 - Manufacturers' data report forms (modifies PG-112 and PG-113).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Manufacturers' data report forms (modifies PG-112 and PG-113). 52.01-145 Section 52.01-145 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-145 Manufacturers' data report forms (modifies PG-112 and PG-113). The manufacturers'...
46 CFR 52.01-105 - Piping, valves and fittings (modifies PG-58 and PG-59).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Piping, valves and fittings (modifies PG-58 and PG-59). 52.01-105 Section 52.01-105 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-105 Piping, valves and fittings (modifies PG-58 and PG-59). (a) Boiler external piping within...
46 CFR 52.01-90 - Materials (modifies PG-5 through PG-13).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Materials (modifies PG-5 through PG-13). 52.01-90 Section 52.01-90 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-90 Materials (modifies PG-5 through PG-13). (a) Material subject to stress due to pressure must conform to...
46 CFR 52.01-90 - Materials (modifies PG-5 through PG-13).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Materials (modifies PG-5 through PG-13). 52.01-90 Section 52.01-90 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-90 Materials (modifies PG-5 through PG-13). (a) Material subject to stress due to pressure must conform to...
46 CFR 52.01-140 - Certification by stamping (modifies PG-104 through PG-113).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Certification by stamping (modifies PG-104 through PG-113). 52.01-140 Section 52.01-140 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-140 Certification by stamping (modifies PG-104 through PG-113). (a) All boilers built in...
46 CFR 52.01-135 - Inspection and tests (modifies PG-90 through PG-100).
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Inspection and tests (modifies PG-90 through PG-100). 52.01-135 Section 52.01-135 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-135 Inspection and tests (modifies PG-90 through PG-100). (a) Requirements. Inspection and test...
46 CFR 52.01-145 - Manufacturers' data report forms (modifies PG-112 and PG-113).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Manufacturers' data report forms (modifies PG-112 and PG-113). 52.01-145 Section 52.01-145 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-145 Manufacturers' data report forms (modifies PG-112 and PG-113). The manufacturers'...
46 CFR 52.01-90 - Materials (modifies PG-5 through PG-13).
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Materials (modifies PG-5 through PG-13). 52.01-90 Section 52.01-90 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-90 Materials (modifies PG-5 through PG-13). (a) Material subject to stress due to pressure must conform to...
Resistance Phenotypes Mediated by Aminoacyl-Phosphatidylglycerol Synthases
Arendt, Wiebke; Hebecker, Stefanie; Jäger, Sonja; Nimtz, Manfred
2012-01-01
The specific aminoacylation of the phospholipid phosphatidylglycerol (PG) with alanine or with lysine catalyzed by aminoacyl-phosphatidylglycerol synthases (aaPGS) was shown to render various organisms less susceptible to antibacterial agents. This study makes use of Pseudomonas aeruginosa chimeric mutant strains producing lysyl-phosphatidylglycerol (L-PG) instead of the naturally occurring alanyl-phosphatidylglycerol (A-PG) to study the resulting impact on bacterial resistance. Consequences of such artificial phospholipid composition were studied in the presence of an overall of seven antimicrobials (β-lactams, a lipopeptide antibiotic, cationic antimicrobial peptides [CAMPs]) to quantitatively assess the effect of A-PG substitution (with L-PG, L-PG and A-PG, increased A-PG levels). For the employed Gram-negative P. aeruginosa model system, an exclusive charge repulsion mechanism does not explain the attenuated antimicrobial susceptibility due to PG modification. Additionally, the specificity of nine orthologous aaPGS enzymes was experimentally determined. The newly characterized protein sequences allowed for the establishment of a significant group of A-PG synthase sequences which were bioinformatically compared to the related group of L-PG synthesizing enzymes. The analysis revealed a diverse origin for the evolution of A-PG and L-PG synthases, as the specificity of an individual enzyme is not reflected in terms of a characteristic sequence motif. This finding is relevant for future development of potential aaPGS inhibitors. PMID:22267511
Tomassen, Monic M M; Barrett, Diane M; van der Valk, Henry C P M; Woltering, Ernst J
2007-01-01
An important aspect of the ripening process of tomato fruit is softening. Softening is accompanied by hydrolysis of the pectin in the cell wall by pectinases, causing loss of cell adhesion in the middle lamella. One of the most significant pectin-degrading enzymes is polygalacturonase (PG). Previous reports have shown that PG in tomato may exist in different forms (PG1, PG2a, PG2b, and PGx) commonly referred to as PG isoenzymes. The gene product PG2 is differentially glycosylated and is thought to associate with other proteins to form PG1 and PGx. This association is thought to modulate its pectin-degrading activity in planta. An 8 kDa protein that is part of the tomato PG1 multiprotein complex has been isolated, purified, and functionally characterized. This protein, designated 'activator' (ACT), belongs to the class of non-specific lipid transfer proteins (nsLTPs). ACT is capable of 'converting' the gene product PG2 into a more active and heat-stable form, which increases PG-mediated pectin degradation in vitro and stimulates PG-mediated tissue breakdown in planta. This finding suggests a new, not previously identified, function for nsLTPs in the modification of hydrolytic enzyme activity. It is proposed that ACT plays a role in the modulation of PG activity during tomato fruit softening.
NASA Astrophysics Data System (ADS)
Kaur, Gagandeep; Gupta, Shuchi; Sachdeva, Ritika; Dharamvir, Keya
2018-05-01
Adsorption of small gas molecules (such as CO and O2) on pristine graphene (PG) and Li-adsorbed graphene (PG-Li) has been investigated using first-principles methods within density functional theory (DFT). We find that PG-Li has a higher chemical reactivity towards the gas molecules than PG, and these molecules have higher adsorption energies on this surface. Moreover, the strong interactions between PG-Li and the adsorbed molecules (as compared to PG and the gas molecules) induce dramatic changes in the electronic properties of Li-adsorbed PG, making PG-Li a promising candidate sensing material for CO and O2 gases.
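The adsorption energy compared in such studies is conventionally the total-energy difference below (the standard DFT definition, stated here for clarity rather than quoted from this abstract); a more negative value indicates stronger binding:

```latex
E_{\mathrm{ads}} = E_{\text{PG-Li+gas}} - E_{\text{PG-Li}} - E_{\text{gas}}
```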
A Finite Element Method to Correct Deformable Image Registration Errors in Low-Contrast Regions
Zhong, Hualiang; Kim, Jinkoo; Li, Haisen; Nurushev, Teamour; Movsas, Benjamin; Chetty, Indrin J.
2012-01-01
Image-guided adaptive radiotherapy requires deformable image registration to map radiation dose back and forth between images. The purpose of this study is to develop a novel method to improve the accuracy of an intensity-based image registration algorithm in low-contrast regions. A computational framework has been developed in this study to improve the quality of the “demons” registration. For each voxel in the registration’s target image, the standard deviation of image intensity in a neighborhood of this voxel was calculated. A mask for high-contrast regions was generated based on these standard deviations. In the masked regions, a tetrahedral mesh was refined recursively so that a sufficient number of tetrahedral nodes in these regions could be selected as driving nodes. An elastic system driven by the displacements of the selected nodes was formulated using a finite element method (FEM) and implemented on the refined mesh. The displacements of these driving nodes were generated with the “demons” algorithm. The solution of the system was derived using a conjugate gradient method, and interpolated to generate a displacement vector field for the registered images. The FEM correction method was compared with the “demons” algorithm on the CT images of lung and prostate patients. The performance of the FEM correction relative to the “demons” registration was analyzed based on the physical properties of their deformation maps, and quantitatively evaluated through a benchmark model developed specifically for this study. Compared to the benchmark model, the “demons” registration has a maximum error of 1.2 cm, which can be corrected by the FEM method to 0.4 cm, and the average error of the “demons” registration is reduced from 0.17 cm to 0.11 cm. For the CT images of lung and prostate patients, the deformation maps generated by the “demons” algorithm were found to be unrealistic in several places. In these places, the displacement differences between the “demons” registrations and their FEM corrections were in the range of 0.4 cm to 1.1 cm. The mesh refinement and FEM simulation were implemented in a single-threaded application which requires about 45 minutes of computation time on a 2.6 GHz computer. This study has demonstrated that the finite element method can be integrated with intensity-based image registration algorithms to improve their registration accuracy, especially in low-contrast regions. PMID:22581269
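The high-contrast mask described above can be computed efficiently with two box filters, using std² = E[x²] − E[x]²; the neighborhood size and threshold below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def high_contrast_mask(image, size=5, thresh=20.0):
    """Mask of high-contrast voxels from the local intensity
    standard deviation, as used to pick FEM driving regions.

    image: 2-D or 3-D intensity array (e.g., CT in HU).
    """
    img = image.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    # Clamp tiny negative values caused by floating-point round-off.
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return local_std > thresh
```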
Zhang, Mei; Zhu, Lin; Cui, Steve W; Wang, Qi; Zhou, Ting; Shen, Hengsheng
2011-01-01
Fractionation and purification of mushroom polysaccharides is a critical process for mushroom clinical application. After a hot-water treatment, the crude Pleurotus geesteranus (PG) extract was further fractionated into four fractions (PG-1, -2, -3, -4) using gradient precipitation with water and ammonium sulphate. By controlling the initial polymer concentration and the ratio of solvents, this process produced PG fractions with high chemical uniformity and narrow Mw distribution, without free proteins. Structurally, PG-1 and PG-2 are pure homopolysaccharides mainly composed of glucose, and PG-3 and PG-4 are heteropolysaccharide-protein complexes. PG-2, a high-Mw fraction mainly composed of glucose, presented significant cytotoxicity toward human breast cancer cells at concentrations of 200 and 100 μg/ml. Here, we report a new mushroom polysaccharide extraction and fractionation method, with which we produced four fractions of PG, of which PG-2 showed effective anti-tumour activity. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.
Cervone, Felice; De Lorenzo, Giulia; Degrà, Luisa; Salvi, Giovanni; Bergami, Mario
1987-01-01
Homogeneous endo-polygalacturonase (PG) was covalently bound to cyanogen-bromide-activated Sepharose, and the resulting PG-Sepharose conjugate was utilized to purify, by affinity chromatography, a protein from Phaseolus vulgaris hypocotyls that binds to and inhibits PG. Isoelectric focusing of the purified PG-inhibiting protein (PGIP) showed a major protein band that coincided with PG-inhibiting activity. PGIP formed a complex with PG at pH 5.0 and at low salt concentrations. The complex dissociated in 0.5 M Na-acetate at pH values lower than 4.5 or higher than 6.0. Formation of the PG-PGIP complex resulted in complete inhibition of PG activity. PG activity was restored upon dissociation of the complex. The protein exhibited inhibitory activity toward PGs from Colletotrichum lindemuthianum, Fusarium moniliforme and Aspergillus niger. The possible role of PGIP in regulating the activity of fungal PGs and their ability to elicit plant defense reactions is discussed. PMID:16665751
46 CFR 52.01-120 - Safety valves and safety relief valves (modifies PG-67 through PG-73).
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Safety valves and safety relief valves (modifies PG-67 through PG-73). 52.01-120 Section 52.01-120 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-120 Safety valves and safety relief valves (modifies PG-67 through PG-73). (a)...
46 CFR 52.01-120 - Safety valves and safety relief valves (modifies PG-67 through PG-73).
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Safety valves and safety relief valves (modifies PG-67 through PG-73). 52.01-120 Section 52.01-120 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-120 Safety valves and safety relief valves (modifies PG-67 through PG-73). (a)...
46 CFR 52.01-120 - Safety valves and safety relief valves (modifies PG-67 through PG-73).
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Safety valves and safety relief valves (modifies PG-67 through PG-73). 52.01-120 Section 52.01-120 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-120 Safety valves and safety relief valves (modifies PG-67 through PG-73). (a)...
46 CFR 52.01-120 - Safety valves and safety relief valves (modifies PG-67 through PG-73).
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Safety valves and safety relief valves (modifies PG-67 through PG-73). 52.01-120 Section 52.01-120 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-120 Safety valves and safety relief valves (modifies PG-67 through PG-73). (a)...
46 CFR 52.01-120 - Safety valves and safety relief valves (modifies PG-67 through PG-73).
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Safety valves and safety relief valves (modifies PG-67 through PG-73). 52.01-120 Section 52.01-120 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING POWER BOILERS General Requirements § 52.01-120 Safety valves and safety relief valves (modifies PG-67 through PG-73). (a)...
NASA Astrophysics Data System (ADS)
Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias
2018-04-01
This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions are drawn using techniques like hybridization (e.g. local search) and others to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
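A toy version of such a hybrid search is sketched below: an evolutionary loop over a translation-only parameter space scored by normalized cross-correlation, followed by a local hill-climb (the hybridization step). The similarity measure, search space, and all parameters are our assumptions; the paper's actual operators may differ.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def evolve_shift(opt, sar, pop=20, gens=30, sigma=8.0, margin=32, seed=0):
    """Evolutionary search for the (dx, dy) shift aligning an optical
    patch to a same-size SAR patch, then a local refinement."""
    rng = np.random.default_rng(seed)
    h, w = sar.shape
    ref = sar[margin:h - margin, margin:w - margin]

    def fitness(s):
        dx, dy = int(round(s[0])), int(round(s[1]))
        if abs(dx) >= margin or abs(dy) >= margin:
            return -np.inf                 # shift outside search window
        win = opt[margin + dy:h - margin + dy, margin + dx:w - margin + dx]
        return ncc(win, ref)

    parents = rng.normal(0.0, sigma, size=(pop, 2))
    for g in range(gens):
        scores = np.array([fitness(s) for s in parents])
        elite = parents[np.argsort(scores)[-pop // 2:]]
        # Mutate the elite with a slowly shrinking step size.
        children = elite + rng.normal(0.0, sigma * 0.9 ** g, size=elite.shape)
        parents = np.vstack([elite, children])
    best = np.round(parents[np.argmax([fitness(s) for s in parents])])
    improved = True
    while improved:                        # hill-climb on the integer grid
        improved = False
        for step in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = best + step
            if fitness(cand) > fitness(best):
                best, improved = cand, True
    return best
```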
A survey on Keeler's theorem and application of the symmetric group to a swapping game
NASA Astrophysics Data System (ADS)
Pratama, Yohanssen; Prakasa, Yohenry
2017-01-01
An episode of Futurama features a two-body mind-switching machine that will not work more than once on the same pair of bodies. The problem is: can the switching be undone so as to restore all minds to their original bodies? Ken Keeler found an algorithm that undoes any mind-scrambling permutation, and Lihua Huang found a refinement of it. We look at how the puzzle can be modeled in terms of group theory, how the symmetric group can be used to solve it, and how to find the most efficient solution. We then implement the algorithms as computer programs and examine the effect of the choice of transpositions on the algorithmic complexity. The number of steps given by each algorithm differs, and one of the algorithms has an advantage in terms of efficiency. We compare the Keeler and Huang algorithms to see whether any difference appears when they are run as computer programs, although the asymptotic complexity may remain the same.
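To make the construction concrete, here is a minimal Python sketch of Keeler's per-cycle swap sequence together with a brute-force simulation that verifies it; the dictionary representation (body maps to the mind currently inside it) and the helper names x and y are our own conventions.

```python
def keeler_swaps(perm, x, y):
    """Swap sequence undoing a mind-swap permutation (Keeler's method).

    perm maps body -> mind currently inside it; x and y are two fresh
    bodies that never used the machine. Returns a list of body pairs,
    each pair used at most once, after which every mind is home.
    """
    swaps, seen, n_cycles = [], set(), 0
    for start in perm:
        if start in seen or perm[start] == start:
            continue
        cycle, c = [], start        # body cycle[i] holds mind cycle[i+1]
        while c not in seen:
            seen.add(c)
            cycle.append(c)
            c = perm[c]
        k = len(cycle)
        swaps.append((x, cycle[0]))
        swaps += [(y, cycle[j]) for j in range(1, k)]
        swaps += [(x, cycle[1]), (y, cycle[0])]
        n_cycles += 1               # each cycle fix leaves x and y swapped
    if n_cycles % 2 == 1:           # odd number of fixes: unswap x and y
        swaps.append((x, y))
    return swaps

def apply_swaps(perm, swaps):
    """Simulate the machine to verify the sequence restores everyone."""
    state = dict(perm)
    for a, b in swaps:
        state.setdefault(a, a)
        state.setdefault(b, b)
        state[a], state[b] = state[b], state[a]
    return state

# A 2-cycle and a 3-cycle, undone with helpers 'x' and 'y'.
perm = {1: 2, 2: 1, 3: 4, 4: 5, 5: 3}
seq = keeler_swaps(perm, 'x', 'y')
final = apply_swaps(perm, seq)
assert all(final[b] == b for b in final)   # everyone restored
assert len(set(seq)) == len(seq)           # no pair used twice
```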
Assessing nutritional status in cancer: role of the Patient-Generated Subjective Global Assessment.
Jager-Wittenaar, Harriët; Ottery, Faith D
2017-09-01
The Scored Patient-Generated Subjective Global Assessment (PG-SGA) is used internationally as the reference method for proactive risk assessment (screening), assessment, monitoring and triaging for interventions in patients with cancer. This review aims to explain the rationale behind and data supporting the PG-SGA, and to provide an overview of recent developments in the utilization of the PG-SGA and the PG-SGA Short Form. The PG-SGA was designed in the context of a paradigm known as 'anabolic competence'. Uniquely, the PG-SGA evaluates the patient's status as a dynamic rather than static process. The PG-SGA has received new attention, particularly as a screening instrument for nutritional risk or deficit, identifying treatable impediments and guiding patients and professionals in triaging for interdisciplinary interventions. The international use of the PG-SGA indicates a critical need for high-quality and linguistically validated translations of the PG-SGA. As a 4-in-1 instrument, the PG-SGA can streamline clinic work flow and improve the quality of interaction between the clinician and the patient. The availability of multiple high-quality language versions of the PG-SGA enables the inclusion of the PG-SGA in international multicenter studies, facilitating meta-analysis and benchmarking across countries.
T lymphocyte activation and cytokine expression in periapical granulomas and radicular cysts.
Ihan Hren, N; Ihan, A
2009-02-01
Radicular cysts (RCs) are periapical lesions resulting in jaw bone destruction. The inflammatory dental periapical granuloma (PG) is considered to be the origin of RC formation; however, the mechanism of RC development remains unclear. Cell suspensions from the surgically extirpated tissue of 27 RCs and 25 PGs were obtained. Bacteriological analysis of the PG tissue samples was performed in order to define two major groups of PG according to the prevailing causative bacterial infection: the streptococcal PG (PG-S, n=10) and the anaerobe PG (PG-A, n=9) groups. The inflammatory response of tissue-infiltrating lymphocytes was assessed by following T lymphocyte activation (HLA-DR expression) as well as interferon gamma (IFN-gamma) and interleukin 4 (IL-4) production, evaluated by flow cytometry. In comparison to RCs, both types of PG contained a higher proportion of activated T cells (HLA-DR) and a lower proportion of IL-4-producing cells. PG-A tissue contained an increased percentage of CD3 cells and an increased percentage of T helper 1 (Th1) cells in comparison with PG-S. In RCs the IFN-gamma production is higher than in streptococcal PG-S but similar to that in PG-A. Tissue infiltration by Th2 cells and IL-4 production are likely to play an etiopathogenic role in RC formation.
Modeling the blockage of Lg waves from 3-D variations in crustal structure
NASA Astrophysics Data System (ADS)
Sanborn, Christopher J.; Cormier, Vernon F.
2018-05-01
Comprised of S waves trapped in Earth's crust, the high-frequency (2-10 Hz) Lg wave is important for discriminating earthquakes from explosions by comparing its amplitude and waveform to those of Pg and Pn waves. Lateral variations in crustal structure, including variations in crustal thickness, intrinsic attenuation, and scattering, affect the efficiency of Lg propagation and its consistency as a source discriminant at regional (200-1500 km) distances. To investigate the effects of laterally varying Earth structure on the efficiency of propagation of Lg and Pg, we apply a radiative transport algorithm to model complete, high-frequency (2-4 Hz) regional coda envelopes. The algorithm propagates packets of energy with ray theory through large-scale 3-D structure, and includes stochastic effects of multiple scattering by small-scale heterogeneities within the large-scale structure. Source-radiation patterns are described by moment tensors. Seismograms of explosion and earthquake sources are synthesized in canonical models to predict effects on waveforms of paths crossing regions of crustal thinning (pull-apart basins and ocean/continent transitions) and thickening (collisional mountain belts). For paths crossing crustal thinning regions, Lg is amplified at receivers within the thinned region but strongly disrupted and attenuated at receivers beyond the thinned region. For paths crossing regions of crustal thickening, Lg amplitude is attenuated at receivers within the thickened region, but experiences little or no reduction in amplitude at receivers beyond the thickened region. The length of the Lg propagation path within a thickened region, and the complexity of over- and under-thrust crustal layers, can produce localized zones of Lg amplification or attenuation. Regions of intense scattering within laterally homogeneous models of the crust increase Lg attenuation but do not disrupt its coda shape.
Shared Genetic Contributions to Anxiety Disorders and Pathological Gambling in a Male Population
Giddens, Justine L.; Xian, Hong; Scherrer, Jeffrey F.; Eisen, Seth A.; Potenza, Marc N.
2013-01-01
Background: Pathological gambling (PG) frequently co-occurs with anxiety disorders. However, the extent to which the co-occurrence is related to genetic or environmental factors across PG and anxiety disorders is not known. Method: Data from the Vietnam Era Twin Registry (n=7869, male twins) were examined in bivariate models to estimate genetic and shared and unique environmental contributions to PG and generalized anxiety disorder (GAD) and PG and panic disorder (PD). Results: While both genetic and unique environmental factors contributed individually to PG, GAD, and PD, the best fitting model indicated that the relationship between PG and GAD was attributable predominantly to shared genetic contributions (r_A = 0.53). In contrast, substantial correlations were observed between both the genetic (r_A = 0.34) and unique environmental (r_E = 0.31) contributions to PG and PD. Limitations: Results may be limited to middle-aged males. Conclusions: The existence of shared genetic contributions between PG and both GAD and PD suggests that specific genes, perhaps those involved in affect regulation or stress responsiveness, contribute to PG and anxiety disorders. Overlapping environmental contributions to the co-occurrence of PG and PD suggest that common life experiences (e.g., early life trauma) contribute to both PG and PD. Conversely, the data suggest that distinct environmental factors contribute to PG and GAD (e.g., early onset of gambling in PG). Future studies should examine the relationship between PG and anxiety disorders amongst other populations (women, adolescents) to identify specific genetic and environmental influences that account for the manifestation of these disorders and their co-occurrences. PMID:21481943
Shared genetic contributions to anxiety disorders and pathological gambling in a male population.
Giddens, Justine L; Xian, Hong; Scherrer, Jeffrey F; Eisen, Seth A; Potenza, Marc N
2011-08-01
Pathological gambling (PG) frequently co-occurs with anxiety disorders. However, the extent to which the co-occurrence is related to genetic or environmental factors across PG and anxiety disorders is not known. Data from the Vietnam Era Twin Registry (n=7869, male twins) were examined in bivariate models to estimate genetic and shared and unique environmental contributions to PG and generalized anxiety disorder (GAD) and PG and panic disorder (PD). While both genetic and unique environmental factors contributed individually to PG, GAD, and PD, the best-fitting model indicated that the relationship between PG and GAD was attributable predominantly to shared genetic contributions (rA=0.53). In contrast, substantial correlations were observed between both the genetic (rA=0.34) and unique environmental (rE=0.31) contributions to PG and PD. Results may be limited to middle-aged males. The existence of shared genetic contributions between PG and both GAD and PD suggests that specific genes, perhaps those involved in affect regulation or stress responsiveness, contribute to PG and anxiety disorders. Overlapping environmental contributions to the co-occurrence of PG and PD suggest that common life experiences (e.g., early life trauma) contribute to both PG and PD. Conversely, the data suggest that distinct environmental factors contribute to PG and GAD (e.g., early onset of gambling in PG). Future studies should examine the relationship between PG and anxiety disorders amongst other populations (women and adolescents) to identify specific genetic and environmental influences that account for the manifestation of these disorders and their co-occurrences. Copyright © 2011. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
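The recursive-subdivision idea in the preceding abstract can be illustrated with a short sketch. This is a hypothetical quadtree rendering in Python (the paper stores the grid in a binary tree, and its cut-cell clipping and flow solver are omitted); the class names and the refinement flag are assumptions for illustration.

# Minimal sketch of recursive Cartesian cell subdivision with a simple
# refine-by-flag criterion. Leaf cells are the active grid; the tree gives
# cell-to-cell connectivity for free.
class Cell:
    def __init__(self, x0, y0, x1, y1, level=0):
        self.bounds = (x0, y0, x1, y1)
        self.level = level
        self.children = []          # empty => leaf cell

    def subdivide(self):
        x0, y0, x1, y1 = self.bounds
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        self.children = [Cell(x0, y0, xm, ym, self.level + 1),
                         Cell(xm, y0, x1, ym, self.level + 1),
                         Cell(x0, ym, xm, y1, self.level + 1),
                         Cell(xm, ym, x1, y1, self.level + 1)]

    def leaves(self):
        if not self.children:
            yield self
        else:
            for c in self.children:
                yield from c.leaves()

def refine(root, needs_refinement, max_level=6):
    """Recursively subdivide every flagged leaf up to max_level."""
    for leaf in list(root.leaves()):
        if leaf.level < max_level and needs_refinement(leaf):
            leaf.subdivide()
            refine(leaf, needs_refinement, max_level)

def near_body(c):
    """Flag cells whose centre lies near a circular body of radius 0.3."""
    x0, y0, x1, y1 = c.bounds
    cx, cy = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    dist = ((cx - 0.5) ** 2 + (cy - 0.5) ** 2) ** 0.5
    return abs(dist - 0.3) < 1.0 / 2 ** c.level   # within a cell-size band

root = Cell(0.0, 0.0, 1.0, 1.0)   # one cell encompassing the whole domain
refine(root, near_body)
print(sum(1 for _ in root.leaves()), "leaf cells")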
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator, since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load-balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
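PLUM's central ordering trick, repartitioning on predicted post-refinement workloads before any elements are subdivided, can be sketched briefly. The weight model and the greedy longest-processing-time assignment below are illustrative stand-ins, not the paper's actual repartitioner or mapping algorithms.

import heapq

# Sketch of PLUM's key idea: estimate post-refinement workload from the
# edges already targeted for refinement, and repartition on those predicted
# weights *before* subdividing, so refinement itself is load balanced.
def predicted_weight(n_targeted_edges):
    # An element subdivides into more children the more of its edges are
    # marked; a rough proxy: weight grows with the number of targeted edges.
    return 1 + n_targeted_edges

def repartition(elements, n_procs):
    """Greedy longest-processing-time assignment of predicted weights."""
    heap = [(0, p) for p in range(n_procs)]      # (load, processor)
    heapq.heapify(heap)
    assignment = {}
    for elem, marked in sorted(elements.items(),
                               key=lambda kv: -predicted_weight(kv[1])):
        load, p = heapq.heappop(heap)
        assignment[elem] = p
        heapq.heappush(heap, (load + predicted_weight(marked), p))
    return assignment

# Elements mapped to their number of edges targeted for refinement.
elements = {0: 6, 1: 0, 2: 3, 3: 6, 4: 1, 5: 0, 6: 5, 7: 2}
print(repartition(elements, n_procs=2))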
Mansberger, Steven L; Menda, Shivali A; Fortune, Brad A; Gardiner, Stuart K; Demirel, Shaban
2017-02-01
To characterize the error of optical coherence tomography (OCT) measurements of retinal nerve fiber layer (RNFL) thickness when using automated retinal layer segmentation algorithms without manual refinement. Cross-sectional study. This study was set in a glaucoma clinical practice, and the dataset included 3490 scans from 412 eyes of 213 individuals with a diagnosis of glaucoma or glaucoma suspect. We used spectral-domain OCT (Spectralis) to measure RNFL thickness in a 6-degree peripapillary circle and exported the native "automated segmentation only" results. In addition, we exported the results after "manual refinement" to correct errors in the automated segmentation of the anterior (internal limiting membrane) and the posterior boundary of the RNFL. Our outcome measures included differences in RNFL thickness and glaucoma classification (i.e., normal, borderline, or outside normal limits) between scans with automated segmentation only and scans using manual refinement. Automated segmentation only resulted in a thinner global RNFL thickness (1.6 μm thinner, P < .001) when compared to manual refinement. When adjusted by operator, a multivariate model showed increased differences with decreasing RNFL thickness (P < .001), decreasing scan quality (P < .001), and increasing age (P < .03). Manual refinement changed 298 of 3486 (8.5%) scans to a different global glaucoma classification, wherein 146 of 617 (23.7%) borderline classifications became normal. Superior and inferior temporal clock hours had the largest differences. Automated segmentation without manual refinement resulted in reduced global RNFL thickness and overestimated the classification of glaucoma. Differences increased in eyes with thinner RNFL, older age, and decreased scan quality. Operators should inspect and manually refine OCT retinal layer segmentation when assessing RNFL thickness in the management of patients with glaucoma. Copyright © 2016 Elsevier Inc. All rights reserved.
Automated Scoring of Short-Answer Reading Items: Implications for Constructs
ERIC Educational Resources Information Center
Carr, Nathan T.; Xi, Xiaoming
2010-01-01
This article examines how the use of automated scoring procedures for short-answer reading tasks can affect the constructs being assessed. In particular, it highlights ways in which the development of scoring algorithms intended to apply the criteria used by human raters can lead test developers to reexamine and even refine the constructs they…
Data Mining Tools Make Flights Safer, More Efficient
NASA Technical Reports Server (NTRS)
2014-01-01
A small data mining team at Ames Research Center developed a set of algorithms ideal for combing through flight data to find anomalies. Dallas-based Southwest Airlines Co. signed a Space Act Agreement with Ames in 2011 to access the tools, helping the company refine its safety practices, improve its safety reviews, and increase flight efficiencies.
Conceptual issues of softcopy photogrammetric workstations
NASA Technical Reports Server (NTRS)
Schenk, Toni; Toth, Charles K.
1992-01-01
A conceptual approach to digital photogrammetry is presented. Automation of photogrammetric processes on digital photogrammetric workstations is considered, with particular attention given to the automatic orientation and surface reconstruction modules. It is suggested that major progress toward autonomous softcopy workstations depends more on advances at the conceptual level than on refinement of system components such as hardware and algorithms.
Clustering Words to Match Conditions: An Algorithm for Stimuli Selection in Factorial Designs
ERIC Educational Resources Information Center
Guasch, Marc; Haro, Juan; Boada, Roger
2017-01-01
With the increasing refinement of language processing models and the new discoveries about which variables can modulate these processes, stimuli selection for experiments with a factorial design is becoming a tough task. Selecting sets of words that differ in one variable, while matching these same words into dozens of other confounding variables…
ERIC Educational Resources Information Center
Weaver, J. Fred
Refinements of work with calculator algorithms previously conducted by the author are reported. Work with "chaining" and the doing/undoing property in addition and subtraction was tested with 24 third-grade students. Results indicated the need for further instruction with both ideas. Students were able to manipulate the calculator keyboard, but…
Fabrication and Vibration Results of 30-cm Pyrolytic Graphite Ion Optics
NASA Technical Reports Server (NTRS)
DePano, Michael K.; Hart, Stephen L.; Hanna, Andrew A.; Schneider, Analyn C.
2004-01-01
Boeing Electron Dynamic Devices, Inc. is currently developing pyrolytic graphite (PG) grids designed to operate on 30-cm NSTAR-type thrusters for the Carbon Based Ion Optics (CBIO) program. The PG technology effort of the CBIO program aims to research PG as a flightworthy material for use in dished ion optics by designing, fabricating, and performance testing 30-cm PG grids. As such, PG grid fabrication results will be discussed, as will PG design considerations and how they must differ from the NSTAR molybdenum grid design. Surface characteristics and surface processing of PG will be explored relative to their effects on voltage breakdown. Part of the CBIO program objectives is to understand the erosion of PG due to xenon ion bombardment. Discussion of PG and CC sputter yields will be presented relative to molybdenum. These sputter yields will be utilized in the life modeling of carbon-based grids. Finally, vibration results of 30-cm PG grids will be presented and compared to a first-order model generated at Boeing EDD. Performance testing results of the PG grids will not be discussed in this paper, as testing has yet to be completed.
Simone, G; Paradiso, A; Cirillo, R; Mangia, A; Rella, G; Wiesel, S; Petroni, S; De Benedictis, G; De Lena, M
1991-01-01
Recently, a method similar to ER.ICA has been proposed for the progesterone receptor (PgR), using two monoclonal antibodies, JZB39 and KD68, specific for human PgR and characterized by molecular weights of 95 and 120 kDa, respectively. A series of 73 breast cancer patients was studied with regard to ER and PgR using both immunocytochemical (ICA) and biochemical (DCC) assays. Results showed no substantial differences between the two methods when considering common clinical-pathological parameters. Overall agreement between the ICA and DCC methods was 79% for PgR and 78% for ER. A slight quantitative correlation was also observed between the "score values" of the ICA method and the fmol content of ER and PgR using the Bravais-Pearson test (r = 0.49 for PgR; r = 0.43 for ER). Specificity of the ICA method was 77% for PgR and 72% for ER; sensitivity was 82% and 83%, respectively. The ICA method is a reliable technique to assess PgR presence as well as ER. Further studies are necessary to evaluate the prognostic role of nuclear PgR.
NASA Astrophysics Data System (ADS)
Guo, Yuan; Zeng, Xiaoqing; Yuan, Haiyan; Huang, Yunmei; Zhao, Yanmei; Wu, Huan; Yang, Jidong
2017-08-01
In this study, a novel method for chiral recognition of phenylglycinol (PG) enantiomers was proposed. First, water-soluble N-acetyl-L-cysteine (NALC)-capped CdTe quantum dots (QDs) were synthesized; experiments showed that the fluorescence intensity of the reaction system was slightly enhanced when PG enantiomers were added to the NALC-capped CdTe QDs, but R-PG and S-PG could not be distinguished. Second, when Ag+ was present in the reaction system, the result was striking: the PG enantiomers made the NALC-capped CdTe QDs produce different fluorescence signals, in which the fluorescence of the S-PG + Ag+ + NALC-CdTe system was significantly enhanced, while the fluorescence of the R-PG + Ag+ + NALC-CdTe system was markedly decreased. Third, the enhancement and decrease of the fluorescence intensity were both directly proportional to the concentrations of S-PG and R-PG, respectively, over the linear range 10^-5 to 10^-7 mol·L^-1. A new method for the simultaneous determination of the PG enantiomers was thus established. The method performed satisfactorily, with a detection limit for PG reaching 10^-7 mol·L^-1 and correlation coefficients for S-PG and R-PG of 0.995 and 0.980, respectively. The method was highly sensitive and selective and had a wider detection range compared with other methods.
Gawroński, Wojciech; Sobiecka, Joanna
2015-11-22
Medical care in disabled sports is crucial both as prophylaxis and as ongoing medical intervention. The aim of this paper was to present changes in the quality of medical care over the consecutive Paralympic Games (PG). The study encompassed 31 paralympians: Turin (11), Vancouver (12), and Sochi (8), competing in cross-country skiing, alpine skiing, biathlon, and snowboarding. The first, questionnaire-based, part of the study was conducted in Poland before the PG. The athletes assessed the quality of care provided by physicians, physiologists, dieticians, and physiotherapists, as well as their cooperation with the massage therapist and the psychologist. The other part of the study concerned the athletes' health before leaving for the PG, as well as their diseases and injuries during the PG. The quality of medical care was poor before the 2006 PG, but satisfactory before the subsequent PG. Only a few athletes made use of psychological support, assessing it as poor before the 2006 PG and satisfactory before the 2010 and 2014 PG. The athletes' health condition was good during all PG. The health status of cross-country skiers was confirmed by a medical fitness certificate before all PG, while that of alpine skiers only before the 2014 PG. There were no serious diseases; training injuries precluded two athletes from participation. The quality of medical care before the PG was poor but became satisfactory during the actual PG. The resulting ad hoc pattern deviates from the accepted standards in medical care in disabled sports.
Yu, M; Qi, R; Chen, C; Yin, J; Ma, S; Shi, W; Wu, Y; Ge, J; Jiang, Y; Tang, L; Xu, Y; Li, Y
2017-02-01
The aims of this study were to develop an effective oral vaccine against enterotoxigenic Escherichia coli (ETEC) infection and to design new and more versatile mucosal adjuvants. Genetically engineered Lactobacillus casei strains expressing F4 (K88) fimbrial adhesin FaeG (rLpPG-2-FaeG) and either co-expressing a heat-labile enterotoxin A (LTA) subunit with an amino acid mutation associated with reduced virulence (LTAK63) and a heat-labile enterotoxin B (LTB) subunit of E. coli (rLpPG-2-LTAK63-co-LTB) or fused-expressing LTAK63 and LTB (rLpPG-2-LTAK63-fu-LTB) were constructed. The immunogenicity of rLpPG-2-FaeG in conjunction with rLpPG-2-LTAK63-co-LTB or rLpPG-2-LTAK63-fu-LTB as an orally administered mucosal adjuvant in mice was evaluated. Results showed that the levels of FaeG-specific serum IgG and mucosal sIgA, as well as the proliferation of lymphocytes, were significantly higher in mice orally co-administered rLpPG-2-FaeG and rLpPG-2-LTAK63-fu-LTB compared with those administered rLpPG-2-FaeG alone, and were lower than in those co-administered rLpPG-2-FaeG and rLpPG-2-LTAK63-co-LTB. Moreover, effective protection was observed after challenge with the F4+ ETEC strain CVCC 230 in mice co-administered rLpPG-2-FaeG and rLpPG-2-LTAK63-co-LTB or rLpPG-2-FaeG and rLpPG-2-LTAK63-fu-LTB, compared with those that received rLpPG-2-FaeG alone. rLpPG-2-FaeG showed greater immunogenicity in combination with LTAK63 and LTB as molecular adjuvants. Recombinant Lactobacillus provides a promising platform for the development of vaccines against F4+ ETEC infection. © 2016 The Society for Applied Microbiology.
Self-Avoiding Walks over Adaptive Triangular Grids
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1998-01-01
In this paper, we present a new approach to constructing a "self-avoiding" walk through a triangular mesh. Unlike the popular approach of visiting mesh elements using space-filling curves, which is based on a geometric embedding, our approach is combinatorial in the sense that it uses the mesh connectivity only. We present an algorithm for constructing a self-avoiding walk which can be applied to any unstructured triangular mesh. The complexity of the algorithm is O(n log n), where n is the number of triangles in the mesh. We show that for hierarchical adaptive meshes, the algorithm can be easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the run-time partitioning and load balancing of adaptive unstructured grids.
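For intuition, a connectivity-only walk of this kind can be demonstrated on the dual graph of a small mesh, where triangles are adjacent if they share an edge. The backtracking search below is a brute-force stand-in for illustration only; the paper's O(n log n) construction instead exploits the regularity of hierarchical refinement rules.

# Illustrative connectivity-only walk through a triangular mesh: search the
# dual graph (triangles adjacent iff they share an edge) for a path that
# visits every triangle exactly once, i.e. a self-avoiding walk.
def dual_adjacency(triangles):
    """triangles: list of 3-tuples of vertex ids."""
    edge_owner, adj = {}, {i: set() for i in range(len(triangles))}
    for i, tri in enumerate(triangles):
        for a, b in ((0, 1), (1, 2), (0, 2)):
            e = tuple(sorted((tri[a], tri[b])))
            if e in edge_owner:
                j = edge_owner[e]
                adj[i].add(j); adj[j].add(i)
            else:
                edge_owner[e] = i
    return adj

def self_avoiding_walk(adj, start=0):
    walk, seen = [start], {start}
    def extend():
        if len(walk) == len(adj):
            return True
        for nxt in adj[walk[-1]]:
            if nxt not in seen:
                walk.append(nxt); seen.add(nxt)
                if extend():
                    return True
                walk.pop(); seen.remove(nxt)   # backtrack
        return False
    return walk if extend() else None

# Four triangles forming a fan around vertex 0.
tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]
print(self_avoiding_walk(dual_adjacency(tris)))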
Improving consensus structure by eliminating averaging artifacts
KC, Dukka B
2009-01-01
Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined models from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which could also benefit from our approach. PMID:19267905
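A minimal sketch of the described refinement loop follows, assuming a toy Cα chain: a Metropolis Monte Carlo search pulls a starting structure toward the averaged coordinates through a harmonic pseudo-energy, while a bond-length term keeps the local geometry physical. The constants, move sizes, and the simple bond term are illustrative, not the paper's actual energy function.

import numpy as np

# Toy consensus refinement: harmonic pull toward the averaged structure
# plus a bond-length penalty that removes averaging artifacts.
rng = np.random.default_rng(1)

def pseudo_energy(x, x_avg, bond_len=3.8, k_avg=1.0, k_bond=10.0):
    e_avg = k_avg * np.sum((x - x_avg) ** 2)          # pull toward average
    d = np.linalg.norm(np.diff(x, axis=0), axis=1)    # successive CA-CA distances
    e_bond = k_bond * np.sum((d - bond_len) ** 2)     # keep bonds realistic
    return e_avg + e_bond

def refine(x_start, x_avg, n_steps=20000, step=0.05, beta=2.0):
    x, e = x_start.copy(), pseudo_energy(x_start, x_avg)
    for _ in range(n_steps):
        trial = x.copy()
        trial[rng.integers(len(x))] += rng.normal(scale=step, size=3)
        e_trial = pseudo_energy(trial, x_avg)
        if e_trial < e or rng.random() < np.exp(-beta * (e_trial - e)):
            x, e = trial, e_trial                     # Metropolis acceptance
    return x

x_avg = rng.normal(scale=5.0, size=(10, 3))           # toy "averaged" chain
x0 = x_avg + rng.normal(scale=1.0, size=x_avg.shape)  # 'close-by' start
print("final pseudo-energy:", round(pseudo_energy(refine(x0, x_avg), x_avg), 2))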
The use of prostaglandins in controlling estrous cycle of the ewe: a review.
Fierro, Sergio; Gil, Jorge; Viñoles, Carolina; Olivera-Muzante, Julio
2013-02-01
This review considers the use of prostaglandin F(2α) and its synthetic analogues (PG) for controlling the estrous cycle of the ewe. Aspects such as phase of the estrous cycle, PG analogues, PG doses, ovarian follicle development pattern, CL formation, progesterone synthesis, ovulation rate, sperm transport, embryo quality, and fertility rates after PG administration are reviewed. Furthermore, protocols for estrus synchronization and their success in timed AI programs are discussed. Based on available information, the ovine CL is refractory to PG treatment for up to 2 days after ovulation. All PG analogues are effective when an appropriate dose is given; in that regard, there is a positive association between the dose administered and the proportion of ewes detected in estrus. Follicular response after PG is dependent on the phase of the estrous cycle at treatment. Altered sperm transport and low pregnancy rates are generally reported. However, reports on alteration of the steroidogenic capacity of preovulatory follicles, ovulation rate, embryo quality, recovery rates, and prolificacy are controversial. Although various PG-based protocols can be used for estrus synchronization, a second PG injection improves estrus response when the stage of the estrous cycle at the first injection is unknown. The estrous cycle after PG administration has a normal length. Prostaglandin-based protocols for timed AI have achieved poor reproductive outcomes, but increasing the interval between PG injections might increase pregnancy rates. Attempts to improve reproductive outcomes have been directed at providing a synchronized LH surge: the use of different routes of AI (cervical or intrauterine), different PG doses, and increased intervals between PG injections. Finally, we present our point of view regarding future perspectives on the use of PG in programs of controlled sheep reproduction. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Song, Xiaoling; Zhang, Yue; Wei, Song; Huang, Jie
2013-03-01
The effects of different hydrolysis methods on peptidoglycan (PG) were assessed in terms of their impact on the innate immunity and disease resistance of the Pacific white shrimp, Litopenaeus vannamei. PG derived from Bifidobacterium thermophilum was prepared in the laboratory and processed with lysozyme and protease under varying conditions to produce several different PG preparations. A standard shrimp feed was mixed with 0.05% PG preparations to produce a number of experimental diets for shrimp. The composition, concentration, and molecular weight ranges of the soluble PG were analyzed. Serum phenoloxidase and acid phosphatase activity in the shrimp were determined on days 6-31 of the experiment. The protective activity of the PG preparations was evaluated by exposing shrimp to white spot syndrome virus (WSSV). Data on the composition of the PG preparations indicated that preparations hydrolyzed with lysozyme for 72 h had more low-molecular-weight PG than those treated for 24 h, and that hydrolysis by protease was more efficient than hydrolysis by lysozyme. SDS-PAGE showed changes in the molecular weight of the soluble PG produced by the different hydrolysis methods. Measurements of serum phenoloxidase and acid phosphatase activity levels in the shrimp indicated that the PG preparations processed with enzymes were superior to the preparation which had not undergone hydrolysis in enhancing the activity of the two serum enzymes. In addition, the preparation containing more low-molecular-weight PG enhanced the resistance of the shrimp to WSSV, whereas no increased resistance was observed for preparations containing less low-molecular-weight PG. These findings suggest that the immunity-enhancing activity of PG is related to its molecular weight and that increasing the quantity of low-molecular-weight PG can fortify the effect of immunity enhancement.
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm to estimate the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, it performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error in accordance with the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the algorithm's nature, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
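The simultaneous refinement described above can be sketched as projected gradient descent on the linear mixing model. In the illustration below, the nonnegativity clip and the simple renormalization stand in for the paper's mathematical restrictions (renormalizing is not an exact simplex projection), and all step sizes and shapes are assumptions.

import numpy as np

# Joint endmember/abundance refinement by projected gradient descent on
# ||X - E @ A||^2 under the linear mixing model.
rng = np.random.default_rng(2)

def unmix(X, p, n_iter=3000, lr=0.2):
    """X: (bands, pixels); p: number of endmembers to estimate."""
    b, n = X.shape
    E = X[:, rng.choice(n, p, replace=False)].astype(float)  # init from pixels
    A = np.full((p, n), 1.0 / p)
    for _ in range(n_iter):
        R = E @ A - X                              # residual of the linear model
        E -= lr * (R @ A.T) / n                    # gradient step on endmembers
        A -= lr * (E.T @ R) / b                    # gradient step on abundances
        A = np.clip(A, 0.0, None)                  # abundances nonnegative...
        A /= A.sum(axis=0, keepdims=True) + 1e-12  # ...and renormalized to sum to 1
        E = np.clip(E, 0.0, None)                  # reflectances stay nonnegative
    return E, A

# Synthetic test: 3 endmembers, 50 bands, 500 mixed pixels.
E_true = rng.random((50, 3))
A_true = rng.dirichlet(np.ones(3), size=500).T
X = E_true @ A_true + 0.001 * rng.normal(size=(50, 500))
E, A = unmix(X, p=3)
print("reconstruction RMSE:", np.sqrt(np.mean((E @ A - X) ** 2)))

Each update touches every pixel independently, which is what gives the method its high degree of parallelism.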
Heterogeneity of Loss Aversion in Pathological Gambling.
Takeuchi, Hideaki; Kawada, Ryosaku; Tsurumi, Kosuke; Yokoyama, Naoto; Takemura, Ariyoshi; Murao, Takuro; Murai, Toshiya; Takahashi, Hidehiko
2016-12-01
Pathological gambling (PG) is characterized by continual repeated gambling behavior despite negative consequences. PG is considered to be a disorder of altered decision-making under risk, and behavioral economics tools were utilized by studies on decision-making under risk. At the same time, PG was suggested to be a heterogeneous disorder in terms of personality traits as well as risk attitude. We aimed to examine the heterogeneity of PG in terms of loss aversion, which means that a loss is subjectively felt to be larger than the same amount of gain. Thirty-one male PG subjects and 26 male healthy control (HC) subjects underwent a behavioral economics task for estimation of loss aversion and personality traits assessment. Although loss aversion in PG subjects was not significantly different from that in HC subjects, distributions of loss aversion differed between PG and HC subjects. HC subjects were uniformly classified into three levels (low, middle, high) of loss aversion, whereas PG subjects were mostly classified into the two extremes, and few PG subjects were classified into the middle range. PG subjects with low and high loss aversion showed a significant difference in anxiety, excitement-seeking and craving intensity. Our study suggested that PG was a heterogeneous disorder in terms of loss aversion. This result might be useful for understanding cognitive and neurobiological mechanisms and the establishment of treatment strategies for PG.
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
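A tiny sketch of the refinement-flagging idea follows: cells are flagged for refinement when the sea surface departs from the ocean-at-rest steady state or lies in shallow water near the coast. It is loosely patterned after GeoClaw-style wave tracking rather than copied from it; the thresholds and the flat-list grid are illustrative assumptions.

# Flag cells for refinement: wet cells whose surface deviates from the
# rest state (a passing wave) or that sit in shallow nearshore water.
def flag_cells(eta, bathy, sea_level=0.0, wave_tol=1e-3, coast_depth=100.0):
    """eta: surface elevations; bathy: bed elevations (negative offshore)."""
    flags = []
    for h_surface, b in zip(eta, bathy):
        wet = h_surface - b > 0.0
        perturbed = abs(h_surface - sea_level) > wave_tol   # departs from rest
        nearshore = -b < coast_depth                        # shallow water
        flags.append(wet and (perturbed or nearshore))
    return flags

eta   = [0.0, 0.002, 0.05, 0.0, 0.0]
bathy = [-4000.0, -4000.0, -2000.0, -50.0, 5.0]
print(flag_cells(eta, bathy))   # the deep, quiet cell stays coarse

Flagging against the perturbation from the rest state, rather than against the surface elevation itself, is the essence of the well-balanced requirement: a flat ocean over steep bathymetry must not trigger refinement.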
Topological quantum computation of the Dold-Thom functor
NASA Astrophysics Data System (ADS)
Ospina, Juan
2014-05-01
A possible topological quantum computation of the Dold-Thom functor is presented. The method that will be used is the following: a) certain 1+1-topological quantum field theories valued in symmetric bimonoidal categories are converted into stable homotopical data, using a machinery recently introduced by Elmendorf and Mandell; b) we exploit, in this framework, two recent results (independent of each other) on refinements of Khovanov homology: our refinement into a module over the connective K-theory spectrum and a stronger result by Lipshitz and Sarkar refining Khovanov homology into a stable homotopy type; c) starting from the Khovanov homotopy, the Dold-Thom functor is constructed; d) the full construction is formulated as a topological quantum algorithm. It is conjectured that the Jones polynomial can be described as the analytical index of a certain Dirac operator defined in the context of the Khovanov homotopy using the Dold-Thom functor. As a line for future research, it is interesting to study the corresponding supersymmetric model in which the Khovanov-Dirac operator plays the role of a supercharge.
The GOES-R Product Generation Architecture - Post CDR Update
NASA Astrophysics Data System (ADS)
Dittberner, G.; Kalluri, S.; Weiner, A.
2012-12-01
The GOES-R system will substantially improve the accuracy of information available to users by providing data from significantly enhanced instruments, which will generate an increased number and diversity of products with higher resolution and much shorter relook times. Considerably greater compute and memory resources are necessary to achieve the required latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages the science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory-based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high-performance architecture that can meet the needs of product processing now and as they grow in the future.
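The three SBA roles named above can be made concrete with a small sketch. Only the role names (Executive, Dispatcher, Strategy) come from the description; every class body below, and the toy cloud-mask algorithm, are illustrative assumptions about how such an event-driven service could be wired together.

# Sketch of the SBA roles: a Strategy decides when an algorithm is runnable,
# a Dispatcher routes data events, and an Executive hosts the algorithm.
class Strategy:
    def __init__(self, required_inputs):
        self.required = set(required_inputs)
        self.available = {}

    def offer(self, name, data):
        self.available[name] = data

    def ready(self):
        return self.required <= self.available.keys()

class Executive:
    """Hosts one science algorithm as a service."""
    def __init__(self, algorithm, strategy):
        self.algorithm, self.strategy = algorithm, strategy

    def on_event(self, name, data):            # event from the Data Fabric
        self.strategy.offer(name, data)
        if self.strategy.ready():
            return self.algorithm(**self.strategy.available)

class Dispatcher:
    """Routes data events to every subscribed Executive."""
    def __init__(self):
        self.subscribers = []

    def publish(self, name, data):
        for ex in self.subscribers:
            product = ex.on_event(name, data)
            if product is not None:
                print("product generated:", product)

# Toy product: a 'cloud mask' needing two instrument inputs.
cloud_mask = lambda vis, ir: [v > 0.5 and t < 250 for v, t in zip(vis, ir)]
disp = Dispatcher()
disp.subscribers.append(Executive(cloud_mask, Strategy({"vis", "ir"})))
disp.publish("vis", [0.8, 0.2])
disp.publish("ir", [240, 260])   # second event completes the inputs

Because each Executive only reacts to events and declares its inputs through its Strategy, services can be added or removed without touching the others, which is the plug-and-play property the architecture aims for.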
The GOES-R Product Generation Architecture
NASA Astrophysics Data System (ADS)
Dittberner, G. J.; Kalluri, S.; Hansen, D.; Weiner, A.; Tarpley, A.; Marley, S.
2011-12-01
The GOES-R system will substantially improve users' ability to succeed in their work by providing data from significantly enhanced instruments, with higher resolution, much shorter relook times, and an increased number and diversity of products. The Product Generation architecture is designed to provide the compute and memory resources necessary to achieve the required latency and availability for these products. Over time, new and updated algorithms are expected to be added and old ones removed as science advances and new products are developed. The GOES-R GS architecture is being planned to maintain functionality so that when such changes are implemented, operational product generation will continue without interruption. The primary parts of the PG infrastructure are the Service Based Architecture (SBA) and the Data Fabric (DF). SBA is the middleware that encapsulates and manages the science algorithms that generate products. It is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DF to provide this data communication layer between algorithms. The DF provides an abstract interface over a distributed and persistent multi-layered storage system (e.g., memory-based caching above disk-based storage) and an event management system that allows event-driven algorithm services to know when instrument data are available and where they reside. Together, the SBA and the DF provide a flexible, high-performance architecture that can meet the needs of product processing now and as they grow in the future.
Discovery and Asteroseismological Analysis of the Pulsating sdB Star PG 0014+067
NASA Astrophysics Data System (ADS)
Brassard, P.; Fontaine, G.; Billères, M.; Charpinet, S.; Liebert, James; Saffer, R. A.
2001-12-01
We report the discovery of low-amplitude, short-period, multiperiodic luminosity variations in the hot B subdwarf PG 0014+067. This star was selected as a potential target in the course of our ongoing survey to search for pulsators of the EC 14026 type. Our model atmosphere analysis of the time-averaged Multiple Mirror Telescope (MMT) optical spectrum of PG 0014+067 indicates that this star has Teff=33,550+/-380 K and logg=5.77+/-0.10, which places it right in the middle of the theoretical EC 14026 instability region in the logg-Teff plane. A standard analysis of our Canada-France-Hawaii Telescope (CFHT) light curve reveals the presence of at least 13 distinct harmonic oscillations with periods in the range 80-170 s. Fine structure (closely spaced frequency doublets) is observed in three of these oscillations, and five high-frequency peaks due to nonlinear cross-frequency superpositions of the basic oscillations are also possibly seen in the Fourier spectrum. The largest oscillation has an amplitude ~=0.22% of the mean brightness of the star, making PG 0014+067 the EC 14026 star with the smallest intrinsic amplitudes so far. On the basis of the 13 observed periods, we carry out a detailed asteroseismological analysis of the data, starting with an extensive search in parameter space for a model that could account for the observations. To make this search efficient, objective, and reliable, we use a newly developed period-matching technique based on an optimization algorithm. This search leads to a model that can account remarkably well for the 13 observed periods in the light curve of PG 0014+067. A detailed comparison of the theoretical period spectrum of this optimal model with the distribution of the 13 observed periods leads to the realization that 10 other pulsations, with lower amplitudes than the threshold value used in our standard analysis, are probably present in the light curve of PG 0014+067. Altogether, we tentatively identify 23 distinct pulsation modes in our target star (counting the frequency doublets referred to above as single modes). These are all low-order acoustic modes with adjacent values of k and with l=0, 1, 2, and 3. They define a band of unstable periods, in close agreement with nonadiabatic pulsation theory. Furthermore, the average relative dispersion between the 23 observed periods and the periods of the corresponding 23 theoretical modes of the optimal model is only ~=0.8%, a remarkable achievement by asteroseismological standards. On the basis of our analysis, we infer that the global structural parameters of PG 0014+067 are logg=5.780+/-0.008, Teff=34,500+/-2690 K, M*/Msolar=0.490+/-0.019, log(Menv/M*)=-4.31+/-0.22, and R/Rsolar=0.149+/-0.004. If we combine these estimates of the surface gravity, total mass, and radius with our value of the spectroscopic temperature (which is more accurately evaluated than its asteroseismological counterpart, in direct contrast to the surface gravity), we also find that PG 0014+067 has a luminosity L/Lsolar=25.5+/-2.5, has an absolute visual magnitude MV=4.48+/-0.12, and is located at a distance d=1925+/-195 pc (using V=15.9+/-0.1). If we interpret the fine structure (frequency doublets) observed in three of the 23 pulsations in terms of rotational splitting, we further find that PG 0014+067 rotates with a period of 29.2+/-0.9 hr and has a maximum rotational broadening velocity of Vsini<~6.2+/-0.4 km s^-1.
Based on observations gathered at the Canada-France-Hawaii Telescope, operated by the National Research Council of Canada, the Centre National de la Recherche Scientifique de France, and the University of Hawaii.
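The period-matching figure of merit that drives this kind of asteroseismic search can be illustrated compactly: match each observed period to the nearest model period and score the mean relative dispersion, which an optimizer then minimizes over model parameters. The nearest-neighbor pairing rule and all numbers below are illustrative assumptions, not the authors' optimization algorithm.

# Figure of merit for period matching: mean relative dispersion between
# observed periods and their nearest theoretical counterparts.
def mean_relative_dispersion(observed, model):
    """observed, model: period lists in seconds (model may be longer)."""
    total = 0.0
    for p_obs in observed:
        p_fit = min(model, key=lambda p: abs(p - p_obs))   # nearest model mode
        total += abs(p_fit - p_obs) / p_obs
    return total / len(observed)

obs   = [84.2, 95.7, 119.3, 142.8, 168.1]          # toy observed periods
model = [83.9, 96.4, 101.2, 118.9, 143.5, 167.0]   # toy model period spectrum
print(f"mean relative dispersion: {mean_relative_dispersion(obs, model):.3%}")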
Chai, Hann-Juang; Kiew, Lik-Voon; Chin, Yunni; Norazit, Anwar; Mohd Noor, Suzita; Lo, Yoke-Lin; Looi, Chung-Yeng; Lau, Yeh-Siang; Lim, Tuck-Meng; Wong, Won-Fen; Abdullah, Nor Azizan; Abdul Sattar, Munavvar Zubaid; Johns, Edward J; Chik, Zamri; Chung, Lip-Yong
2017-01-01
Poly-l-glutamic acid (PG) has been used widely as a carrier to deliver anticancer chemotherapeutics. This study evaluates PG as a selective renal drug carrier. 3H-deoxycytidine-labeled PGs (17 or 41 kDa) and 3H-deoxycytidine were administered intravenously to normal rats and streptozotocin-induced diabetic rats. The biodistribution of these compounds was determined over 24 h. Accumulation of PG in normal kidneys was also tracked using 5-(aminoacetamido)fluorescein (fluoresceinyl glycine amide)-labeled PG (PG-AF). To evaluate the potential of PGs in ferrying renal protective anti-oxidative stress compounds, the model drug 4-(2-aminoethyl)benzenesulfonyl fluoride hydrochloride (AEBSF) was conjugated to 41 kDa PG to form PG-AEBSF. PG-AEBSF was then characterized and evaluated for intracellular anti-oxidative stress efficacy (relative to free AEBSF). In the normal rat kidneys, 17 kDa radiolabeled PG (PG-Tr) presented a 7-fold higher, while 41 kDa PG-Tr showed a 15-fold higher, renal accumulation than the free radiolabel at 24 h post injection. The accumulation of PG-AF was primarily found in the renal tubular tissues at 2 and 6 h after an intravenous administration. In the diabetic (oxidative stress-induced) kidneys, 41 kDa PG-Tr showed the greatest renal accumulation, 8-fold higher than the free compound at 24 h post dose. Meanwhile, the synthesized PG-AEBSF was found to inhibit intracellular nicotinamide adenine dinucleotide phosphate oxidase (a reactive oxygen species generator) at an efficiency that is comparable to that of free AEBSF. This indicates the preservation of the anti-oxidative stress properties of AEBSF in the conjugated state. The favorable accumulation property of 41 kDa PG in normal and oxidative stress-induced kidneys, along with its capability to conserve the pharmacological properties of the conjugated renal protective drugs, supports its role as a potential renal targeting drug carrier.
Figueroa, Melania; Alderman, Stephen; Garvin, David F.; Pfender, William F.
2013-01-01
Puccinia graminis causes stem rust, a serious disease of cereals and forage grasses. Important formae speciales of P. graminis and their typical hosts are P. graminis f. sp. tritici (Pg-tr) in wheat and barley, P. graminis f. sp. lolii (Pg-lo) in perennial ryegrass and tall fescue, and P. graminis f. sp. phlei-pratensis (Pg-pp) in timothy grass. Brachypodium distachyon is an emerging genetic model to study fungal disease resistance in cereals and temperate grasses. We characterized the P. graminis-Brachypodium pathosystem to evaluate its potential for investigating incompatibility and non-host resistance to P. graminis. Inoculation of eight Brachypodium inbred lines with Pg-tr, Pg-lo or Pg-pp resulted in sporulating lesions later accompanied by necrosis. Histological analysis of early infection events in one Brachypodium inbred line (Bd1-1) indicated that Pg-lo and Pg-pp were markedly more efficient than Pg-tr at establishing a biotrophic interaction. Formation of appressoria was completed (60–70% of germinated spores) by 12 h post-inoculation (hpi) under dark and wet conditions, and after 4 h of subsequent light exposure fungal penetration structures (penetration peg, substomatal vesicle and primary infection hyphae) had developed. Brachypodium Bd1-1 exhibited pre-haustorial resistance to Pg-tr, i.e. infection usually stopped at appressorial formation. By 68 hpi, only 0.3% and 0.7% of the Pg-tr urediniospores developed haustoria and colonies, respectively. In contrast, development of advanced infection structures by Pg-lo and Pg-pp was significantly more common; however, Brachypodium displayed post-haustorial resistance to these isolates. By 68 hpi, the percentage of urediniospores that developed only a haustorium mother cell or haustorium reached 8% for Pg-lo and 5% for Pg-pp, and the formation of colonies reached 14% and 13%, respectively. We conclude that Brachypodium is an apt grass model to study the molecular and genetic components of incompatibility and non-host resistance to P. graminis. PMID:23441218
Chai, Hann-Juang; Kiew, Lik-Voon; Chin, Yunni; Norazit, Anwar; Mohd Noor, Suzita; Lo, Yoke-Lin; Looi, Chung-Yeng; Lau, Yeh-Siang; Lim, Tuck-Meng; Wong, Won-Fen; Abdullah, Nor Azizan; Abdul Sattar, Munavvar Zubaid; Johns, Edward J; Chik, Zamri; Chung, Lip-Yong
2017-01-01
Background and purpose: Poly-l-glutamic acid (PG) has been used widely as a carrier to deliver anticancer chemotherapeutics. This study evaluates PG as a selective renal drug carrier. Experimental approach: 3H-deoxycytidine-labeled PGs (17 or 41 kDa) and 3H-deoxycytidine were administered intravenously to normal rats and streptozotocin-induced diabetic rats. The biodistribution of these compounds was determined over 24 h. Accumulation of PG in normal kidneys was also tracked using 5-(aminoacetamido)fluorescein (fluoresceinyl glycine amide)-labeled PG (PG-AF). To evaluate the potential of PGs in ferrying renal protective anti-oxidative stress compounds, the model drug 4-(2-aminoethyl)benzenesulfonyl fluoride hydrochloride (AEBSF) was conjugated to 41 kDa PG to form PG-AEBSF. PG-AEBSF was then characterized and evaluated for intracellular anti-oxidative stress efficacy (relative to free AEBSF). Results: In the normal rat kidneys, 17 kDa radiolabeled PG (PG-Tr) presented a 7-fold higher, while 41 kDa PG-Tr showed a 15-fold higher, renal accumulation than the free radiolabel at 24 h post injection. The accumulation of PG-AF was primarily found in the renal tubular tissues at 2 and 6 h after an intravenous administration. In the diabetic (oxidative stress-induced) kidneys, 41 kDa PG-Tr showed the greatest renal accumulation, 8-fold higher than the free compound at 24 h post dose. Meanwhile, the synthesized PG-AEBSF was found to inhibit intracellular nicotinamide adenine dinucleotide phosphate oxidase (a reactive oxygen species generator) at an efficiency that is comparable to that of free AEBSF. This indicates the preservation of the anti-oxidative stress properties of AEBSF in the conjugated state. Conclusion/Implications: The favorable accumulation property of 41 kDa PG in normal and oxidative stress-induced kidneys, along with its capability to conserve the pharmacological properties of the conjugated renal protective drugs, supports its role as a potential renal targeting drug carrier. PMID:28144140
Han, Xiaozhe; LaRosa, Karen B; Kawai, Toshihisa; Taubman, Martin A
2014-01-03
Porphyromonas gingivalis (Pg) is one of a constellation of oral organisms associated with human chronic periodontitis. While adaptive immunity to periodontal pathogen proteins has been investigated and is an important component of periodontal bone resorption, the effect of periodontal pathogen DNA in eliciting systemic and mucosal antibody and modulating immune responses has not been investigated. Rowett rats were locally injected with whole genomic Pg DNA in alum. Escherichia coli (Ec) genomic DNA, Fusobacterium nucleatum (Fn) genomic DNA, and saline/alum injected rats served as controls. After various time points, serum IgG and salivary IgA antibody to Ec, Fn or Pg were detected by ELISA. Serum and salivary antibody reactions with Pg surface antigens were determined by Western blot analyses, and the specific antigen was identified by mass spectrometry. Effects of genomic DNA immunization on Pg bacterial colonization and experimental periodontal bone resorption were also evaluated. Sera from Pg DNA, Ec DNA and Fn DNA-injected rats did not react with Ec or Fn bacteria. Serum IgG antibody levels to Pg and Pg surface extracts were significantly higher in animals immunized with Pg DNA as compared to the control groups. Rats injected with Pg DNA demonstrated a strong serum IgG and salivary IgA antibody reaction solely to Pg fimbrillin (41 kDa), the major protein component of Pg fimbriae. In the Pg DNA-immunized group, the numbers of Pg bacteria in the oral cavity and the extent of periodontal bone resorption were significantly reduced after Pg infection. This study suggests that infected hosts may select specific genes from whole genomic DNA of the periodontal pathogen for transcription and presentation. The results indicate that the unique gene selected can initiate a host protective immune response to the parent bacterium. Copyright © 2013 Elsevier Ltd. All rights reserved.
Ben-Simhon, Zohar; Judeinstein, Sylvie; Nadler-Hassar, Talia; Trainin, Taly; Bar-Ya'akov, Irit; Borochov-Neori, Hamutal; Holland, Doron
2011-11-01
Anthocyanins are the major pigments responsible for the pomegranate (Punica granatum L.) fruit skin color. The high variability in fruit external color in pomegranate cultivars reflects variations in anthocyanin composition. To identify genes involved in the regulation of anthocyanin biosynthesis pathway in the pomegranate fruit skin we have isolated, expressed and characterized the pomegranate homologue of the Arabidopsis thaliana TRANSPARENT TESTA GLABRA1 (TTG1), encoding a WD40-repeat protein. The TTG1 protein is a regulator of anthocyanins and proanthocyanidins (PAs) biosynthesis in Arabidopsis, and acts by the formation of a transcriptional regulatory complex with two other regulatory proteins: bHLH and MYB. Our results reveal that the pomegranate gene, designated PgWD40, recovered the anthocyanin, PAs, trichome and seed coat mucilage phenotype in Arabidopsis ttg1 mutant. PgWD40 expression and anthocyanin composition in the skin were analyzed during pomegranate fruit development, in two accessions that differ in skin color intensity and timing of appearance. The results indicate high positive correlation between the total cyanidin derivatives quantity (red pigments) and the expression level of PgWD40. Furthermore, strong correlation was found between the steady state levels of PgWD40 transcripts and the transcripts of pomegranate homologues of the structural genes PgDFR and PgLDOX. PgWD40, PgDFR and PgLDOX expression also correlated with the expression of pomegranate homologues of the regulatory genes PgAn1 (bHLH) and PgAn2 (MYB). On the basis of our results we propose that PgWD40 is involved in the regulation of anthocyanin biosynthesis during pomegranate fruit development and that expression of PgWD40, PgAn1 and PgAn2 in the pomegranate fruit skin is required to regulate the expression of downstream structural genes involved in the anthocyanin biosynthesis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Yu-Sheng, E-mail: dissertlin@yahoo.com.tw; Yang, Cheng-Hsu, E-mail: yangch@adm.cgmh.org.tw; Chu, Chi-Ming, E-mail: chuchiming@ndmctsgh.edu.tw
Purpose: The severity of residual stenosis (RS) sometimes cannot be accurately measured by angiography during central vein intervention. This study evaluated the role of pullback pressure measurement during central vein stenosis (CVS) intervention. Methods: A retrospective review enrolled 94 consecutive dialysis patients who underwent CVS interventions but not stenting procedures. Patients were classified into 2 groups by either angiography or pressure gradient (PG) criteria, respectively. Groups divided by angiographic result were a successful group (RS ≤30%) and an acceptable group (50% ≥ RS > 30%), while groups divided by PG were a low PG group (PG ≤5 mmHg) and a high PG group (PG >5 mmHg). Baseline characteristics and 12-month patency rates between the groups were analyzed. Results: The angiography results placed 63 patients in the successful group and 31 patients in the acceptable group. The patency rate at 12 months was not statistically different (P = 0.167). When the patients were reclassified by the postintervention pullback PG, the difference in the patency rate at 12 months was significant (P = 0.048). Further analysis in groups redivided by different combinations of RS and PG criteria identified significant differences in the group with both RS ≤30% and PG ≤5 mmHg compared with those with either RS >30% (P = 0.047) or PG >5 mmHg (P = 0.027). In addition, there was a significant difference between those with both RS ≤30% and PG ≤5 mmHg compared with those with both RS >30% and PG >5 mmHg (P = 0.027). Conclusion: Postintervention PG can better predict long-term outcomes after angioplasty for CVS in nonstented dialysis patients than angiography.
Gawroński, Wojciech; Sobiecka, Joanna
2015-01-01
Medical care in disabled sports is crucial both as prophylaxis and as ongoing medical intervention. The aim of this paper was to present changes in the quality of medical care over the consecutive Paralympic Games (PG). The study encompassed 31 paralympians: Turin (11), Vancouver (12), and Sochi (8), competing in cross-country skiing, alpine skiing, biathlon, and snowboarding. The first, questionnaire-based, part of the study was conducted in Poland before the PG. The athletes assessed the quality of care provided by physicians, physiologists, dieticians, and physiotherapists, as well as their cooperation with the massage therapist and the psychologist. The other part of the study concerned the athletes’ health before leaving for the PG, as well as their diseases and injuries during the PG. The quality of medical care was poor before the 2006 PG, but satisfactory before the subsequent PG. Only a few athletes made use of psychological support, assessing it as poor before the 2006 PG and satisfactory before the 2010 and 2014 PG. The athletes’ health condition was good during all PG. The health status of cross-country skiers was confirmed by a medical fitness certificate before all PG, while that of alpine skiers only before the 2014 PG. There were no serious diseases; training injuries precluded two athletes from participation. The quality of medical care before the PG was poor but became satisfactory during the actual PG. The resulting ad hoc pattern deviates from the accepted standards in medical care in disabled sports. PMID:26834868
Production of Phloroglucinol, a Platform Chemical, in Arabidopsis using a Bacterial Gene.
Abdel-Ghany, Salah E; Day, Irene; Heuberger, Adam L; Broeckling, Corey D; Reddy, Anireddy S N
2016-12-07
Phloroglucinol (1,3,5-trihydroxybenzene; PG) and its derivatives are phenolic compounds that are used for various industrial applications. Current methods to synthesize PG are not sustainable due to the requirement for carbon-based precursors and the co-production of toxic byproducts. Here, we describe a more sustainable production of PG using plants expressing a native bacterial or a codon-optimized synthetic PhlD targeted to either the cytosol or chloroplasts. Transgenic lines were analyzed for the production of PG using gas and liquid chromatography coupled to mass spectrometry. Phloroglucinol was produced in all transgenic lines, and the line with the highest PhlD transcript level showed the greatest accumulation of PG. Over 80% of the produced PG was glycosylated to phlorin. Arabidopsis leaves have the machinery to glycosylate PG to form phlorin, which can be hydrolyzed enzymatically to produce PG. Furthermore, the metabolic profile of plants expressing PhlD in either the cytosol or chloroplasts was altered. Our results provide evidence that plants can be engineered to produce PG using a bacterial gene. Phytoproduction of PG using a bacterial gene paves the way for further genetic manipulations to enhance the level of PG, with implications for the commercial production of this important platform chemical in plants.
Ocak, Tarık; Erdem, Alim; Duran, Arif; Tekelioğlu, Ümit Yaşar; Öztürk, Serkan; Ayhan, Suzi Selim; Özlü, Mehmet Fatih; Tosun, Mehmet; Koçoğlu, Hasan; Yazıcı, Mehmet
2013-01-01
OBJECTIVE: This prospective study investigated the diagnostic significance of the N-terminal pro-brain natriuretic peptide (NT-proBNP) and troponin I in emergency department patients presenting with palpitations. METHODS: Two groups of patients with palpitations but without documented supraventricular tachycardia were compared: a group with supraventricular tachycardia (n = 49) and a control group (n = 47). Both groups were diagnosed using electrophysiological studies during the study period. Blood samples were obtained from all of the patients to determine the NT-proBNP and troponin I levels within the first hour following arrival in the emergency department. RESULTS: The mean NT-proBNP levels were 207.74±197.11 pg/mL in the supraventricular tachyarrhythmia group and 39.99±32.83 pg/mL in the control group (p<0.001). To predict supraventricular tachycardia, the optimum NT-proBNP threshold was 61.15 pg/mL, as defined by the receiver operating characteristic (ROC) curve, with a significant area under the ROC curve of 0.920 (95% CI, 0.86-0.97, p<0.001). The NT-proBNP cut-off for diagnosing supraventricular tachycardia had 81.6% sensitivity and 91.5% specificity. Supraventricular tachycardia was significantly more frequent in the patients with NT-proBNP levels ≥61.15 pg/mL (n = 44, 90.9%, p<0.001). The mean troponin I levels were 0.17±0.56 and 0.01±0.06 pg/mL for the patients with and without supraventricular tachycardia, respectively (p<0.05). Of the 96 patients, 21 (21.87%) had troponin I levels ≥0.01: 2 (4.25%) in the control group and 19 (38.77%) in the supraventricular tachycardia group (p<0.001). CONCLUSION: Troponin I and, in particular, NT-proBNP were helpful for differentiating supraventricular tachycardia from non-supraventricular tachycardia palpitations. Further randomized, large, multicenter trials are needed to define the benefit and diagnostic role of NT-proBNP and troponin I in the management algorithm of patients presenting with palpitations in emergency departments. PMID:23778331
Penza, Veronica; Ortiz, Jesús; Mattos, Leonardo S; Forgione, Antonello; De Momi, Elena
2016-02-01
Single-incision laparoscopic surgery decreases postoperative infections but introduces limitations in the surgeon's maneuverability and in the surgical field of view. This work aims at enhancing intra-operative surgical visualization by exploiting 3D information about the surgical site. An interactive guidance system is proposed wherein the pose of preoperative tissue models is updated online. A critical process is the intra-operative acquisition of tissue surfaces, which can be achieved using stereoscopic imaging and 3D reconstruction techniques. This work contributes to this process by proposing new methods for improved dense 3D reconstruction of soft tissues, which allows more accurate deformation identification and facilitates the registration process. Two methods for soft tissue 3D reconstruction are proposed: Method 1 follows the traditional approach of the block matching algorithm; Method 2 performs a nonparametric modified census transform to be more robust to illumination variation. The simple linear iterative clustering (SLIC) super-pixel algorithm is exploited for disparity refinement by filling holes in the disparity images. The methods were validated using two video datasets from the Hamlyn Centre, achieving an accuracy of 2.95 and 1.66 mm, respectively. A comparison with ground-truth data demonstrated that the disparity refinement procedure (1) increases the number of reconstructed points by up to 43% and (2) does not significantly affect the accuracy of the 3D reconstructions. Both methods give results that compare favorably with the state-of-the-art methods. Computational time constrains their applicability in real time, but it could be greatly reduced by a GPU implementation.
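A minimal sketch of the census-style matching idea behind Method 2 follows, assuming a rectified stereo pair; the window size, disparity range, and winner-takes-all selection are illustrative choices, and the SLIC-based hole filling is omitted.

```python
import numpy as np

def modified_census(img, win=3):
    # Modified census transform: one bit per window pixel, set when that
    # pixel exceeds the local window mean. Comparing against the mean
    # rather than the centre pixel adds robustness to illumination change.
    r = win // 2
    pad = np.pad(img.astype(np.float32), r, mode='edge')
    h, w = img.shape
    mean = np.zeros((h, w), np.float32)
    for dy in range(win):
        for dx in range(win):
            mean += pad[dy:dy + h, dx:dx + w]
    mean /= win * win
    planes = [(pad[dy:dy + h, dx:dx + w] > mean).astype(np.uint8)
              for dy in range(win) for dx in range(win)]
    return np.stack(planes, axis=-1)          # (h, w, win*win) bit planes

def disparity_map(left, right, max_disp=64, win=3):
    # Winner-takes-all disparity from the Hamming distance between
    # census descriptors along each scanline (rectified pair assumed).
    cl, cr = modified_census(left, win), modified_census(right, win)
    h, w, _ = cl.shape
    cost = np.full((h, w, max_disp), np.iinfo(np.int32).max, np.int32)
    for d in range(max_disp):
        cost[:, d:, d] = (cl[:, d:, :] != cr[:, :w - d, :]).sum(-1)
    return cost.argmin(-1)                    # per-pixel disparity estimate
```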
Mechanistic insights into porous graphene membranes for helium separation and hydrogen purification
NASA Astrophysics Data System (ADS)
Wei, Shuxian; Zhou, Sainan; Wu, Zhonghua; Wang, Maohuai; Wang, Zhaojie; Guo, Wenyue; Lu, Xiaoqing
2018-05-01
Porous graphene (PG) and the nitrogen-substituted PG monolayers 3N-PG and 6N-PG were designed as effective membranes for the separation of He and H2 over Ne, Ar, N2, CO, and CH4 by using density functional theory. Results showed that PG and 3N-PG exhibited suitable pore sizes and relatively high stabilities for He and H2 separation. PG and 3N-PG membranes also presented excellent He and H2 selectivities over Ne, Ar, N2, CO and CH4 over a wide temperature range. The 6N-PG membrane showed exceptionally high permeances for the studied gases, especially He and H2, which could remarkably improve the separation efficiency of He and H2. Analyses of the most stable adsorption configurations and maximum adsorption energies indicated weak van der Waals interactions between the gases and the three PG-based membranes. Microscopic permeation process analyses based on the minimum energy pathway, energy profiles, and electron density isosurfaces elucidated the remarkable selectivities of He over Ne/CO/N2/Ar/CH4 and H2 over CO/N2/CH4 and the high permeances of He and H2 passing through the three PG-based membranes. This work not only highlighted the potential use of the three PG-based membranes for He separation and H2 purification but also provided a superior alternative strategy to design and screen membrane materials for gas separation.
NASA Astrophysics Data System (ADS)
Lestari, D.; Bustamam, A.; Novianti, T.; Ardaneswari, G.
2017-07-01
A DNA sequence can be defined as a succession of letters representing the order of nucleotides within DNA, using the four base codes adenine (A), guanine (G), cytosine (C), and thymine (T). The precise code of a sequence is determined using DNA sequencing methods and technologies, which have been developed since the 1970s and have now matured into advanced, high-throughput technologies. DNA sequencing has greatly accelerated biological and medical research and discovery. In some cases, however, sequencing produces ambiguous results in which it is difficult to determine whether a given position is A, T, G, or C. To address this problem, we introduce an alternative representation of DNA codes, the quaternion Q = (PA, PT, PG, PC), where PA, PT, PG, PC are the probabilities that bases A, T, G, C appear at that position and PA + PT + PG + PC = 1. Using quaternion representations, we construct an improved scoring matrix for global sequence alignment by applying a dot-product method; this scoring matrix yields higher-quality match and mismatch scores between two DNA base codes. In our implementation, we applied the Needleman-Wunsch global sequence alignment algorithm in Octave to analyze a target sequence containing ambiguous data. The subject sequences are DNA sequences of the Streptococcus pneumoniae family obtained from GenBank, while the target DNA sequence was obtained from our collaborator's database. We found that the quaternion representation improves the sequence alignment score, and we conclude that the target DNA sequence has maximum similarity with Streptococcus pneumoniae.
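The dot-product scoring idea drops directly into the standard Needleman-Wunsch recursion; a minimal sketch follows, in Python rather than the paper's Octave, with illustrative match/mismatch/gap weights and a small set of ambiguity codes.

```python
import numpy as np

# Quaternion code: each base is a probability vector (PA, PT, PG, PC).
# Unambiguous bases are unit vectors; ambiguity codes spread the mass
# (the IUPAC entries below are illustrative assumptions).
Q = {'A': (1, 0, 0, 0), 'T': (0, 1, 0, 0), 'G': (0, 0, 1, 0), 'C': (0, 0, 0, 1),
     'R': (.5, 0, .5, 0),               # A or G
     'Y': (0, .5, 0, .5),               # T or C
     'N': (.25, .25, .25, .25)}         # fully ambiguous

def score(a, b, match=2.0, mismatch=-1.0):
    # Dot product of the two probability vectors = probability that the
    # bases agree; blend the match/mismatch scores with that probability.
    p = float(np.dot(Q[a], Q[b]))
    return p * match + (1.0 - p) * mismatch

def needleman_wunsch(s, t, gap=-2.0):
    # Classic global alignment over the quaternion-based scoring.
    F = np.zeros((len(s) + 1, len(t) + 1))
    F[:, 0] = gap * np.arange(len(s) + 1)
    F[0, :] = gap * np.arange(len(t) + 1)
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            F[i, j] = max(F[i - 1, j - 1] + score(s[i - 1], t[j - 1]),
                          F[i - 1, j] + gap, F[i, j - 1] + gap)
    return F[-1, -1]

print(needleman_wunsch("ATGNC", "ATGGC"))   # the ambiguous N scores partially
```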
A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding
NASA Astrophysics Data System (ADS)
Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae
2017-12-01
High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it finds a good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement in hardware. This paper proposes a new integer motion estimation algorithm designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimation of all prediction unit (PU) partitions. The algorithm consists of three phases: zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). All redundant search points are then removed prior to the estimation of the motion costs, and the best search points are selected for all PUs. Experimental results show that, compared to the conventional TZ search algorithm, the proposed algorithm decreases the Bjøntegaard delta bitrate (BD-BR) by 0.84% and reduces the computational complexity by 54.54%.
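The per-phase deduplication step can be sketched as follows; the point generators, the sub-block SAD function, and the per-PU aggregation are illustrative stand-ins for this summary, not the paper's interfaces.

```python
def concurrent_tz_phase(pus, candidate_points, subblock_sad, aggregate):
    # One phase (zonal, raster, or refinement) of the concurrent search:
    # gather every PU's candidate points, drop duplicates, and cost each
    # distinct point once before the per-PU selection.
    points = set()
    for pu in pus:
        points |= set(candidate_points(pu))
    # subblock_sad(pt) stands in for small-block SADs at displacement pt
    # that every PU partition of the CU can reuse; aggregate(pu, sads)
    # sums the blocks covered by that PU.
    sads = {pt: subblock_sad(pt) for pt in points}
    return {pu: min(candidate_points(pu),
                    key=lambda pt: aggregate(pu, sads[pt]))
            for pu in pus}
```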
Paddock, Ethan; Hohenadel, Maximilian G; Piaggi, Paolo; Vijayakumar, Pavithra; Hanson, Robert L; Knowler, William C; Krakoff, Jonathan; Chang, Douglas C
2017-09-01
Elevated 2-h plasma glucose concentration (2 h-PG) during a 75 g OGTT predicts the development of type 2 diabetes mellitus. However, 1-h plasma glucose concentration (1 h-PG) is associated with insulin secretion and may be a better predictor of type 2 diabetes. We aimed to investigate the association between 1 h-PG and 2 h-PG using gold-standard methods for measuring insulin secretion and action. We also compared 1 h-PG and 2 h-PG as predictors of type 2 diabetes mellitus. This analysis included adult volunteers without diabetes, predominantly Native Americans of Southwestern heritage, who were involved in a longitudinal epidemiological study from 1965 to 2007, with a baseline OGTT that included measurement of 1 h-PG. Group 1 (n = 716) underwent an IVGTT and hyperinsulinaemic-euglycaemic clamp for measurement of acute insulin response (AIR) and insulin-stimulated glucose disposal (M), respectively. Some members of Group 1 (n = 490 of 716) and members of a second, larger group (Group 2; n = 1946) were followed up to assess the development of type 2 diabetes (median 9.0 and 12.8 years of follow-up, respectively). Compared with 2 h-PG (r = -0.281), 1 h-PG (r = -0.384) was more closely associated with AIR, whereas, compared with 1 h-PG (r = -0.340), 2 h-PG (r = -0.408) was more closely associated with M. Measures of 1 h-PG and 2 h-PG had similar abilities to predict type 2 diabetes, which did not change when both were included in the model. A 1 h-PG cut-off of 9.3 mmol/l provided levels of sensitivity and specificity similar to a 2 h-PG cut-off of 7.8 mmol/l; the latter is used to define impaired glucose tolerance, a recognised predictor of type 2 diabetes mellitus. The 1 h-PG was associated with important physiological predictors of type 2 diabetes and was as effective as 2 h-PG for predicting type 2 diabetes mellitus. The 1 h-PG is, therefore, an alternative method of identifying individuals with an elevated risk of type 2 diabetes mellitus.
NASA Astrophysics Data System (ADS)
Falocchi, Marco; Giovannini, Lorenzo; Franceschi, Massimiliano de; Zardi, Dino
2018-05-01
We present a refinement of the recursive digital filter proposed by McMillen (Boundary-Layer Meteorol 43:231-245, 1988) for separating surface-layer turbulence from low-frequency fluctuations affecting the mean flow, especially over complex terrain. A straightforward application of the filter causes both an amplitude attenuation and a forward phase shift in the filtered signal. As a consequence, turbulence fluctuations, evaluated as the difference between the original series and the filtered one, as well as higher-order moments calculated from them, may be affected by serious inaccuracies. The new algorithm (i) produces a rigorous zero-phase filter, (ii) restores the amplitude of the low-frequency signal, and (iii) corrects all filter-induced signal distortions.
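A standard way to obtain the zero-phase property is to run the recursive filter forward and then backward over the series, which cancels the phase shift exactly; the sketch below illustrates this on a one-pole low-pass of the McMillen type, with the time constant and sampling interval as illustrative values. Note that the forward-backward pass alone does not restore the low-frequency amplitude, which the paper's algorithm corrects separately (scipy.signal.filtfilt provides an equivalent off-the-shelf forward-backward pass).

```python
import numpy as np

def mcmillen_lowpass(x, tau, dt):
    # One-pole recursive low-pass, y[n] = a*y[n-1] + (1-a)*x[n].
    # A single forward pass attenuates and phase-shifts the signal.
    a = np.exp(-dt / tau)
    y = np.empty(len(x))
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = a * y[n - 1] + (1.0 - a) * x[n]
    return y

def zero_phase_lowpass(x, tau, dt):
    # Forward-backward application cancels the phase shift exactly;
    # amplitude restoration, handled by the refined algorithm, is omitted.
    fwd = mcmillen_lowpass(x, tau, dt)
    return mcmillen_lowpass(fwd[::-1], tau, dt)[::-1]

# Turbulent fluctuation = original series minus the low-frequency part.
dt = 0.05                                   # 20 Hz sonic anemometer record
t = np.arange(0, 600, dt)
x = np.sin(2 * np.pi * t / 300) + 0.1 * np.random.randn(t.size)
fluct = x - zero_phase_lowpass(x, tau=200.0, dt=dt)
```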
Pouillart, P; Madgelenat, H; Jouve, M; Palangie, T; Garcia-Giralt, E; Bretaudeau, B; Polijcak, M; Asselain, B
1982-01-01
A total of 102 patients with disseminated breast cancer entered this retrospective study. An estrogen receptor (ER) assay was performed in 91 patients and a progesterone receptor (PgR) assay in 90; 44 per cent of the patients were considered ER+ and 29 per cent PgR+; 56 per cent were considered ER- PgR-. The objective response rate to cytotoxic chemotherapy after 4 months of treatment was 66 per cent for ER-, 73 per cent for ER+, 67 per cent for PgR- and 74 per cent for PgR+. However, the mean duration of response was significantly shorter for ER- patients, and no difference appeared between PgR+ and PgR- patients. The actuarial survival curves demonstrated a favorable prognostic significance of ER+ as compared to ER- (p = 0.03), but the difference was slightly more significant for PgR+ as compared to PgR- (p = 0.008). The prognostic significance of PgR in patients with advanced breast cancer treated with cytotoxic chemotherapy does not appear to be related to the sensitivity to this treatment.
Mikecz, Katalin; Glant, Tibor T.; Markovics, Adrienn; Rosenthal, Kenneth S.; Kurko, Julia; Carambula, Roy E.; Cress, Steve; Steiner, Harold L.; Zimmerman, Daniel H.
2017-01-01
Rheumatoid arthritis (RA) is an autoimmune joint disease maintained by aberrant immune responses involving CD4+ T helper (Th)1 and Th17 cells. In this study, we tested the therapeutic efficacy of Ligand Epitope Antigen Presentation System (LEAPS™) vaccines in two Th1 cell-driven mouse models of RA, cartilage proteoglycan (PG)-induced arthritis (PGIA) and PG G1-domain-induced arthritis (GIA). The immunodominant PG peptide PG70 was attached to a DerG or J immune cell binding peptide, and the DerG-PG70 and J-PG70 LEAPS vaccines were administered to the mice after the onset of PGIA or GIA symptoms. As indicated by significant decreases in visual and histopathological scores of arthritis, the DerG-PG70 vaccine inhibited disease progression in both PGIA and GIA, while the J-PG70 vaccine was ineffective. Splenic CD4+ cells from DerG-PG70-treated mice were diminished in Th1 and Th17 populations but enriched in Th2 and regulatory T (Treg) cells. In vitro spleen cell-secreted and serum cytokines from DerG-PG70-treated mice demonstrated a shift from a pro-inflammatory to an anti-inflammatory/regulatory profile. DerG-PG70 peptide tetramers preferentially bound to CD4+ T-cells of GIA spleen cells. We conclude that the DerG-PG70 vaccine (now designated CEL-4000) exerts its therapeutic effect by interacting with CD4+ cells, which results in an antigen-specific down-modulation of pathogenic T-cell responses in both the PGIA and GIA models of RA. Future studies will need to determine the potential of LEAPS vaccination to provide disease suppression in patients with RA. PMID:28583308
Altimeter measurements for the determination of the Earth's gravity field
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Schutz, B. E.; Shum, C. K.
1986-01-01
Progress in the following areas is described: refining altimeter and altimeter crossover measurement models for precise orbit determination and for the solution of the earth's gravity field; performing experiments using altimeter data for the improvement of precise satellite ephemerides; and analyzing an optimal relative data weighting algorithm to combine various data types in the solution of the gravity field.
NASA Astrophysics Data System (ADS)
Sukhanov, AY
2017-02-01
We present an approximation of the Voigt contour for several parameter intervals. For the interval with y less than 0.02 and |x| less than 1.6, a simple formula yields a relative error of less than 0.1%; for the other intervals, Hermite quadrature is suggested.
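For the quadrature branch, the Voigt function K(x, y) = (y/π) ∫ exp(-t²)/((x-t)² + y²) dt carries exactly the Gauss-Hermite weight, so an n-point rule applies directly; the sketch below is a generic illustration (the node count and the wofz cross-check are our choices, not the paper's).

```python
import numpy as np
from scipy.special import wofz              # reference: Re[w(x+iy)] = K(x, y)

def voigt_hermite(x, y, n=40):
    # K(x, y) = (y/pi) * integral exp(-t^2) / ((x-t)^2 + y^2) dt.
    # The weight exp(-t^2) is exactly the Gauss-Hermite weight, so the
    # integral collapses to a weighted sum over the Hermite nodes.
    t, w = np.polynomial.hermite.hermgauss(n)
    return y / np.pi * np.sum(w / ((x - t) ** 2 + y ** 2))

# Accuracy degrades as y -> 0 (near-singular integrand at t = x), which
# is why a separate closed-form approximation is used in that regime.
x, y = 1.0, 0.5
print(voigt_hermite(x, y), wofz(x + 1j * y).real)   # should agree closely
```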
More About Vector Adaptive/Predictive Coding Of Speech
NASA Technical Reports Server (NTRS)
Jedrey, Thomas C.; Gersho, Allen
1992-01-01
Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.
2D photonic crystal complete band gap search using a cyclic cellular automaton refination
NASA Astrophysics Data System (ADS)
González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.
2014-11-01
We present a refinement method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided by a heuristic evolutionary method called differential evolution (DE), which performs an ordered search for full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is posed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in the array. A block-iterative frequency-domain method was used to compute the FPBGs of a PC, when present. DE has proved useful in combinatorial problems, and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of this algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring prior information about suboptimal configurations, and we made a statistical study of how the structure is affected by disorder at its borders, compared with a previous work that uses a genetic algorithm.
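One common way to run DE over such a binary array is to evolve continuous genomes in [0, 1] and threshold them for evaluation; the sketch below takes this route, which is an assumption here rather than the paper's exact operator, and the CCA crystallization pass is reduced to a placeholder comment.

```python
import numpy as np

def binary_de(fitness, n_bits, pop_size=40, F=0.7, CR=0.9, gens=200, seed=0):
    # DE on continuous genomes in [0, 1]; each genome is thresholded at
    # 0.5 to a binary array (dielectric present/absent) for evaluation.
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, n_bits))
    fit = np.array([fitness(g > 0.5) for g in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0.0, 1.0)
            trial = np.where(rng.random(n_bits) < CR, mutant, pop[i])
            f = fitness(trial > 0.5)          # e.g. bandgap-to-midgap ratio
            if f > fit[i]:                    # greedy one-to-one selection
                pop[i], fit[i] = trial, f
        # (the CCA crystallization/smoothing pass would be applied here)
    best = int(fit.argmax())
    return pop[best] > 0.5, fit[best]
```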
Multiscale Simulation of Gas Film Lubrication During Liquid Droplet Collision
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Khare, Prashant; Ma, Dongjun; Yang, Vigor
2012-02-01
Droplet collision plays a fundamental role in the dense spray combustion process. When two droplets approach each other, a gas film forms between them, and the pressure generated within the film resists the motion of the approaching droplets. This mechanism is fluid-film lubrication, which occurs when opposing bearing surfaces are completely separated by a fluid film. The lubrication flow in the gas film decides the collision outcome: coalescence or bouncing. The present study focuses on the gas film drainage process over a wide range of Weber numbers during equal- and unequal-sized droplet collisions. The formulation is based on the complete set of conservation equations for both the liquid and the surrounding gas phase. An improved volume-of-fluid technique, augmented by an adaptive mesh refinement algorithm, is used to track liquid/gas interfaces. A unique thickness-based refinement algorithm, based on the topology of the interfacial flow, is developed and implemented to efficiently resolve the multiscale problem. The grid size at the interface is up to O(10^-4) of the droplet size, with a maximum resolution of 0.015 μm. An advanced ray-tracing visualization technique is used to gain direct insight into the detailed physics. Theories are established by analyzing the characteristics of shape change and flow evolution.
NASA Astrophysics Data System (ADS)
Behera, Kishore Kumar; Pal, Snehanshu
2018-03-01
This paper describes a new approach towards optimum utilisation of the ferrochrome added during stainless steel making in an AOD converter. The objective of the optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm, trained on 100 datasets with input and output variables such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and the weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values must fall within certain ranges. The analysis of the Pareto fronts is observed to generate a set of feasible optimal solutions between the two conflicting objectives, providing an effective guideline for better ferrochrome utilisation. It is found that, beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. Single-variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.
NASA Astrophysics Data System (ADS)
Heister, Timo; Dannberg, Juliane; Gassmöller, Rene; Bangerth, Wolfgang
2017-08-01
Computations have helped elucidate the dynamics of Earth's mantle for several decades already. The numerical methods that underlie these simulations have greatly evolved within this time span, and today include dynamically changing and adaptively refined meshes, sophisticated and efficient solvers, and parallelization to large clusters of computers. At the same time, many of the methods - discussed in detail in a previous paper in this series - were developed and tested primarily using model problems that lack many of the complexities that are common to the realistic models our community wants to solve today. With several years of experience solving complex and realistic models, we here revisit some of the algorithm designs of the earlier paper and discuss the incorporation of more complex physics. In particular, we re-consider time stepping and mesh refinement algorithms, evaluate approaches to incorporate compressibility, and discuss dealing with strongly varying material coefficients, latent heat, and how to track chemical compositions and heterogeneities. Taken together and implemented in a high-performance, massively parallel code, the techniques discussed in this paper then allow for high resolution, 3-D, compressible, global mantle convection simulations with phase transitions, strongly temperature dependent viscosity and realistic material properties based on mineral physics data.
NASA Astrophysics Data System (ADS)
Bouaynaya, N.; Schonfeld, Dan
2005-03-01
Many real-world applications in computer vision and multimedia, such as augmented reality and environmental imaging, require an elastic, accurate contour around a tracked object. In the first part of the paper, we introduce a novel tracking algorithm that combines a motion estimation technique with the Bayesian importance sampling framework. We use Adaptive Block Matching (ABM) as the motion estimation technique and construct the proposal density from the estimated motion vector. The resulting algorithm requires a small number of particles for efficient tracking. The tracking adapts to different categories of motion even with poor a priori knowledge of the system dynamics; in particular, off-line learning is not needed. A parametric representation of the object is used for tracking purposes. In the second part of the paper, we refine the tracking output from a parametric sample to an elastic contour around the object, using a 1D active contour model based on a dynamic programming scheme. To improve the convergence of the active contour, we perform the optimization over a set of randomly perturbed initial conditions. Our experiments are applied to head tracking, and we report promising tracking results in complex environments.
Discriminative object tracking via sparse representation and online dictionary learning.
Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua
2014-04-01
We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. The algorithm consists of two parts: local sparse coding with an online-updated discriminative dictionary for tracking (SOD part), and keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes using an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes information about both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to variations of the foreground and background during tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD part. With the help of sparse representation and the online-updated discriminative dictionary, the KP part is more robust than traditional methods at rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.
The GOES-R Proving Ground: 2012 Update
NASA Astrophysics Data System (ADS)
Gurka, J.; Goodman, S. J.; Schmit, T.; Demaria, M.; Mostek, A.; Siewert, C.; Reed, B.
2011-12-01
The Geostationary Operational Environmental Satellite (GOES)-R will provide a great leap forward in observing capabilities, but will also offer a significant challenge to ensure that users are ready to exploit the vast improvements in spatial, spectral, and temporal resolutions. To ensure user readiness, forecasters and other users must have access to prototype advanced products well before launch, and have the opportunity to provide feedback to product developers and computing and communications managers. The operational assessment is critical to ensure that the end products and NOAA's computing and communications systems truly meet their needs in a rapidly evolving environment. The GOES-R Proving Ground (PG) engages the National Weather Service (NWS) forecast, watch and warning community and other agency users in pre-operational demonstrations of select products with GOES-R attributes (enhanced spectral, spatial, and temporal resolution). In the PG, developers and forecasters test and apply algorithms for new GOES-R satellite data and products using proxy and simulated data sets, including observations from current and future satellite instruments (MODIS, AIRS, IASI, SEVIRI, NAST-I, NPP/VIIRS/CrIS, LIS), lightning networks, and computer-simulated products. The complete list of products to be evaluated in 2012 will be determined after evaluating results from experiments in 2011 at the NWS' Storm Prediction Center, National Hurricane Center, Aviation Weather Center, Ocean Prediction Center, Hydrometeorological Prediction Center, and from the six NWS regions. In 2012 and beyond, the PG will test and validate data processing and distribution systems and the applications of these products in operational settings. Additionally, developers and forecasters will test and apply display techniques and decision aid tools in operational environments. The PG is both a recipient and a source of training. Training materials are developed using various distance training tools in close collaboration with the NWS Training Division and its partners at COMET, CIMSS, CIRA and other offices. The training is used to prepare the participants of PG activities, such as the Hazardous Weather Testbed's Spring Experiment and other locations listed above. A key component of the Proving Ground is two-way interaction, where researchers introduce new products and techniques to forecasters and other scientists. The forecasters and other users then provide feedback and ideas for improved or new products and how best to incorporate these into NOAA's integrated observing and analysis operations. This presentation will provide examples of GOES-R proxy products and forecaster evaluations from experiments at the Storm Prediction Center (SPC), the National Hurricane Center (NHC), the Aviation Weather Center (AWC), and the Alaska Region.
Embryos aggregation improves development and imprinting gene expression in mouse parthenogenesis.
Bai, Guang-Yu; Song, Si-Hang; Wang, Zhen-Dong; Shan, Zhi-Yan; Sun, Rui-Zhen; Liu, Chun-Jia; Wu, Yan-Shuang; Li, Tong; Lei, Lei
2016-04-01
Mouse parthenogenetic embryonic stem cells (PgESCs) can be applied to the study of imprinting genes and used in cell therapy. Our previous study found that stem cells established by aggregation of two parthenogenetic embryos at the 8-cell stage (a2 PgESCs) had a higher establishment efficiency than PgESCs, and that paternally expressed imprinting genes were observably upregulated. We therefore proposed that increasing the number of parthenogenetic embryos in aggregation may improve the development of the parthenogenetic mouse and the imprinting gene expression of PgESCs. To verify this hypothesis, we aggregated four embryos together at the 4-cell stage and cultured them to the blastocyst stage (4aPgB). qPCR detection showed that the expression of the imprinting genes Igf2, Mest, Snrpn, Igf2r, H19 and Gtl2 in 4aPgB was more similar to that of fertilized blastocysts (fB) than in 2aPgB (derived from aggregation of two 4-cell stage parthenogenetic embryos) or PgB (single parthenogenetic blastocysts). Post-implantation development of 4aPgB extended to 11 days of gestation. The establishment efficiency of GFP-a4 PgESCs derived from GFP-4aPgB was 62.5%. Moreover, expression of the imprinting genes Igf2, Mest and Snrpn was notably downregulated and approached the level in fertilized embryonic stem cells (fESCs). In addition, we obtained a 13.5-day fetus totally derived from GFP-a4 PgESCs with germline contribution by 8-cell under-zona-pellucida (ZP) injection. In conclusion, four-embryo aggregation improves parthenogenetic development and compensates imprinting gene expression in PgESCs, implying that a4 PgESCs could serve as a better scientific model in translational medicine and imprinting gene studies. © 2016 Japanese Society of Developmental Biologists.
Laube, Beth L.; Afshar-Mohajer, Nima; Koehler, Kirsten; Chen, Gang; Lazarus, Philip; Collaco, Joseph M.; McGrath-Morrow, Sharon A.
2017-01-01
Objective To determine the effect of an acute (1 week) and chronic (3 weeks) exposure to E-cigarette (E-cig) emissions on mucociliary clearance (MCC) in murine lungs. Methods C57BL/6 male mice (age 10.5±2.4 weeks) were exposed for 20 min/day to E-cigarette aerosol generated by a Joyetech 510-T® E-cig containing either 0% nicotine (N)/propylene glycol (PG) for 1 week (n = 6) or 3 weeks (n = 9), or 2.4% N/PG for 1 week (n = 6) or 3 weeks (n = 9), followed by measurement of MCC. Control mice (n = 15) were exposed to neither PG alone nor N/PG. MCC was assessed by gamma camera following aspiration of technetium-99m aerosol and was expressed as the amount of radioactivity removed from both lungs over 6 hours (MCC6hrs). Venous blood was assayed for cotinine levels in control mice and in mice exposed for 3 weeks to PG alone and N/PG. Results MCC6hrs in control mice and in mice acutely exposed to PG alone and N/PG was similar, averaging (±1 standard deviation) 8.6±5.2%, 7.5±2.8% and 11.2±5.9%, respectively. In contrast, chronic exposure to PG alone stimulated MCC6hrs (17.2±8.0%), and this stimulation was significantly blunted following chronic exposure to N/PG (8.7±4.6%) (p < .05). Serum cotinine levels were <0.5 ng/ml in control mice and in mice exposed to PG alone, whereas N/PG-exposed mice averaged 14.6±12.0 ng/ml. Conclusions In this murine model, a chronic, daily, 20-min exposure to N/PG, but not an acute exposure, slowed MCC compared to exposure to PG alone and led to systemic absorption of nicotine. PMID:28651446
Effect of simulated acid rain on fluorine mobility and the bacterial community of phosphogypsum.
Wang, Mei; Tang, Ya; Anderson, Christopher W N; Jeyakumar, Paramsothy; Yang, Jinyan
2018-06-01
Contamination of soil and water with fluorine (F) leached from phosphogypsum (PG) stacks is a global environmental issue. Millions of tons of PG are produced each year as a by-product of fertilizer manufacture, and in China, weathering is exacerbated by acid rain. In this work, column leaching experiments using simulated acid rain were run to evaluate the mobility of F and the impact of weathering on native bacterial community composition in PG. After a simulated summer rainfall, 2.42-3.05 wt% of the total F content of PG was leached, and the F concentration in the leachate was above the quality standard for surface water and groundwater in China. Acid rain had no significant effect on the movement of F in PG. A higher concentration of F was observed at the bottom than at the top section of the PG columns, suggesting mobility and reprecipitation of F. Throughout the simulation, the PG was environmentally safe according to TCLP testing. The dominant bacteria in PG were from the Enterococcus and Bacillus genera. The bacterial community in PG leached by simulated acid rain (pH 3.03) was more abundant than that leached at pH 6.88. Information on F mobility and the bacterial community in PG under conditions of simulated rain is relevant to the management of environmental risk in stockpiled PG waste.
NASA Astrophysics Data System (ADS)
Kuassivi; Bonanno, A.; Ferlet, R.
2005-11-01
We report the detection of pulsations in the far-ultraviolet (FUV) light curves of PG 1219+534, PG 1605+072 and PG 1613+426 obtained with the Far Ultraviolet Spectroscopic Explorer (FUSE) in time-tagged (TTAG) mode. Exposures of the order of a few ksec were sufficient to observe the main frequencies of PG 1219+534 and PG 1605+072 and to confirm the detection of a pulsation mode at the surface of PG 1613+426, as reported from the ground. For the first time we derive time-resolved spectroscopic FUSE data of an sdB pulsator (PG 1605+072) and comment on its line profile variation (lpv) diagram. We observe the phase shift between maximum luminosity and maximum radius to be consistent with the model of an adiabatic pulsator. We also present evidence that the line broadening previously reported is not caused by rotation but is rather an observational bias due to the rapid Doppler shift of the lines, with an amplitude of 17 km s-1. Thus our observations do not support the previous claim that PG 1605+072 is (or will evolve into) an unusually fast rotating degenerate dwarf. These results demonstrate the asteroseismological potential of the FUSE satellite, which should be viewed as another powerful means of investigating stellar pulsations, along with the MOST and COROT missions.
Gastric mucosal status in populations with a low prevalence of Helicobacter pylori in Indonesia.
Miftahussurur, Muhammad; Nusi, Iswan Abbas; Akil, Fardah; Syam, Ari Fahrial; Wibawa, I Dewa Nyoman; Rezkitha, Yudith Annisa Ayu; Maimunah, Ummi; Subsomwong, Phawinee; Parewangi, Muhammad Luthfi; Mariadi, I Ketut; Adi, Pangestu; Uchida, Tomohisa; Purbayu, Herry; Sugihartono, Titong; Waskito, Langgeng Agung; Hidayati, Hanik Badriyah; Lusida, Maria Inge; Yamaoka, Yoshio
2017-01-01
In Indonesia, endoscopy services are limited and studies of gastric mucosal status using pepsinogens (PGs) are rare. We measured PG levels and calculated the best cutoff and predictive values for discriminating gastric mucosal status among ethnic groups in Indonesia. We collected gastric biopsy specimens and sera from 233 patients with dyspepsia living on three Indonesian islands. When ≥5.5 U/mL was used as the best cutoff value of Helicobacter pylori antibody titer, 8.6% (20 of 233) were positive for H. pylori infection. PG I and II levels were higher among smokers, and PG I was higher in alcohol drinkers than in their counterparts. The PG II level was significantly higher, whereas PG I/II ratios were lower, in H. pylori-positive than in H. pylori-negative patients. PG I/II ratios showed a significant inverse correlation with the inflammation and atrophy scores of the antrum. The best cutoff values of PG I/II were 4.05 and 3.55 for discriminating chronic and atrophic gastritis, respectively. PG I, PG II, and PG I/II ratios were significantly lower in subjects from Bangli than in those from Makassar and Surabaya, concordant with the ABC group distribution; however, group D (H. pylori negative/PG positive) was lowest in subjects from Bangli. In conclusion, validation of indirect methods is necessary before their application. We confirmed that the serum PG level is a useful biomarker for determining chronic gastritis, but it has only modest sensitivity for atrophic gastritis in Indonesia. The ABC method should be used with caution in areas with a low prevalence of H. pylori.
Dubald, M; Barakate, A; Mandaron, P; Mache, R
1993-11-01
Exopolygalacturonase (exoPG) is a pectin-degrading enzyme abundant in maize pollen. Using immunochemistry and in situ hybridization, it is shown that in addition to its presence in pollen, exoPG is also present in sporophytic tissues, such as the tapetum and mesophyll cells. The enzyme is located in the cytoplasm of pollen and of some mesophyll cells. In other mesophyll cells, the tapetum and the pollen tube, exoPG is located in the cell wall. Measurement of enzyme activity shows that exoPG is ubiquitous in the vegetative organs. These results suggest a general function for exoPG in cell wall construction or degradation. ExoPG is encoded by a closely related multigene family. The regulation of the expression of one of the exoPG genes was analyzed in transgenic tobacco. Reporter GUS activity was detected in anthers, seeds and stems but not in leaves or roots of transgenic plants. This strongly suggests that the ubiquitous presence of exoPG in maize is the result of the expression of different exoPG genes.
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudipta; Deb, Debasis
2016-07-01
Digital image correlation (DIC) is a technique developed for monitoring surface deformation/displacement of an object under loading conditions. Here, the method is refined to handle discontinuities on the surface of the sample. A damage zone refers to a surface area that has fractured and opened in the course of loading. In this study, an algorithm is presented to automatically detect multiple damage zones in the deformed image. The algorithm identifies the pixels located inside these zones and eliminates them from the FEM-DIC process. The proposed algorithm is successfully applied to several damaged samples to estimate the displacement fields of an object under loading conditions. This study shows that the resulting displacement fields represent the damage conditions reasonably well, compared with the regular FEM-DIC technique that does not account for damage zones.
Comparing MODIS C6 'Deep Blue' and 'Dark Target' Aerosol Data
NASA Technical Reports Server (NTRS)
Hsu, N. C.; Sayer, A. M.; Bettenhausen, C.; Lee, J.; Levy, R. C.; Mattoo, S.; Munchak, L. A.; Kleidman, R.
2014-01-01
The MODIS Collection 6 Atmospheres product suite includes refined versions of both the 'Deep Blue' (DB) and 'Dark Target' (DT) aerosol algorithms, with the DB dataset now expanded to include coverage over vegetated land surfaces. This means that, over much of the global land surface, users will have both DB and DT data to choose from. A 'merged' dataset is also provided, primarily for visualization purposes, which takes retrievals from either or both algorithms based on regional and seasonal climatologies of the normalized difference vegetation index (NDVI). This poster presents comparisons of these two C6 aerosol algorithms, focusing on AOD at 550 nm derived from MODIS Aqua measurements, with each other and with Aerosol Robotic Network (AERONET) data, with the intent to facilitate user decisions about the suitability of the two datasets for their desired applications.
Estimation of electric fields and current from ground-based magnetometer data
NASA Technical Reports Server (NTRS)
Kamide, Y.; Richmond, A. D.
1984-01-01
Recent advances in numerical algorithms for estimating ionospheric electric fields and currents from groundbased magnetometer data are reviewed and evaluated. Tests of the adequacy of one such algorithm in reproducing large-scale patterns of electrodynamic parameters in the high-latitude ionosphere have yielded generally positive results, at least for some simple cases. Some encouraging advances in producing realistic conductivity models, which are a critical input, are pointed out. When the algorithms are applied to extensive data sets, such as the ones from meridian chain magnetometer networks during the IMS, together with refined conductivity models, unique information on instantaneous electric field and current patterns can be obtained. Examples of electric potentials, ionospheric currents, field-aligned currents, and Joule heating distributions derived from ground magnetic data are presented. Possible directions for future improvements are also pointed out.
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
NASA Astrophysics Data System (ADS)
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2018-06-01
Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer-aided design (CAD) file to STL, it results in geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerancing requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered are the modified butterfly subdivision technique, the Loop subdivision technique, and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is most suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
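Of the three schemes, the midpoint rule is the simplest to illustrate; the sketch below performs one pass of general triangular midpoint subdivision on an indexed triangle mesh. The data layout and the omission of back-projection onto the CAD surface are our simplifications.

```python
import numpy as np

def midpoint_subdivide(vertices, faces):
    # One pass of general triangular midpoint subdivision: each facet is
    # split into four by inserting edge midpoints, which are shared
    # between neighbouring facets via the lookup table below.
    verts = [tuple(map(float, v)) for v in vertices]
    index = {v: i for i, v in enumerate(verts)}

    def midpoint(i, j):
        m = tuple((np.asarray(verts[i]) + np.asarray(verts[j])) / 2.0)
        if m not in index:
            index[m] = len(verts)
            verts.append(m)
        return index[m]

    refined = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        refined += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), refined

# One level quadruples the facet count of the STL tessellation.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(midpoint_subdivide(verts, [(0, 1, 2)])[1])
```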
Development and Application of a Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Fulton, Christopher E.; Maul, William A.; Sowers, T. Shane
2007-01-01
This paper describes the development and initial demonstration of a Portable Health Algorithms Test (PHALT) System that is being developed by researchers at the NASA Glenn Research Center (GRC). The PHALT System was conceived as a means of evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT System allows systems health management algorithms to be developed in a graphical programming environment; to be tested and refined using system simulation or test data playback; and finally, to be evaluated in a real-time hardware-in-the-loop mode with a live test article. In this paper, PHALT System development is described through the presentation of a functional architecture, followed by the selection and integration of hardware and software. Also described is an initial real-time hardware-in-the-loop demonstration that used sensor data qualification algorithms to diagnose and isolate simulated sensor failures in a prototype Power Distribution Unit test-bed. Success of the initial demonstration is highlighted by the correct detection of all sensor failures and the absence of any real-time constraint violations.
A new memetic algorithm for mitigating tandem automated guided vehicle system partitioning problem
NASA Astrophysics Data System (ADS)
Pourrahimian, Parinaz
2017-11-01
An Automated Guided Vehicle System (AGVS) provides the flexibility and automation demanded by a Flexible Manufacturing System (FMS). However, with the growing concern for responsible management of resource use, it is crucial to manage these vehicles efficiently in order to reduce travel time and control conflicts and congestion. This paper presents the development of a new Memetic Algorithm (MA) for optimizing the partitioning problem of a tandem AGVS. MAs employ a Genetic Algorithm (GA) as a global search and apply a local search to bring the solutions to a local optimum. A new Tabu Search (TS) has been developed and combined with a GA to refine the individuals newly generated by the GA. The aim of the proposed algorithm is to minimize the maximum workload of the system. Finally, the performance of the proposed algorithm is evaluated using Matlab. This study also compares the objective function of the proposed MA with that of a GA. The results showed that the TS, as a local search, significantly improves the objective function of the GA for different system sizes with large and small numbers of zones, by 1.26 on average.
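A skeleton of this GA-plus-tabu-search structure is sketched below; the solution encoding, crossover, mutation, and neighbourhood operators are problem-specific stand-ins (a zone partition might be encoded, say, as a tuple of station-to-zone assignments), and fitness is the maximum zone workload to be minimised.

```python
import random

def memetic(fitness, random_solution, crossover, mutate, neighbours,
            pop_size=30, gens=100, tabu_iters=20, tabu_len=7):
    # GA for global search + a short tabu search polishing every child.
    # fitness (e.g. maximum zone workload) is minimised; solutions are
    # assumed hashable (e.g. tuples of station-to-zone assignments).
    def tabu_search(sol):
        best = cur = sol
        tabu = []
        for _ in range(tabu_iters):
            cand = [n for n in neighbours(cur) if n not in tabu]
            if not cand:
                break
            cur = min(cand, key=fitness)       # best admissible neighbour
            tabu = (tabu + [cur])[-tabu_len:]  # fixed-length tabu list
            if fitness(cur) < fitness(best):
                best = cur
        return best

    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + [tabu_search(c) for c in children]
    return min(pop, key=fitness)
```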
Multiresolution strategies for the numerical solution of optimal control problems
NASA Astrophysics Data System (ADS)
Jain, Sachin
There exist many numerical techniques for solving optimal control problems, but less work has been done on making these algorithms faster and more robust. The main motivation of this work is to solve optimal control problems accurately, quickly and efficiently. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid. This requires a large amount of computational resources, both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using fewer computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining the mesh uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed, which is shown to outperform similar data compression schemes. Specifically, we have shown that the proposed approach results in fewer grid points than a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples demonstrate the stability and robustness of the proposed algorithm, which adapts dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to the region where the solution exhibits sharp features and fewer points to the region where the solution is smooth. Thereby, the computational time and memory usage are reduced significantly, while maintaining an accuracy equivalent to the one obtained using a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a nonlinear programming (NLP) problem that is solved using standard NLP codes. The novelty of the proposed approach hinges on the automatic calculation of a suitable, nonuniform grid over which the NLP problem is solved, which tends to increase numerical efficiency and robustness. Control and/or state constraints are handled with ease, and without any additional computational complexity. The proposed algorithm is based on a simple and intuitive method to balance several conflicting objectives, such as accuracy of the solution, convergence, and speed of the computations. The benefits of the proposed algorithm over uniform grid implementations are demonstrated with the help of several nontrivial examples. Furthermore, two sequential multiresolution trajectory optimization algorithms for solving problems with moving targets and/or dynamically changing environments have been developed. For such problems, high accuracy is desirable only in the immediate future, yet the ultimate mission objectives should be accommodated as well. Intelligent trajectory generation for such situations is thus enabled by introducing the idea of multigrid temporal resolution, solving the associated trajectory optimization problem on a non-uniform grid across time that is adapted to (i) the immediate future and (ii) potential discontinuities in the state and control variables.
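The driving idea, keeping a fine-grid point only where interpolation from the coarser level fails, can be sketched in a few lines; the dyadic layout, tolerance, and example profile below are illustrative, not the thesis's actual compression scheme.

```python
import numpy as np

def multiresolution_grid(f, a, b, levels=8, tol=1e-3):
    # Keep a dyadic point only where linear interpolation from the next
    # coarser level misses f by more than tol, so points cluster around
    # kinks, switchings and discontinuities.
    grid = {a, b, 0.5 * (a + b)}
    for lev in range(2, levels + 1):
        n = 2 ** lev
        h = (b - a) / n
        for i in range(1, n, 2):              # points new on this level
            x = a + i * h
            predicted = 0.5 * (f(x - h) + f(x + h))
            if abs(f(x) - predicted) > tol:
                grid.add(x)
    return np.array(sorted(grid))

# A control-like profile with a kink at t = 0.3 attracts points there.
pts = multiresolution_grid(lambda t: np.sign(t - 0.3) * (t - 0.3) ** 2,
                           0.0, 1.0)
print(len(pts), pts[:5])
```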
NASA Astrophysics Data System (ADS)
Papoutsakis, Andreas; Sazhin, Sergei S.; Begg, Steven; Danaila, Ionut; Luddens, Francky
2018-06-01
We present an Adaptive Mesh Refinement (AMR) method suitable for hybrid unstructured meshes that allows for local refinement and de-refinement of the computational grid during the evolution of the flow. The adaptive implementation of the Discontinuous Galerkin (DG) method introduced in this work (ForestDG) is based on a topological representation of the computational mesh by a hierarchical structure consisting of oct-, quad- and binary trees. Adaptive mesh refinement (h-refinement) enables us to increase the spatial resolution of the computational mesh in the vicinity of points of interest such as interfaces, geometrical features, or flow discontinuities. A local increase in the expansion order (p-refinement) in areas of high strain rate or vorticity magnitude results in an increased order of accuracy in the region of shear layers and vortices. A graph of unitarian trees, representing hexahedral, prismatic and tetrahedral elements, is used for the representation of the initial domain. The ancestral elements of the mesh can be split into self-similar elements, allowing each tree to grow branches to an arbitrary level of refinement. The connectivity of the elements, their genealogy and their partitioning are described by linked lists of pointers. An explicit calculation of these relations, presented in this paper, facilitates the on-the-fly splitting, merging and repartitioning of the computational mesh by rearranging the links of each node of the tree with minimal computational overhead. The modal basis used in the DG implementation facilitates the mapping of the fluxes across non-conformal faces. The AMR methodology is presented and assessed using a series of inviscid and viscous test cases. It is also used for modelling the interaction between droplets and the carrier phase in a two-phase flow, applied to the analysis of a spray injected into a chamber of quiescent air using the Eulerian-Lagrangian approach. This enables us to refine the computational mesh in the vicinity of the droplet parcels and accurately resolve the coupling between the two phases.
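The forest-of-trees bookkeeping can be illustrated with a minimal node class; the field names, fixed child count, and leaf traversal below convey the idea only and are not ForestDG's actual data structures (which also carry connectivity and partition links).

```python
class Cell:
    # Minimal node of a refinement tree in a forest-of-trees layout:
    # each root is an ancestral mesh element; refining grows self-similar
    # children, coarsening prunes them, and the active computational
    # cells are the leaves.
    def __init__(self, level=0, parent=None):
        self.level, self.parent, self.children = level, parent, []

    def refine(self, n_children=8):           # 8 for an octree cell
        if not self.children:
            self.children = [Cell(self.level + 1, self)
                             for _ in range(n_children)]

    def coarsen(self):                        # merge children back
        self.children = []

    def leaves(self):
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

forest = [Cell() for _ in range(4)]           # four ancestral elements
forest[0].refine()                            # h-refine near a feature
print(sum(1 for root in forest for _ in root.leaves()))   # 11 active cells
```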
Liu, Tung-Kuan; Chen, Yeh-Peng; Hou, Zone-Yuan; Wang, Chao-Chih; Chou, Jyh-Horng
2014-06-01
Evaluating and treating stress can substantially benefit people with health problems. Currently, mental stress is evaluated using medical questionnaires. However, the accuracy of this evaluation method is questionable because of variations caused by factors such as cultural differences and individual subjectivity. Measuring biomedical signals is an effective method for estimating mental stress that enables this problem to be overcome; however, the relationship between levels of mental stress and biomedical signals remains poorly understood. A refined rough set algorithm is proposed to determine this relationship; the algorithm combines rough set theory with a hybrid Taguchi-genetic algorithm and is called RS-HTGA. Two parameters were used for evaluating the performance of the proposed RS-HTGA method. A dataset obtained from a practice clinic comprising 362 cases (196 male, 166 female) was adopted to evaluate the performance of the proposed approach. The empirical results indicate that the proposed method can achieve acceptable accuracy in medical practice. Furthermore, the proposed method was successfully used to identify the relationship between mental stress levels and biomedical signals. In addition, a comparison between the RS-HTGA and a support vector machine (SVM) method indicated that both methods yield good results. The total averages for sensitivity, specificity, and precision were greater than 96%, indicating that both algorithms produce highly accurate results, but a substantial difference in discrimination existed for people with Phase 0 stress: the SVM algorithm achieved 89%, whereas the RS-HTGA achieved 96%. Therefore, the RS-HTGA is superior to the SVM algorithm. The kappa test results for both algorithms were greater than 0.936, indicating high accuracy and consistency. The areas under the receiver operating characteristic curve for both the RS-HTGA and the SVM method were greater than 0.77, indicating good discrimination capability. In this study, crucial attributes in stress evaluation were successfully recognized using biomedical signals, thereby enabling the conservation of medical resources and elucidating the mapping relationship between levels of mental stress and candidate attributes. In addition, we developed a prototype system for mental stress evaluation that can be used to provide benefits in medical practice. Copyright © 2014. Published by Elsevier B.V.
A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.
Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe
2012-04-01
We present a new framework for multimedia content analysis and retrieval that consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in the multimedia feature space and the historical RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated the framework's advantages in precision, robustness, scalability, and computational efficiency.
NASA Astrophysics Data System (ADS)
Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik
2001-05-01
Emphysema is characterized by destruction of lung tissue with the development of small or large holes within the lung. These areas have Hounsfield values (HU) approaching -1000. It is possible to detect and quantify such areas using a simple density mask technique. However, the edge-enhancing reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge-enhancing reconstruction algorithm. The next step is to compute the antero-posterior density gradient caused by gravity and correct for it. In a third step, motion artefacts are corrected using normalized averaging, thresholding and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without. Our algorithm improved the separation of the two groups considerably. It needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
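For reference, the plain density-mask computation that the artefact corrections feed into can be written in a few lines; the -950 HU threshold and the synthetic data are illustrative choices, not the paper's protocol.

```python
import numpy as np

def density_mask_index(hu, lung_mask, threshold=-950):
    # Fraction of lung voxels below the threshold ("emphysema index").
    lung = hu[lung_mask]
    return float((lung < threshold).mean())

# Synthetic slice: normal lung around -860 HU plus an emphysematous
# region near -1000 HU (values and geometry are illustrative only).
rng = np.random.default_rng(0)
hu = rng.normal(-860.0, 40.0, (512, 512))
hu[:100, :130] = -1000.0
lung_mask = np.ones(hu.shape, dtype=bool)
print(f"emphysema index: {density_mask_index(hu, lung_mask):.1%}")
```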
NASA Astrophysics Data System (ADS)
Arai, Tatsuya; Lee, Kichang; Stenger, Michael B.; Platts, Steven H.; Meck, Janice V.; Cohen, Richard J.
2011-04-01
Orthostatic intolerance (OI) is a significant challenge for astronauts after long-duration spaceflight. Depending on flight duration, 20-80% of astronauts suffer from post-flight OI, which is associated with reduced vascular resistance. This paper introduces a novel algorithm for continuously monitoring changes in total peripheral resistance (TPR) by processing the peripheral arterial blood pressure (ABP) waveform. To validate it, we applied the algorithm to pre-flight ABP data previously recorded from twelve astronauts ten days before launch. The TPR changes calculated by our algorithm were compared with the TPR values estimated using cardiac output/heart rate before and after phenylephrine administration. The astronauts in the post-flight presyncopal group had smaller pre-flight TPR changes (1.66 times) than those in the non-presyncopal group (2.15 times). The trend in TPR changes calculated with our algorithm agreed with the TPR trend calculated using measured cardiac output in the previous study. Further data collection and algorithm refinement are needed for pre-flight detection of OI and for continuous TPR monitoring by analysis of peripheral arterial blood pressure.
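The abstract does not spell out the estimator, but a common lumped-parameter surrogate illustrates the idea: under a two-element Windkessel model the diastolic ABP decays with time constant τ = R·C, so with roughly constant compliance the relative change in τ tracks the relative change in TPR. The sketch below fits τ to a diastolic segment; it is a textbook stand-in, not the authors' algorithm.

```python
import numpy as np

def diastolic_tau(t, p):
    # Fit p(t) ~ p0 * exp(-t/tau) over a diastolic segment; under a
    # two-element Windkessel model tau = R*C, so with roughly constant
    # compliance C the relative change in tau tracks the change in TPR.
    slope, _ = np.polyfit(t, np.log(p), 1)    # log-linear fit
    return -1.0 / slope

# Two synthetic diastolic decays: vasoconstriction slows the decay.
t = np.linspace(0.0, 0.5, 100)
before = 80.0 * np.exp(-t / 1.2)
after = 80.0 * np.exp(-t / 2.4)
print(diastolic_tau(t, after) / diastolic_tau(t, before))   # ~2.0x TPR
```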
Age at onset of DSM-IV pathological gambling in a non-treatment sample: Early- versus later-onset.
Black, Donald W; Shaw, Martha; Coryell, William; Crowe, Raymond; McCormick, Brett; Allen, Jeff
2015-07-01
Pathological gambling (PG) is a prevalent and impairing public health problem. In this study we assessed age at onset in men and women with PG and compared the demographic and clinical picture of early- vs. later-onset individuals. We also compared age at onset in PG subjects and their first-degree relatives with PG. Subjects with DSM-IV PG were recruited during the conduct of two non-treatment clinical studies. Subjects were evaluated with structured interviews and validated questionnaires. Early-onset was defined as PG starting prior to age 33 years. Age at onset of PG in the 255 subjects ranged from 8 to 80 years with a mean (SD) of 34.0 (15.3) years. Men had an earlier onset than women. 84% of all subjects with PG had developed the disorder by age 50 years. Early-onset subjects were more likely to be male, to prefer action games, and to have substance use disorders, antisocial personality disorder, attention deficit/hyperactivity disorder, trait impulsiveness, and social anxiety disorder. Later-onset was more common in women and was associated with a preference for slots and a history of sexual abuse. Age at onset of PG is bimodal and differs for men and women. Early-onset PG and later-onset PG have important demographic and clinical differences. The implications of the findings are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
Object recognition and localization from 3D point clouds by maximum-likelihood estimation
NASA Astrophysics Data System (ADS)
Dantanarayana, Harshana G.; Huntley, Jonathan M.
2017-08-01
We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike 'interest point'-based algorithms, which normally discard such data. The method has negligible memory requirements compared to the 6D Hough transform, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem and subsequent pose refinement, through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2-degree-of-freedom (d.f.) example is given, followed by a full 6-d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected-fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
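A minimal sketch of the likelihood idea for a 2-d.f. case, here taken as a pure 2-D translation scored under an isotropic Gaussian dispersion; both of those choices are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def pose_log_likelihood(scene, model, tx, ty, sigma=0.1):
    """Score a candidate pose (a 2-D translation) by the Gaussian
    log-likelihood of each scene point's distance to its nearest model
    point. A broad sigma suits initial recognition and a narrow one suits
    pose refinement, sketching the paper's single unified approach."""
    shifted = scene + np.array([tx, ty])
    d = np.linalg.norm(shifted[:, None, :] - model[None, :, :], axis=2).min(axis=1)
    return -0.5 * np.sum((d / sigma) ** 2)

rng = np.random.default_rng(0)
model = rng.random((100, 2))
scene = model[:40] - np.array([0.3, 0.15]) + rng.normal(0, 0.01, (40, 2))
grid = [(tx, ty) for tx in np.linspace(-0.5, 0.5, 21)
                 for ty in np.linspace(-0.5, 0.5, 21)]
best = max(grid, key=lambda p: pose_log_likelihood(scene, model, *p))
print("recovered translation:", best)   # close to (0.3, 0.15)
```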
de Bock, Martin; Dart, Julie; Roy, Anirban; Davey, Raymond; Soon, Wayne; Berthold, Carolyn; Retterath, Adam; Grosman, Benyamin; Kurtz, Natalie; Davis, Elizabeth; Jones, Timothy
2017-01-01
Hypoglycemia remains a risk for closed-loop insulin delivery, particularly following exercise or if the glucose sensor is inaccurate. The aim of this study was to test whether an algorithm that includes a limit on insulin delivery is effective at protecting against hypoglycemia under those circumstances. An observational study of 8 participants with type 1 diabetes was conducted, in which a hybrid closed-loop (HCL) system (Medtronic™ 670G) was challenged with hypoglycemic stimuli: exercise and an over-reading glucose sensor. There was no overnight or exercise-induced hypoglycemia during HCL insulin delivery. All daytime hypoglycemia was attributable to post-meal bolused insulin in those participants with a more aggressive carbohydrate factor. HCL systems rely on accurate carbohydrate ratios and carbohydrate counting to avoid hypoglycemia. The algorithm, tested against moderate exercise and an over-reading glucose sensor, performed well in terms of hypoglycemia avoidance. Algorithm refinement continues in preparation for long-term outpatient trials.
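The 670G's internal logic is proprietary and not described in the abstract, so the following is only a generic sketch of an insulin-delivery limit of the kind mentioned; the function name and all thresholds are invented for illustration:

```python
def limited_insulin_dose(commanded_units, sensor_glucose_mgdl,
                         max_units_per_step=0.2, suspend_below_mgdl=70.0):
    """Cap each commanded insulin micro-bolus and suspend delivery at low
    sensor glucose. All thresholds are illustrative placeholders; the
    commercial algorithm's actual logic is not reproduced here."""
    if sensor_glucose_mgdl < suspend_below_mgdl:
        return 0.0                      # suspend on a low (possibly inaccurate) reading
    return min(commanded_units, max_units_per_step)

print(limited_insulin_dose(0.5, 120.0))   # capped at the per-step limit
print(limited_insulin_dose(0.5, 65.0))    # suspended
```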
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel mixed iterative solution technique for efficient 3-D computations of turbine engine hot-section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for quasi-static, transient dynamic, and buckling analyses. The global-local analysis procedure, referred to as subelement refinement, is developed in the framework of the mixed iterative solution and presented in detail. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of the strain and to project element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng
2018-04-01
The implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaicking with UAV (Unmanned Aerial Vehicle) imagery are presented. The initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is generated from DSM (Digital Surface Model) data; the vertices (junction nodes of seam-lines) of the initial network are relocated if they lie on high objects (buildings, trees, and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm based on the edge diagram and the relocated vertices. Our method was tested on three real UAV datasets. Two quantitative measures are introduced to evaluate the results. Preliminary results show that the method is suitable for both regularly and irregularly aligned UAV images over most terrain types (flat or mountainous areas), and that it outperforms the state-of-the-art method in both quality and efficiency on the test datasets.
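For illustration, a minimal weighted A∗ over a 2-D cost grid standing in for the DSM-based edge diagram; the Manhattan heuristic, 4-connectivity, and weight value are our assumptions:

```python
import heapq

def weighted_a_star(cost, start, goal, w=1.5):
    """Weighted A*: f = g + w*h, where w > 1 biases the search toward the
    goal at the price of strict optimality. The grid cost stands in for
    the edge diagram a seam-line should avoid crossing."""
    rows, cols = len(cost), len(cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(w * h(start), 0.0, start)]
    g_score, parent = {start: 0.0}, {start: None}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:                       # reconstruct the seam-line
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g + cost[nxt[0]][nxt[1]]  # stepping cost, e.g. edge strength
                if ng < g_score.get(nxt, float("inf")):
                    g_score[nxt] = ng
                    parent[nxt] = node
                    heapq.heappush(frontier, (ng + w * h(nxt), ng, nxt))
    return None

# A high-cost ridge (middle row) stands in for a building the seam must skirt.
grid = [[1, 1, 1, 1],
        [9, 9, 9, 1],
        [1, 1, 1, 1]]
print(weighted_a_star(grid, (0, 0), (2, 0)))
```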
Zhang, Yang
2014-01-01
We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. PMID:23760925
NASA Technical Reports Server (NTRS)
Whorton, M. S.
1998-01-01
Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited because of the difficulty of obtaining accurate models for flexible space structures. Achieving sufficiently high performance to accomplish mission objectives may require the ability to refine the control-design model based on closed-loop test data and to tune the controller based on the refined model. A control-system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading off nominal performance against robust stability. A homotopy algorithm is presented that generates a trajectory of gains which may be implemented to determine the maximum achievable performance for a given model-error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than with either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification that refines the parameters of a control-design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and the improved performance realized by using the refined model for controller redesign. Together, these developments provide an effective mechanism for achieving high-performance control of flexible space structures.
Naito, Mariko; Sato, Keiko; Shoji, Mikio; Yukitake, Hideharu; Ogura, Yoshitoshi; Hayashi, Tetsuya; Nakayama, Koji
2011-07-01
In our previous study, extensive genomic rearrangements were found in two strains of the Gram-negative anaerobic bacterium Porphyromonas (Por.) gingivalis, and most of these rearrangements were associated with mobile genetic elements such as insertion sequences and conjugative transposons (CTns). CTnPg1, identified in Por. gingivalis strain ATCC 33277, was the first complete CTn reported for the genus Porphyromonas. In the present study, we found that CTnPg1 can be transferred from strain ATCC 33277 to another Por. gingivalis strain, W83, at a frequency of 10^-7 to 10^-6. The excision of CTnPg1 from the chromosome in a donor cell depends on an integrase (Int; PGN_0094) encoded in CTnPg1, whereas CTnPg1 excision is independent of PGN_0084 (a DNA topoisomerase I homologue; Exc) encoded within CTnPg1 and recA (PGN_1057) on the donor chromosome. Intriguingly, however, the transfer of CTnPg1 between Por. gingivalis strains requires RecA function in the recipient. Sequencing analysis of CTnPg1-integrated sites on the chromosomes of transconjugants revealed that the consensus attachment (att) sequence is a 13 bp sequence, TTTTCNNNNAAAA. We further report that CTnPg1 is able to transfer to two other bacterial species, Bacteroides thetaiotaomicron and Prevotella oralis. In addition, CTnPg1-like CTns are located in the genomes of other oral anaerobic bacteria, Porphyromonas endodontalis, Prevotella buccae and Prevotella intermedia, with the same consensus att sequence. These results suggest that CTns in the CTnPg1 family are widely distributed among oral anaerobic Gram-negative bacteria found in humans and play important roles in horizontal gene transfer among these bacteria.
Arivizhivendhan, K V; Mahesh, M; Boopathy, R; Patchaimurugan, K; Maharaja, P; Swarnalatha, S; Regina Mary, R; Sekaran, G
2016-09-15
Prodigiosin (PG) is a bioactive compound produced by several bacterial species. Currently, many technologies are being developed for the production of PG by fermentation processes. However, new challenges are being faced with regard to the production of PG in terms of the recovery and purification steps, owing to the labile nature of PG molecules and the cost of the purification steps. Conventional methods have limitations due to high cost, low reusability, and health hazards. Hence, the present investigation was focused on the development of surface-functionalized magnetic iron oxide ([Fe3O4]F) for solvent-free extraction of bioactive PG from the bacterial fermented medium. Fe3O4 was functionalized with diethanolamine and characterized by FT-IR, diffuse reflectance spectroscopy, thermogravimetric analysis, scanning electron microscopy, and confocal microscopy. The various process parameters, such as contact time, temperature, pH, and mass of Fe3O4, were optimized for the extraction of PG using functionalized Fe3O4. Instrumental analyses confirmed that the PG molecules were cross-linked with functional groups on [Fe3O4]F through van der Waals forces of attraction. PG extracted through Fe3O4 or [Fe3O4]F was separated from the fermentation medium by applying an external electromagnetic field and regenerated for successive reuse cycles. The purity of the extracted PG was characterized by high-performance liquid chromatography, FT-IR, and UV-visible spectroscopy. The iron oxide-diethanolamine-PG cross-linked ([Fe3O4]F-PG) composite matrix effectively deactivates harmful fouling by cyanobacterial growth in water-treatment plants. The present investigation provides the possibility of solvent-free extraction of bacterial bioactive PG from a fermented medium using functionalized magnetic iron oxide.
Gastric mucosal status in populations with a low prevalence of Helicobacter pylori in Indonesia
Miftahussurur, Muhammad; Nusi, Iswan Abbas; Akil, Fardah; Syam, Ari Fahrial; Wibawa, I. Dewa Nyoman; Rezkitha, Yudith Annisa Ayu; Maimunah, Ummi; Subsomwong, Phawinee; Parewangi, Muhammad Luthfi; Mariadi, I. Ketut; Adi, Pangestu; Uchida, Tomohisa; Purbayu, Herry; Sugihartono, Titong; Waskito, Langgeng Agung; Hidayati, Hanik Badriyah; Lusida, Maria Inge
2017-01-01
In Indonesia, endoscopy services are limited and studies of gastric mucosal status using pepsinogens (PGs) are rare. We measured PG levels, and calculated the best cutoff and predictive values for discriminating gastric mucosal status among ethnic groups in Indonesia. We collected gastric biopsy specimens and sera from 233 patients with dyspepsia living on three Indonesian islands. When ≥5.5 U/mL was used as the best cutoff value of the Helicobacter pylori antibody titer, 8.6% (20 of 233) were positive for H. pylori infection. PG I and II levels were higher among smokers, and PG I was higher in alcohol drinkers than in their counterparts. The PG II level was significantly higher, and PG I/II ratios were lower, in H. pylori-positive than in H. pylori-negative patients. PG I/II ratios showed a significant inverse correlation with the inflammation and atrophy scores of the antrum. The best cutoff values of PG I/II were 4.05 and 3.55 for discriminating chronic and atrophic gastritis, respectively. PG I, PG II, and PG I/II ratios were significantly lower in subjects from Bangli than in those from Makassar and Surabaya, concordant with the ABC group distribution; however, group D (H. pylori negative/PG positive) was lowest in subjects from Bangli. In conclusion, validation of indirect methods is necessary before their application. We confirmed that the serum PG level is a useful biomarker for determining chronic gastritis, but it has only modest sensitivity for atrophic gastritis in Indonesia. The ABC method should be used with caution in areas with a low prevalence of H. pylori. PMID:28463979
Black, Donald W; Coryell, William H; Crowe, Raymond R; Shaw, Martha; McCormick, Brett; Allen, Jeff
2015-12-01
This study investigates the presence of personality disorders, impulsiveness, and novelty seeking in probands with DSM-IV pathological gambling (PG), controls, and their respective first-degree relatives using a blind family study methodology. Ninety-three probands with DSM-IV PG, 91 controls, and their 395 first-degree relatives were evaluated for the presence of personality disorder with the Structured Interview for DSM-IV Personality. Impulsiveness was assessed with the Barratt Impulsiveness Scale (BIS). Novelty seeking was evaluated using questions from Cloninger's Temperament and Character Inventory. Results were analyzed using logistic regression by the method of generalized estimating equations to account for within-family correlations. PG probands had a significantly higher prevalence of personality disorders than controls (41 vs. 7%, OR = 9.0, P < 0.001), along with higher levels of impulsiveness and novelty seeking. PG probands with a personality disorder had more severe gambling symptoms; earlier age at PG onset; more suicide attempts; greater psychiatric comorbidity; and a greater family history of psychiatric illness than PG probands without a personality disorder. PG relatives had a significantly higher prevalence of personality disorder than relatives of controls (24 vs. 9%, OR = 3.2, P < 0.001) and higher levels of impulsiveness. Risk for PG in relatives is associated with the presence of personality disorder and increases along with rising BIS Non-Planning and Total scale scores. Personality disorders, impulsiveness, and novelty seeking are common in people with PG and their first-degree relatives. The presence of a personality disorder appears to be a marker of PG severity and earlier age of onset. Risk for PG in relatives is associated with the presence of personality disorder and trait impulsiveness. These findings suggest that personality disorder and impulsiveness may contribute to a familial diathesis for PG.
Choi, Jung-Seok; Shin, Young-Chul; Jung, Wi Hoon; Jang, Joon Hwan; Kang, Do-Hyung; Choi, Chi-Hoon; Choi, Sam-Wook; Lee, Jun-Young; Hwang, Jae Yeon; Kwon, Jun Soo
2012-01-01
Background: Pathological gambling (PG) and obsessive-compulsive disorder (OCD) are conceptualized as behavioral addictions, characterized by a dependency on repetitive gambling behavior and by rewarding effects following compulsive behavior, respectively. However, no neuroimaging studies to date have examined reward circuitry during the anticipation phase of reward in PG compared with OCD while considering repetitive gambling and compulsion as addictive behaviors. Methods/Principal Findings: To elucidate the neural activities specific to the anticipation phase of reward, we performed event-related functional magnetic resonance imaging (fMRI) in young adults with PG and compared them with patients with OCD and healthy controls. Fifteen male patients with PG, 13 patients with OCD, and 15 healthy controls, group-matched for age, gender, and IQ, participated in a monetary incentive delay task during fMRI scanning. Neural activation in the ventromedial caudate nucleus during anticipation of both gain and loss was decreased in patients with PG compared with that in patients with OCD and healthy controls. Additionally, activation in the anterior insula during anticipation of loss in patients with PG was intermediate between that in patients with OCD and that in healthy controls (healthy controls < PG < OCD), and a significant positive correlation between activity in the anterior insula and South Oaks Gambling Screen score was found in patients with PG. Conclusions: Decreased neural activity in the ventromedial caudate nucleus during anticipation may be a specific neurobiological feature of the pathophysiology of PG, distinguishing it from OCD and healthy controls. The correlation of anterior insular activity during loss anticipation with PG symptoms suggests that patients with PG approach the features of OCD associated with harm avoidance as PG symptoms deteriorate. Our findings identify functional disparities and similarities between patients with PG and OCD in the neural responses associated with reward anticipation. PMID:23029329
Mikecz, Katalin; Glant, Tibor T; Markovics, Adrienn; Rosenthal, Kenneth S; Kurko, Julia; Carambula, Roy E; Cress, Steve; Steiner, Harold L; Zimmerman, Daniel H
2017-07-13
Rheumatoid arthritis (RA) is an autoimmune joint disease maintained by aberrant immune responses involving CD4+ T helper (Th)1 and Th17 cells. In this study, we tested the therapeutic efficacy of Ligand Epitope Antigen Presentation System (LEAPS™) vaccines in two Th1 cell-driven mouse models of RA, cartilage proteoglycan (PG)-induced arthritis (PGIA) and PG G1-domain-induced arthritis (GIA). The immunodominant PG peptide PG70 was attached to a DerG or J immune cell binding peptide, and the DerG-PG70 and J-PG70 LEAPS vaccines were administered to the mice after the onset of PGIA or GIA symptoms. As indicated by significant decreases in visual and histopathological scores of arthritis, the DerG-PG70 vaccine inhibited disease progression in both PGIA and GIA, while the J-PG70 vaccine was ineffective. Splenic CD4+ cells from DerG-PG70-treated mice were diminished in Th1 and Th17 populations but enriched in Th2 and regulatory T (Treg) cells. In vitro spleen cell-secreted and serum cytokines from DerG-PG70-treated mice demonstrated a shift from a pro-inflammatory to an anti-inflammatory/regulatory profile. DerG-PG70 peptide tetramers preferentially bound to CD4+ T-cells of GIA spleen cells. We conclude that the DerG-PG70 vaccine (now designated CEL-4000) exerts its therapeutic effect by interacting with CD4+ cells, which results in an antigen-specific down-modulation of pathogenic T-cell responses in both the PGIA and GIA models of RA. Future studies will need to determine the potential of LEAPS vaccination to provide disease suppression in patients with RA. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Initial effect of the Fukushima accident on atmospheric electricity
NASA Astrophysics Data System (ADS)
Takeda, M.; Yamauchi, M.; Makino, M.; Owada, T.
2011-08-01
Vertical atmospheric DC electric field at ground level, or potential gradient (PG), suddenly dropped by one order of magnitude at Kakioka, 150 km southwest of the Fukushima Dai-ichi nuclear power plant (FNPP), right after the plant released a massive amount of radioactive material southward on 14 March 2011. The PG stayed at this level for days, with very small daily variations. Such a long-lasting, near-steady low PG has never been observed at Kakioka. The sudden drop of PG, with a one-hour time scale, is similar to drops associated with rain-induced radioactive fallout after nuclear tests and the Chernobyl disaster. A comparison of the PG data with radiation dose rate data at different places revealed that the arrival of radioactive dust carried by low-altitude wind caused the PG drop without rain. Furthermore, the PG might have reflected a minor release several hours before this release, at a distance of 150 km. It is recommended that all nuclear power plants have a network of PG observations surrounding the plant.
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Wang, Y.; Shu, C.
2017-12-01
This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.
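A toy sketch of the per-quadrant refinement decision described above, using gradient magnitude as a stand-in for the paper's block indicator (the indicator choice and threshold are illustrative, not the paper's):

```python
import numpy as np

def child_blocks_to_spawn(field, threshold=0.3):
    """Per-quadrant refinement decision for one block: each of the four
    child blocks is spawned independently wherever the indicator is
    triggered, rather than all four at once. Gradient magnitude serves
    as an illustrative interface indicator."""
    gy, gx = np.gradient(field)                 # per-cell differences
    indicator = np.hypot(gx, gy)
    n = field.shape[0] // 2
    quadrants = {"SW": indicator[:n, :n], "SE": indicator[:n, n:],
                 "NW": indicator[n:, :n], "NE": indicator[n:, n:]}
    return [name for name, q in quadrants.items() if q.max() > threshold]

x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
phi = np.tanh((np.hypot(x - 0.5, y - 0.5) - 0.3) / 0.1)  # interface in one corner
print(child_blocks_to_spawn(phi))   # only the quadrant holding the interface refines
```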
Laube, Beth L; Afshar-Mohajer, Nima; Koehler, Kirsten; Chen, Gang; Lazarus, Philip; Collaco, Joseph M; McGrath-Morrow, Sharon A
2017-04-01
To determine the effect of acute (1 week) and chronic (3 weeks) exposure to E-cigarette (E-cig) emissions on mucociliary clearance (MCC) in murine lungs. C57BL/6 male mice (age 10.5 ± 2.4 weeks) were exposed for 20 min/day to E-cigarette aerosol generated by a Joyetech 510-T® E-cig containing either 0% nicotine (N)/propylene glycol (PG) for 1 week (n = 6) or 3 weeks (n = 9), or 2.4% N/PG for 1 week (n = 6) or 3 weeks (n = 9), followed by measurement of MCC. Control mice (n = 15) were exposed to neither PG alone nor N/PG. MCC was assessed by gamma camera following aspiration of technetium-99m aerosol and was expressed as the amount of radioactivity removed from both lungs over 6 hours (MCC6hrs). Venous blood was assayed for cotinine levels in control mice and in mice exposed for 3 weeks to PG alone and N/PG. MCC6hrs in control mice and in mice acutely exposed to PG alone and N/PG was similar, averaging (±1 standard deviation) 8.6 ± 5.2%, 7.5 ± 2.8% and 11.2 ± 5.9%, respectively. In contrast, chronic exposure to PG alone stimulated MCC6hrs (17.2 ± 8.0%), and this stimulation was significantly blunted following chronic exposure to N/PG (8.7 ± 4.6%) (p < .05). Serum cotinine levels were <0.5 ng/ml in control mice and in mice exposed to PG alone, whereas N/PG-exposed mice averaged 14.6 ± 12.0 ng/ml. In this murine model, a chronic, daily, 20-min exposure to N/PG, but not an acute exposure, slowed MCC compared to exposure to PG alone and led to systemic absorption of nicotine.
Auto-adaptive finite element meshes
NASA Technical Reports Server (NTRS)
Richter, Roland; Leyland, Penelope
1995-01-01
Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamic local refinement/derefinement techniques, which enable structural as well as geometrical optimization. The methods described are applied to a number of ICASE test cases that are particularly interesting for unsteady flow simulations.
Portable Language-Independent Adaptive Translation from OCR. Phase 1
2009-04-01
including brute-force k-Nearest Neighbors (kNN), fast approximate kNN using hashed k-d trees, classification and regression trees, and locality... achieved by refinements in ground-truthing protocols. Recent algorithmic improvements to our approximate kNN classifier using hashed k-d trees allow... In recent years discriminative training has been shown to outperform phonetic HMMs estimated using ML for speech recognition. Standard ML estimation
NASA Technical Reports Server (NTRS)
Kogut, J.
1981-01-01
The NIMBUS 7 Scanning Multichannel Microwave Radiometer (SMMR) data are analyzed. The impact of cross polarization and Faraday rotation on SMMR-derived brightness temperatures is evaluated. The algorithms used to retrieve the geophysical parameters are tested and refined, and the retrieved values are compared with values derived by other techniques. The technical approach taken is described and the results are presented.
Proglucagons in vertebrates: Expression and processing of multiple genes in a bony fish.
Busby, Ellen R; Mommsen, Thomas P
2016-09-01
In contrast to mammals, where a single proglucagon (PG) gene encodes three peptides: glucagon, glucagon-like peptide 1 and glucagon-like peptide 2 (GLP-1; GLP-2), many non-mammalian vertebrates carry multiple PG genes. Here, we investigate proglucagon mRNA sequences, their tissue expression and processing in a diploid bony fish. Copper rockfish (Sebastes caurinus) express two independent genes coding for distinct proglucagon sequences (PG I, PG II), with PG II lacking the GLP-2 sequence. These genes are differentially transcribed in the endocrine pancreas, the brain, and the gastrointestinal tract. Alternative splicing identified in rockfish is only one part of this complex regulation of the PG transcripts: the system has the potential to produce two glucagons, four GLP-1s and a single GLP-2, or any combination of these peptides. Mass spectrometric analysis of partially purified PG-derived peptides in endocrine pancreas confirms translation of both PG transcripts and differential processing of the resulting peptides. The complex differential regulation of the two PG genes and their continued presence in this extant teleostean fish strongly suggests unique and, as yet largely unidentified, roles for the peptide products encoded in each gene. Copyright © 2016 Elsevier Inc. All rights reserved.
Shakleya, Diaa M.
2011-01-01
A method for the simultaneous LC-MS/MS quantification of nicotine, cocaine, 6-acetylmorphine (6AM), codeine, and metabolites in 100 mg of fetal human brain was developed and validated. After homogenization and solid-phase extraction, analytes were resolved on a Hydro-RP analytical column with gradient elution. Empirically determined linearity was from 5–5,000 pg/mg for cocaine and benzoylecgonine (BE), 25–5,000 pg/mg for cotinine, ecgonine methyl ester (EME) and 6AM, 50–5,000 pg/mg for trans-3-hydroxycotinine (OH-cotinine) and codeine, and 250–5,000 pg/mg for nicotine. Potential endogenous and exogenous interferences were resolved. Intra- and inter-assay analytical recoveries were ≥92%, intra- and inter-day and total assay imprecision were ≤14% RSD, and extraction efficiencies were ≥67.2% with ≤83% matrix effect. Method applicability was demonstrated with a postmortem fetal brain containing 40 pg/mg cotinine, 65 pg/mg OH-cotinine, 13 pg/mg cocaine, 34 pg/mg EME, and 525 pg/mg BE. This validated method is useful for the determination of nicotine, opioid, and cocaine biomarkers in brain. PMID:19229524
STUDIES ON THE MECHANISM OF THE FORMATION OF THE PENICILLIN ANTIGEN
Levine, Bernard B.
1960-01-01
Seven highly purified degradation products of penicillin G (PG) were examined with regard to their ability to cross-react allergically with PG. Guinea pig allergic contact dermatitis was employed as the test system. Three of these degradation products, D-benzylpenicillenic acid (BPE), D-penicillamine, and D-α-benzylpenicilloic acid were found to cross-react with PG and also to be capable of inducing delayed contact allergy in the guinea pig. BPE and PG cross-reacted with particularly intense reactions, and other immunologic experiments indicated that PG and BPE introduce identical allergic determinant groups into epidermal proteins. These experimental results were correlated with the results of previous studies concerning the degradation pathways of PG under physiological conditions in vitro, and the chemical reactivities of these degradation products. Based on these immunologic and chemical data, a schema is proposed which suggests the chemical pathways by which PG may react with epidermal proteins in vivo to form the penicillin antigen. The identity of the specific antigenic determinant groups of the penicillin antigen is suggested. The relationship between PG allergy of the contact dermatitis type in the guinea pig and PG allergy of the immediate type in man is discussed. PMID:13761469
PG&E: Pacific Gas & Electric (PG&E) customers are eligible for a $10,000 rebate for the ...; bring the Customer Information Form and a copy of a recent PG&E utility bill to a participating dealership. For more information, visit PG&E
1985-06-01
G.R., Principles of Management, pg. 391, Richard D. Irwin, Inc., Homewood, Ill., 1977. 12. Benner, P.E., Stress and Satisfaction on the Job, pg. 180... Harris, O.J., Managing People at Work, pg. 339, Wiley and Sons, Santa Barbara, Ca., 1976. 15. Terry, G.R., Principles of Management, pg. 330
A Strategic Management Plan to Adopt a New Methodology for Treating Total Joint Replacement Patients
2007-06-28
approaches. Market penetration will be accomplished by community seminars, increased advertisement, and word of mouth from highly satisfied patients. The... External Environment pg. 13; Service Area Competitor Analysis pg. 15; Internal Analysis pg. 21; Directional Strategies pg. 25; Adaptive pg. 27; Market Entry... making them unable to pursue a low or set-price marketing strategy. Most suppliers do not dictate which brand implant a physician may use. This flexibility
The Quantitative Analysis of bFGF and VEGF by ELISA in Human Meningiomas
Denizot, Yves; De Armas, Rafael; Caire, François; Moreau, Jean Jacques; Pommepuy, Isabelle; Truffinet, Véronique; Labrousse, François
2006-01-01
The quantitative analysis of VEGF using ELISA in various subtypes of grade I meningiomas revealed higher VEGF contents in meningothelial (2.38 ± 0.62 pg/μg protein, n = 7), transitional (1.08 ± 0.21 pg/μg protein, n = 13), and microcystic meningiomas (1.98 ± 0.87 pg/μg protein, n = 5) than in fibrous ones (0.36 ± 0.09 pg/μg protein, n = 5). In contrast to VEGF, no difference in the concentrations of bFGF was detected. VEGF levels did not correlate with meningioma grade (1.47 ± 0.23 pg/μg versus 2.29 ± 0.58 pg/μg for 32 grade I and 16 grade II tumours, respectively), vascularisation (1.53 ± 0.41 pg/μg versus 1.96 ± 0.28 pg/μg for 24 low- and 24 highly vascularised tumours, respectively), or brain invasion (2.32 ± 0.59 pg/μg versus 1.46 ± 0.27 pg/μg for 7 patients with and 41 without invasion, respectively). The ELISA procedure is thus an interesting tool for assaying VEGF and bFGF levels in meningiomas and for testing putative correlations with clinical parameters. It is tempting to speculate that ELISA would also be valuable for the quantitative analysis of other angiogenic growth factors and cytokines in intracranial tumours. PMID:17392584
Mertens, Jeffrey A; Bowman, Michael J
2011-04-01
Polygalacturonase (PG) enzymes hydrolyze the long polygalacturonic acid chains found in the smooth regions of pectin. Interest in this enzyme class continues due to their ability to macerate tissues of economically important crops and their use in a number of industrial processes. Rhizopus oryzae has a large PG gene family, with 15 of 18 genes encoding unique active enzymes. The PG enzymes, 12 endo-PGs and 3 exo-galacturonases, were expressed in Pichia pastoris and purified, enabling biochemical characterization to gain insight into the maintenance of this large gene family within the Rhizopus genome. The 15 PG enzymes have pH optima ranging from 4.0 to 5.0 and temperature optima from 30 to 40 °C. While the pH and temperature optima do little to separate the enzymes, their specific activity is highly variable, ranging from over 200 to less than 1 μmol/min/mg. A general pattern related to the groupings in the phylogenetic tree was visible, with the group containing the exo-PG enzymes showing the lowest specific activity. Finally, the progress curves of the PG enzymes in the phylogenetic group that includes the exo-PG enzymes, acting on trigalacturonic acid, lend additional support to the idea that the ancestral form of PG in Rhizopus was endolytic and that exolytic function evolved later.
Kim, Yong-Kyoung; Kim, Yeon Bok; Uddin, Md Romij; Lee, Sanghyun; Kim, Soo-Un; Park, Sang Un
2014-10-17
To elucidate the function of mevalonate-5-pyrophosphate decarboxylase (MVD) and farnesyl pyrophosphate synthase (FPS) in triterpene biosynthesis, the genes encoding these enzymes were transformed into Panax ginseng hairy roots. All the transgenic lines showed higher expression levels of PgMVD and PgFPS than the wild-type control. Among the hairy-root lines transformed with PgMVD, M18 showed the highest transcription level, 14.5-fold that of the control. Transcription of F11 and F20, transformed with PgFPS, was 11.1-fold higher than the control. In triterpene analysis, M25 of PgMVD produced a 4.4-fold higher stigmasterol content (138.95 μg/100 mg dry weight [DW]) than the control; F17 of PgFPS showed the highest total ginsenoside content (36.42 mg/g DW), 2.4-fold higher than the control. Our results indicate that metabolic engineering in P. ginseng was successfully achieved through Agrobacterium rhizogenes-mediated transformation and that the accumulation of phytosterols and ginsenosides was enhanced by introducing the PgMVD and PgFPS genes into the hairy roots of the plant. Our results suggest that PgMVD and PgFPS play an important role in the triterpene biosynthesis of P. ginseng.
Rahimzadeh, Mahsa; Poodat, Manijeh; Javadpour, Sedigheh; Qeshmi, Fatemeh Izadpanah; Shamsipour, Fereshteh
2016-01-01
Background: L-asparaginase has been used as a chemotherapeutic agent in the treatment of lymphoblastic leukemia. In the present investigation, Bacillus sp. PG03 and Bacillus sp. PG04 were studied. Methods: L-asparaginases were produced using different culture media and purified using ion-exchange chromatography. Results: Maximum productivity was obtained when asparagine was used as the nitrogen source, at pH 7 and 48 h after cultivation. The new intracellular L-asparaginases showed apparent molecular weights of 25 kDa and 30 kDa by SDS-PAGE, respectively. The enzymes were active over a wide pH range (3-9), with maximum activity at pH 6 for the Bacillus PG03 and pH 7 for the Bacillus PG04 L-asparaginase. The Bacillus PG03 enzyme was optimally active at 37 °C, while Bacillus PG04 showed maximum activity at 40 °C. The kinetic parameters Km and Vmax of both enzymes were determined using L-asparagine as the substrate. Thermal inactivation studies of the Bacillus PG03 and Bacillus PG04 L-asparaginases gave t1/2 values of 69.3 min and 34.6 min at 37 °C, respectively; T50 and ΔG of inactivation were also measured for both enzymes. Conclusion: The results revealed that both enzymes have appropriate characteristics and thus could be potential candidates for medical applications. PMID:27999622
Galantini, Luciano; Di Matteo, Adele; Pavel, Nicolae Viorel; De Lorenzo, Giulia; Cervone, Felice; Federici, Luca; Sicilia, Francesca
2013-01-01
Polygalacturonases (PGs) are secreted by phytopathogenic fungi to degrade the plant cell wall homogalacturonan during plant infection. To counteract PGs, plants have evolved polygalacturonase-inhibiting proteins (PGIPs) that slow down fungal infection and defend cell wall integrity. PGIPs favour the accumulation of oligogalacturonides, homogalacturonan fragments that act as endogenous elicitors of plant defence responses. We have previously shown that PGIP2 from Phaseolus vulgaris (PvPGIP2) forms a complex with PG from Fusarium phyllophilum (FpPG), shielding the enzyme's active-site cleft from the substrate. Here we analyse by small-angle X-ray scattering (SAXS) the interaction between PvPGIP2 and a PG from Colletotrichum lupini (CluPG1). We show a different shape of the PG-PGIP complex, which allows substrate entry and provides a structural explanation for the different inhibition kinetics exhibited by PvPGIP2 towards the two isoenzymes. The analysis of the SAXS structures allowed us to investigate the basis of the inability of PG from Fusarium verticillioides (FvPG) to be inhibited by PvPGIP2 or by any other known PGIP. FvPG is 92.5% identical to FpPG, and we show here, by both loss- and gain-of-function mutations, that a single amino acid site acts as a switch for FvPG recognition by PvPGIP2. PMID:24260434
Lin, Yuan-Chuan; Lin, Chih-Hsueh; Yao, Hsien-Tsung; Kuo, Wei-Wen; Shen, Chia-Yao; Yeh, Yu-Lan; Ho, Tsung-Jung; Padma, V Vijaya; Lin, Yu-Chen; Huang, Chih-Yang; Huang, Chih-Yang
2017-06-09
Platycodon grandiflorum (PG) is a Chinese medicinal plant used for decades as a traditional prescription to eliminate phlegm, relieve cough, reduce inflammation, and lower blood pressure. PG also has significant effects on the cardiovascular system. The aqueous extract of Platycodon grandiflorum (JACQ.) A. DC. root was screened for inhibition of the Ang II-induced IGF-IIR activation and apoptosis pathway in H9c2 cardiomyocytes. The effects were also studied in spontaneously hypertensive rats (SHRs; five groups, n = 5) using low and high doses of PG for 50 days. Ang II-induced IGF-IIR activation was analyzed by luciferase reporter, RT-PCR, western blot, and surface IGF-IIR expression assays. Furthermore, identification of the major active constituent of PG was carried out by high-performance liquid chromatography-mass spectrometry (HPLC-MS). Our results indicate that a crude extract of PG significantly suppresses the Ang II-induced IGF-IIR signaling pathway to prevent cardiomyocyte apoptosis. PG extract inhibits Ang II-mediated JNK activation and SIRT1 degradation to reduce IGF-IIR activity. Moreover, PG maintains SIRT1 stability to enhance HSF1-mediated IGF-IIR suppression, which prevents cardiomyocyte apoptosis. In animal models, the administration of PG markedly reduced this apoptotic pathway in the hearts of SHRs. Taken together, PG may be considered an effective treatment for cardiac disease in hypertensive patients. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Molecular imaging assessment of periodontitis lesions in an experimental mouse model.
Ideguchi, Hidetaka; Yamashiro, Keisuke; Yamamoto, Tadashi; Shimoe, Masayuki; Hongo, Shoichi; Kochi, Shinsuke; Yoshihara-Hirata, Chiaki; Aoyagi, Hiroaki; Kawamura, Mari; Takashiba, Shogo
2018-06-06
We aimed to evaluate molecular imaging as a novel diagnostic tool in a mouse periodontitis model induced by ligature and Porphyromonas gingivalis (Pg) inoculation. Twelve female mice were assigned to the following groups: no treatment as the control group (n = 4); periodontitis induced by ligature and Pg as the Pg group (n = 4); and the Pg group treated with glycyrrhizinic acid (GA) as the Pg + GA group (n = 4). All mice were administered a myeloperoxidase (MPO) activity-specific luminescent probe and observed using a charge-coupled device camera on day 14. Image analysis of all mice was conducted using software to determine the signal intensity of inflammation. Additionally, histological and radiographic evaluation of periodontal inflammation and bone resorption at the site of periodontitis, and quantitative enzyme-linked immunosorbent assay (ELISA), were conducted on three mice from each group. Each experiment was performed three times. Levels of serum IgG antibody against P. gingivalis were significantly higher in the Pg than in the Pg + GA group. Histological analyses indicated that the numbers of osteoclasts and neutrophils were significantly lower in the Pg + GA than in the Pg group. Micro-CT image analysis indicated no difference in bone resorption between the Pg and Pg + GA groups. The signal intensity of MPO activity was detected on the complete craniofacial image; moreover, strong signal intensity was localized specifically at the periodontitis site in the ex vivo palate, with group-wise differences. Molecular imaging analysis based on MPO activity showed highly sensitive detection of periodontal inflammation in mice and has potential as a diagnostic tool for periodontitis.
Hydrogen storage capacity on Ti-decorated porous graphene: First-principles investigation
NASA Astrophysics Data System (ADS)
Yuan, Lihua; Kang, Long; Chen, Yuhong; Wang, Daobin; Gong, Jijun; Wang, Chunni; Zhang, Meiling; Wu, Xiaojuan
2018-03-01
The hydrogen storage capacity of titanium (Ti)-decorated porous graphene (PG) has been investigated using density functional theory simulations with the generalized gradient approximation. The possible adsorption sites of a Ti atom on PG and the electronic properties of the Ti-PG system are also discussed. The results show that a Ti atom prefers to adsorb strongly at the center site above the C hexagon, with a binding energy of 3.65 eV, and that both polarization and hybridization mechanisms contribute to Ti atom adsorption on PG. To avoid a tendency of clustering among Ti atoms, each side of the PG unit cell should contain only one Ti atom. For a single side of PG, four H2 molecules can be adsorbed around the Ti atom, and the adsorption mechanism of the H2 molecules arises not only from polarization between the Ti and H atoms but also from orbital hybridization among the Ti atom, the H2 molecules, and the C atoms. For double-sided decoration of PG, eight H2 molecules can be adsorbed on the Ti-decorated PG unit cell with an average adsorption energy of -0.457 eV, and the gravimetric hydrogen storage capacity is 6.11 wt.%. Furthermore, an ab initio molecular-dynamics simulation shows that six H2 molecules can be adsorbed on the double sides of the Ti-PG unit cell and that the configuration of Ti-PG is very stable at 300 K without external pressure, which indicates that Ti-decorated PG could be considered a potential hydrogen storage medium at ambient conditions.
Guzik, Stephen M.; Gao, Xinfeng; Owen, Landon D.; ...
2015-12-20
We present a fourth-order accurate finite-volume method for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Some novel considerations for formulating the semi-discrete system of equations in computational space are combined with detailed mechanisms for accommodating the adapting grids. Furthermore, these considerations ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). The solution in time is advanced with a fourth-order Runge-Kutta method. A series of tests verifies that the expected accuracy is achieved in smooth flows, and the solution of a Mach reflection problem demonstrates the effectiveness of the algorithm in resolving strong discontinuities.
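As a miniature of the time integrator only, here is a classical fourth-order Runge-Kutta advance of a 1-D finite-volume semi-discretization in flux form (first-order upwind advection, our simplification, not the paper's fourth-order mapped-grid scheme); the flux form makes the conservation property easy to check:

```python
import numpy as np

def rhs(u, dx, a=1.0):
    """Semi-discrete FV right-hand side for u_t + a*u_x = 0 with periodic
    first-order upwind fluxes (a 1-D stand-in for the paper's scheme)."""
    flux = a * u                               # upwind face flux for a > 0
    return -(flux - np.roll(flux, 1)) / dx     # flux difference per cell

def rk4_step(u, dt, dx):
    """Classical fourth-order Runge-Kutta advance of the semi-discrete system."""
    k1 = rhs(u, dx)
    k2 = rhs(u + 0.5 * dt * k1, dx)
    k3 = rhs(u + 0.5 * dt * k2, dx)
    k4 = rhs(u + dt * k3, dx)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

n, dx = 200, 1.0 / 200
x = np.arange(n) * dx
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u, dt = u0.copy(), 0.4 * dx                    # CFL-limited time step
for _ in range(int(0.25 / dt)):
    u = rk4_step(u, dt, dx)
# the flux form telescopes, so total mass is conserved to round-off
print(abs(u.sum() - u0.sum()) * dx)
```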
Generalization and refinement of an automatic landing system capable of curved trajectories
NASA Technical Reports Server (NTRS)
Sherman, W. L.
1976-01-01
Refinements in the lateral and longitudinal guidance of an automatic landing system capable of curved trajectories were studied. Wing flaps or drag flaps (speed brakes) were found to provide faster and more precise speed control than autothrottles. For the lateral control, it is shown that using the integral of the roll error in the roll command over the first 30 to 40 seconds of flight reduces the sensitivity of the lateral guidance to the gain on the azimuth guidance-angle error in the roll command. Changes to the guidance algorithm are also given that permit pi-radian approaches and that constrain the airplane to fly in a specified plane defined by the position of the airplane at the start of letdown and the flare point.
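A minimal sketch of the time-limited integral term in the roll command; the gains and the 35-second cutoff are illustrative placeholders, not the report's values:

```python
def roll_command(roll_err, t, integ, dt, kp=1.0, ki=0.2, t_int=35.0):
    """Roll command with the integral of roll error active only over the
    first t_int seconds of flight, per the scheme described above."""
    if t < t_int:
        integ += roll_err * dt      # accumulate roll error early in the approach
    return kp * roll_err + ki * integ, integ

cmd, integ, t, dt = 0.0, 0.0, 0.0, 0.1
for _ in range(5):
    cmd, integ = roll_command(roll_err=0.02, t=t, integ=integ, dt=dt)
    t += dt
print(cmd)
```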
NASA Astrophysics Data System (ADS)
Bog, Tino; Zander, Nils; Kollmannsberger, Stefan; Rank, Ernst
2018-04-01
The finite cell method (FCM) is a fictitious domain approach that greatly simplifies simulations involving complex structures. Recently, the FCM has been applied to contact problems. The current study continues this line of work by extending the concept of weakly enforced boundary conditions to inequality constraints for frictionless contact. Furthermore, it formalizes an approach that automatically recovers high-order contact surfaces of (implicitly defined) embedded geometries by means of an extended Marching Cubes algorithm. To further improve the accuracy of the discretization, irregularities at the boundary of contact zones are treated with multi-level hp-refinements. Numerical results and a systematic study of h-, p-, and hp-refinements show that the FCM can efficiently provide accurate results for problems involving contact.
B-dot algorithm steady-state motion performance
NASA Astrophysics Data System (ADS)
Ovchinnikov, M. Yu.; Roldugin, D. S.; Tkachev, S. S.; Penkov, V. I.
2018-05-01
Satellite attitude motion subject to the well-known B-dot magnetic control law is considered. Unlike the majority of studies, the present work focuses on a slowly rotating spacecraft and determines the attitude and angular velocity acquired after detumbling. This task is performed using two relatively simple geomagnetic field models. First, the satellite is considered in the simplified dipole model: an asymptotically stable rotation around the axis of maximum moment of inertia is found, and the direction of this axis in inertial space and the rotation rate are determined. The result is then refined using the direct dipole geomagnetic field model: the simple stable rotation becomes a periodic motion, and the rotation rate is also refined. Numerical analysis including the gravitational torque and the inclined dipole model verifies the analytical results.
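The classical B-dot law itself is simple to state in code: command a magnetic dipole opposing the measured rate of change of the field in body axes, m = -k dB/dt, giving torque T = m x B. A minimal sketch (the gain and sample readings are illustrative):

```python
import numpy as np

def bdot_dipole(b_body, b_body_prev, dt, k=1.0e5):
    """Classical B-dot detumbling law: m = -k * dB/dt in body axes.
    The resulting control torque is T = m x B."""
    b_dot = (b_body - b_body_prev) / dt    # finite-difference estimate of dB/dt
    m = -k * b_dot
    torque = np.cross(m, b_body)
    return m, torque

b_prev = np.array([2.0e-5, 1.0e-5, -3.0e-5])   # tesla, sample magnetometer readings
b_now = np.array([2.1e-5, 0.9e-5, -3.0e-5])
print(bdot_dipole(b_now, b_prev, dt=0.1))
```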
Sun, Shanhui; Sonka, Milan; Beichel, Reinhard R.
2013-01-01
Recently, the optimal surface finding (OSF) and layered optimal graph image segmentation of multiple objects and surfaces (LOGISMOS) approaches have been reported with applications to medical image segmentation tasks. While providing high levels of performance, these approaches may locally fail in the presence of pathology or other local challenges. Due to the image data variability, finding a suitable cost function that would be applicable to all image locations may not be feasible. This paper presents a new interactive refinement approach for correcting local segmentation errors in the automated OSF-based segmentation. A hybrid desktop/virtual reality user interface was developed for efficient interaction with the segmentations, utilizing state-of-the-art stereoscopic visualization technology and advanced interaction techniques. The user interface allows natural, interactive manipulation of 3-D surfaces. The approach was evaluated on 30 test cases from 18 CT lung datasets, which showed local segmentation errors after employing an automated OSF-based lung segmentation. The experiments exhibited a significant increase in performance in terms of mean absolute surface distance errors (2.54 ± 0.75 mm prior to refinement vs. 1.11 ± 0.43 mm post-refinement, p ≪ 0.001). Speed of the interactions is one of the most important aspects leading to the acceptance or rejection of the approach by users expecting a real-time interaction experience. The average algorithm computing time per refinement iteration was 150 ms, and the average total user interaction time required for reaching complete operator satisfaction per case was about 2 min. This time was mostly spent on human-controlled manipulation of the object to identify whether additional refinement was necessary and to approve the final segmentation result. The reported principle is generally applicable to segmentation problems beyond lung segmentation in CT scans as long as the underlying segmentation utilizes the OSF framework. The two reported segmentation refinement tools were optimized for lung segmentation and might need some adaptation for other application domains. PMID:23415254
New method of 2-dimensional metrology using mask contouring
NASA Astrophysics Data System (ADS)
Matsuoka, Ryoichi; Yamagata, Yoshikazu; Sugiyama, Akiyuki; Toyoda, Yasutaka
2008-10-01
We have developed a new method of accurately profiling and measuring a mask shape by utilizing a Mask CD-SEM. The method is intended to realize high accuracy, stability, and reproducibility of the Mask CD-SEM by adopting an edge-detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. In comparison with conventional image-processing methods for contour profiling, this edge-detection method can create profiles with much higher accuracy, comparable to CD-SEM measurement of semiconductor devices. The method realizes two-dimensional metrology for refined patterns that had been difficult to measure conventionally, by utilizing high-precision contour profiles. In this report, we introduce the algorithm in general, experimental results, and practical applications. As shrinkage of the design rule for semiconductor devices has advanced, aggressive OPC (Optical Proximity Correction) has become indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data-processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge in mask-making cost have become big concerns for device manufacturers. That is to say, quality demands are becoming strenuous because of the enormous growth in data volume that accompanies increasingly refined patterns in photomask manufacture. As a result, massive numbers of simulated errors occur in mask inspection, causing longer mask production and inspection periods, higher costs, and longer delivery times. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered on the mask business. To cope with this problem, we propose two-dimensional metrology for refined patterns as the best method for a DFM solution.
Spectral types of four binaries based on photometric observations
NASA Astrophysics Data System (ADS)
Shimanskii, V. V.; Bikmaev, I. F.; Borisov, N. V.; Vlasyuk, V. V.; Galeev, A. I.; Sakhibullin, N. A.; Spiridonova, O. I.
2008-09-01
We present the results of photometric and spectroscopic observations of four close binaries with subdwarf B components: PG 0918+029, PG 1000+408, PG 1116+301, and PG 0001+275. We discovered that PG 1000+408 is a close binary, with the most probable orbital period being Porb = 1.041145 days. Based on a comparison of the observed light curves at selected orbital phases with theoretical predictions of their variations, all the systems are classified as doubly degenerate binaries with low-luminosity white-dwarf secondaries.
Shoemaker, W C; Patil, R; Appel, P L; Kram, H B
1992-11-01
A generalized decision tree, or clinical algorithm, for the treatment of high-risk elective surgical patients was developed from a physiologic model based on empirical data. First, a large data bank was used to: (1) describe temporal hemodynamic and oxygen transport patterns that interrelate cardiac, pulmonary, and tissue perfusion functions in survivors and nonsurvivors; (2) define optimal therapeutic goals based on the supranormal oxygen transport values of high-risk postoperative survivors; (3) compare the relative effectiveness of alternative therapies in a wide variety of clinical and physiologic conditions; and (4) develop criteria for titration of therapy to the endpoints of the supranormal optimal goals, using cardiac index (CI), oxygen delivery (DO2), and oxygen consumption (VO2) as proxy outcome measures. Second, a general-purpose algorithm was generated from these data and tested in preoperatively randomized clinical trials of high-risk surgical patients. Improved outcome was demonstrated with this generalized algorithm. The concept that the supranormal values represent compensations that have survival value has been corroborated by several other groups. We now propose a unique approach to refining the generalized algorithm to develop customized algorithms and individualized decision analysis for each patient's unique problems. The present article describes a preliminary evaluation of the feasibility of artificial intelligence techniques to accomplish individualized algorithms that may further improve patient care and outcome.
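For illustration only, a toy rendering of titration toward supranormal endpoints; the goal values are the commonly cited Shoemaker targets, while the step ordering and hematocrit threshold below are hypothetical placeholders, not the paper's actual decision tree:

```python
def meets_supranormal_goals(ci, do2, vo2,
                            ci_goal=4.5, do2_goal=600.0, vo2_goal=170.0):
    """Check values against supranormal endpoints: ci in L/min/m^2,
    do2 and vo2 in mL/min/m^2. Goal numbers are the commonly cited
    Shoemaker targets, used here for illustration."""
    return ci >= ci_goal and do2 >= do2_goal and vo2 >= vo2_goal

def next_step(ci, do2, vo2, hct):
    """Toy ordering of therapy steps; thresholds are hypothetical."""
    if meets_supranormal_goals(ci, do2, vo2):
        return "maintain current therapy"
    if hct < 30:
        return "transfuse packed red cells"
    return "give fluid challenge; add inotrope if CI goal still unmet"

print(next_step(ci=3.2, do2=450.0, vo2=120.0, hct=27))
```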
Portable Health Algorithms Test System
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.
2010-01-01
A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test-data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid-prototyping capability is sufficient to support the development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test-rig data, with the ability to augment or modify the data stream (e.g., inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test-data playback, and system simulations, and it also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.
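A sketch of the data-stream augmentation idea, not PHALT's actual interface; the channel names and generator protocol are invented for illustration:

```python
def inject_fault(stream, channel, t_fault, bias):
    """Wrap a playback data stream and superimpose a simulated sensor-bias
    fault on one channel after t_fault, sketching the kind of stream
    modification described for diagnostic-algorithm testing."""
    for t, sample in stream:
        if t >= t_fault:
            sample = dict(sample, **{channel: sample[channel] + bias})
        yield t, sample

playback = ((0.1 * i, {"speed": 1000.0 + i, "temp": 300.0}) for i in range(10))
for t, s in inject_fault(playback, "temp", t_fault=0.5, bias=25.0):
    print(f"{t:.1f}s {s}")
```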
Konkaev, A K; Eltaeva, A A; Zabolotskikh, I B; Musaeva, T S; Dibvik, L Z; Kuklin, V N
2016-11-01
The Efficacy Safety Score (ESS) with a "call-out algorithm" developed in Kongsberg hospital, Norway, was used for the validation. The ESS is the sum of scores from 2 subjective parameters (Visual Analog Scale, VAS, at rest and during mobilization) and 4 vital parameters (conscious level, PONV, circulation and respiration status); ESS > 10 is a "call-out alarm" for a visit to the patient by an anaesthesiologist. Hourly registration of ESS, mobility degree and amounts of analgesics during the first 8 hours after surgery was recorded in a specially designed iPad program. According to the type of anaesthesia, all patients were allocated to 4 groups: I, spinal anaesthesia (SA); II, general anaesthesia (GA); III, peripheral blockade (PB); and IV, total intravenous anaesthesia (TIVA). A total of 223 patients were included in the study. Statistically lower levels of both VAS and ESS in the first 2-4 postoperative hours were found in the SA and PB groups compared to the GA and TIVA groups. During the 8 postoperative hours, VAS > 3 was recorded in 10.5% of SA, 13.9% of GA, 12.8% of PB and 23.5% of TIVA patients. Intramuscular postoperative analgesia was effective in the SA, GA and PB groups. More attention from the anaesthesiologist must be paid to patients after TIVA.
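Since the ESS is a plain sum with a call-out threshold, a minimal sketch is easy to give; the individual per-parameter scoring scales below are assumptions, and only the sum-then-threshold structure follows the description above.

```python
# A minimal sketch of the ESS "call-out" computation; the per-parameter
# scoring scales are assumptions for illustration.
def efficacy_safety_score(vas_rest, vas_mobilized, conscious, ponv,
                          circulation, respiration):
    """Sum subjective (VAS) and vital-parameter scores; ESS > 10 triggers
    a call-out alarm for an anaesthesiologist visit."""
    return (vas_rest + vas_mobilized + conscious + ponv
            + circulation + respiration)

ess = efficacy_safety_score(vas_rest=3, vas_mobilized=5, conscious=1,
                            ponv=2, circulation=0, respiration=1)
print(ess, "-> call out anaesthesiologist" if ess > 10 else "-> observe")
```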
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time-marching algorithm. The step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinement for accurate prediction of damage levels and failure time.
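The predictor-corrector step-size control can be illustrated on a scalar ODE. This is a minimal sketch, assuming an explicit-Euler predictor, a trapezoidal corrector, and a standard safety-factor update rule; the paper's elasto-viscoplastic setting is of course far richer.

```python
# A minimal sketch of error-controlled adaptive time stepping with an
# explicit-Euler predictor and trapezoidal corrector; the ODE, tolerance,
# and step-update rule are illustrative assumptions.
def integrate(f, y, t, t_end, dt=1e-2, tol=1e-5):
    steps = [(t, y)]
    while t < t_end:
        dt = min(dt, t_end - t)
        y_pred = y + dt * f(t, y)                              # predictor
        y_corr = y + 0.5 * dt * (f(t, y) + f(t + dt, y_pred))  # corrector
        err = abs(y_corr - y_pred)          # local error indicator
        if err <= tol:                      # accept the step
            t, y = t + dt, y_corr
            steps.append((t, y))
        # grow/shrink the step from the error estimate (safety factor 0.9)
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-15)) ** 0.5))
    return steps

# Example: exponential decay y' = -5y, exact y(1) = exp(-5) ~ 0.00674
traj = integrate(lambda t, y: -5.0 * y, 1.0, 0.0, 1.0)
print(len(traj) - 1, "accepted steps, y(1) ~", round(traj[-1][1], 5))
```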
NASA Astrophysics Data System (ADS)
Li, Gaohua; Fu, Xiang; Wang, Fuxin
2017-10-01
The low-dissipation high-order accurate hybrid upwinding/central scheme based on fifth-order weighted essentially non-oscillatory (WENO) and sixth-order central schemes, along with the Spalart-Allmaras (SA)-based delayed detached eddy simulation (DDES) turbulence model and flow-feature-based adaptive mesh refinement (AMR), are implemented into a dual-mesh overset grid infrastructure with parallel computing capabilities, for the purpose of simulating vortex-dominated unsteady detached wake flows with high spatial resolution. The overset grid assembly (OGA) process, based on collection detection theory and an implicit hole-cutting algorithm, achieves automatic coupling of the near-body and off-body solvers, and a trial-and-error method is used for obtaining a globally balanced load distribution among the composed multiple codes. The results for flows over a high-Reynolds-number cylinder and a two-bladed helicopter rotor show that the combination of a high-order hybrid scheme, an advanced turbulence model, and overset adaptive mesh refinement can effectively enhance the spatial resolution for the simulation of turbulent wake eddies.
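The WENO half of such a hybrid scheme admits a compact illustration. The sketch below implements the standard fifth-order WENO-JS interface reconstruction (Jiang-Shu smoothness indicators, linear weights 1/10, 6/10, 3/10); the switch to the sixth-order central scheme in smooth regions is omitted, and nothing here reproduces the paper's solver.

```python
import numpy as np

def weno5_reconstruct(v):
    """Fifth-order WENO-JS reconstruction of the interface value v_{i+1/2}
    from the five-point stencil v = (v[i-2], ..., v[i+2])."""
    eps = 1e-6
    # candidate third-order reconstructions on the three substencils
    p0 = (2*v[0] - 7*v[1] + 11*v[2]) / 6.0
    p1 = (-v[1] + 5*v[2] + 2*v[3]) / 6.0
    p2 = (2*v[2] + 5*v[3] - v[4]) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(v[0]-2*v[1]+v[2])**2 + 0.25*(v[0]-4*v[1]+3*v[2])**2
    b1 = 13/12*(v[1]-2*v[2]+v[3])**2 + 0.25*(v[1]-v[3])**2
    b2 = 13/12*(v[2]-2*v[3]+v[4])**2 + 0.25*(3*v[2]-4*v[3]+v[4])**2
    d = np.array([0.1, 0.6, 0.3])                  # linear (optimal) weights
    alpha = d / (eps + np.array([b0, b1, b2]))**2  # nonlinear weights
    w = alpha / alpha.sum()
    return w @ np.array([p0, p1, p2])

smooth = np.sin(0.1 * np.arange(5))          # smooth data: w ~ linear weights
shock = np.array([1.0, 1.0, 1.0, 0.0, 0.0])  # discontinuous data
print(weno5_reconstruct(smooth), weno5_reconstruct(shock))
```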
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, predominantly by applying traditional optimization theory, with the cross-sectional area of each member optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. The technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation, genetic algorithms and differential evolution, to optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis, and it is wrapped in an evolutionary computing optimization toolset.
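Differential evolution, one of the two evolutionary methods named above, is easy to sketch. The toy objective below (fixed member lengths, a surrogate stress penalty) and all DE settings are assumptions for illustration, not the TP's formulation.

```python
# A minimal sketch of differential evolution minimizing a toy truss-mass
# objective; the objective, penalty, and DE settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def mass_with_penalty(areas):
    """Toy objective: total member mass plus a penalty when a surrogate
    stress constraint (inversely proportional to area) is violated."""
    lengths = np.array([1.0, 1.4, 1.0, 1.4, 1.0])
    stress = 1.0 / np.maximum(areas, 1e-9)
    penalty = 1e3 * np.sum(np.maximum(stress - 25.0, 0.0) ** 2)
    return float(lengths @ areas + penalty)

def differential_evolution(f, dim=5, pop=30, gens=200, F=0.8, CR=0.9):
    x = rng.uniform(0.01, 1.0, (pop, dim))
    fx = np.array([f(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            idx = [j for j in range(pop) if j != i]
            a, b, c = x[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0.01, 1.0)     # mutation
            cross = rng.random(dim) < CR                     # crossover mask
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, x[i])
            ft = f(trial)
            if ft < fx[i]:                                   # greedy selection
                x[i], fx[i] = trial, ft
    best = int(np.argmin(fx))
    return x[best], fx[best]

areas, mass = differential_evolution(mass_with_penalty)
print("optimized areas:", np.round(areas, 3), "mass:", round(mass, 3))
```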
Intrusion-Tolerant Location Information Services in Intelligent Vehicular Networks
NASA Astrophysics Data System (ADS)
Yan, Gongjun; Yang, Weiming; Shaner, Earl F.; Rawat, Danda B.
Intelligent Vehicular Networks, known as Vehicle-to-Vehicle and Vehicle-to-Roadside wireless communications (also called Vehicular Ad hoc Networks), are revolutionizing our daily driving with better safety and more infotainment. Most, if not all, applications will depend on accurate location information. Thus, it is important to provide intrusion-tolerant location information services. In this paper, we describe an adaptive algorithm that detects and filters false location information injected by intruders. Given a noisy environment of mobile vehicles, the algorithm estimates the high-resolution location of a vehicle by refining low-resolution location input. We also investigate simulation results and evaluate the quality of the intrusion-tolerant location service.
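One way to picture detect-and-filter location refinement is a robust outlier test. The sketch below uses a median-deviation rule as an assumed stand-in for the paper's adaptive algorithm; the reports and the threshold k are invented.

```python
# A minimal sketch of filtering injected false position reports and refining
# a vehicle's location estimate; the median-deviation test is an assumed
# stand-in for the paper's adaptive algorithm.
import statistics

def refine_location(reports, k=3.0):
    """Reject reports far from the median (likely intrusions), then average
    the survivors to get a higher-resolution estimate."""
    xs = [r[0] for r in reports]; ys = [r[1] for r in reports]
    mx, my = statistics.median(xs), statistics.median(ys)
    dev = [abs(x - mx) + abs(y - my) for x, y in reports]
    scale = statistics.median(dev) or 1e-9
    kept = [(x, y) for (x, y), d in zip(reports, dev) if d <= k * scale]
    est = (sum(x for x, _ in kept) / len(kept),
           sum(y for _, y in kept) / len(kept))
    return est, len(reports) - len(kept)

reports = [(10.1, 5.2), (9.9, 5.0), (10.0, 5.1), (42.0, -7.0)]  # last is false
estimate, rejected = refine_location(reports)
print(f"estimate={estimate}, rejected={rejected} suspicious report(s)")
```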
The use of Landsat data to inventory cotton and soybean acreage in North Alabama
NASA Technical Reports Server (NTRS)
Downs, S. W., Jr.; Faust, N. L.
1980-01-01
This study was performed to determine if Landsat data could be used to improve the accuracy of the estimation of cotton acreage. A linear classification algorithm and a maximum likelihood algorithm were used for computer classification of the area, and the classification was compared with ground truth. The classification accuracy for some fields was greater than 90 percent; however, the overall accuracy was 71 percent for cotton and 56 percent for soybeans. The results of this research indicate that computer analysis of Landsat data has potential for improving upon the methods presently being used to determine cotton acreage; however, additional experiments and refinements are needed before the method can be used operationally.
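The maximum-likelihood classifier mentioned above assigns each pixel to the class with the highest Gaussian likelihood. In the sketch below the two-band class statistics are invented training values, not Landsat measurements.

```python
# A minimal sketch of Gaussian maximum-likelihood land-cover classification;
# the per-class statistics are invented for illustration.
import numpy as np

classes = {  # per-class mean vector and covariance for two spectral bands
    "cotton":   (np.array([62.0, 118.0]), np.array([[40.0, 12.0], [12.0, 55.0]])),
    "soybeans": (np.array([55.0, 101.0]), np.array([[35.0, 10.0], [10.0, 48.0]])),
}

def classify(pixel):
    """Assign the class maximizing the Gaussian log-likelihood."""
    best, best_ll = None, -np.inf
    for name, (mu, cov) in classes.items():
        d = pixel - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
        if ll > best_ll:
            best, best_ll = name, ll
    return best

print(classify(np.array([60.0, 115.0])))   # -> likely "cotton"
```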
A parallel finite-difference method for computational aerodynamics
NASA Technical Reports Server (NTRS)
Swisshelm, Julie M.
1989-01-01
A finite-difference scheme for solving complex three-dimensional aerodynamic flow on parallel-processing supercomputers is presented. The method consists of a basic flow solver with multigrid convergence acceleration, embedded grid refinements, and a zonal equation scheme. Multitasking and vectorization have been incorporated into the algorithm. Results obtained include multiprocessed flow simulations from the Cray X-MP and Cray-2. Speedups as high as 3.3 for the two-dimensional case and 3.5 for segments of the three-dimensional case have been achieved on the Cray-2. The entire solver attained a factor of 2.7 improvement over its unitasked version on the Cray-2. The performance of the parallel algorithm on each machine is analyzed.
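Multigrid convergence acceleration can be shown on the simplest model problem. The sketch below runs V-cycles for a 1D Poisson equation with a weighted-Jacobi smoother; the grid size, smoother, and cycle depth are assumptions, and the paper's three-dimensional zonal solver is far more elaborate.

```python
import numpy as np

def smooth(u, f, h, iters=3):
    """Weighted-Jacobi smoothing sweeps for -u'' = f with u = 0 at the ends."""
    for _ in range(iters):
        u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def v_cycle(u, f, h):
    u = smooth(u, f, h)                           # pre-smoothing
    if u.size <= 3:
        return u
    r = np.zeros_like(u)                          # residual r = f - A u
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    ec = v_cycle(np.zeros_like(r[::2]), r[::2].copy(), 2 * h)  # coarse solve
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
    return smooth(u + e, f, h)                    # correct and post-smooth

n, h = 65, 1.0 / 64
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)                             # -u'' = f, exact u = f / pi^2
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error vs analytic solution:", np.abs(u - f / np.pi ** 2).max())
```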
Oceanic tidal signals in magnetic satellite data
NASA Astrophysics Data System (ADS)
Wardinski, I.; Lesur, V.
2015-12-01
In this study we discuss the observation of oceanic tidal signals in magnetic satellite data. We analyse 10 years of CHAMP satellite data. The detection algorithm is applied to the residual signal that remains after the derivation of GRIMM 42 (Lesur et al., 2015). The signals found represent the major tidal constituents, such as the M2 tide. However, other tidal constituents appear to be masked by unmodelled external and induced magnetic signals, particularly in equatorial and circumpolar regions. A part of the study also focuses on the temporal variability of the signal detection and its dependence on geomagnetic activity. Possible refinements to the detection algorithm and its applicability to Swarm data are also presented and discussed.
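Detecting a known tidal constituent in a residual series is, at its core, harmonic regression. The sketch below recovers a synthetic M2 signal by least squares; the series, noise level, and single-constituent design matrix are assumptions, not the study's detection algorithm.

```python
# A minimal sketch of detecting a tidal constituent in a residual time series
# by least-squares harmonic fitting; the series and noise level are synthetic.
import numpy as np

M2_PERIOD_H = 12.4206012           # M2 period in hours
t = np.arange(0.0, 24 * 365, 1.0)  # one year of hourly samples

rng = np.random.default_rng(1)
signal = 2.0 * np.cos(2*np.pi*t/M2_PERIOD_H - 0.7) + rng.normal(0, 5.0, t.size)

# design matrix for a single constituent: [cos, sin] at the M2 frequency
w = 2 * np.pi / M2_PERIOD_H
A = np.column_stack([np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
amp = np.hypot(*coef)
phase = np.arctan2(coef[1], coef[0])
print(f"recovered M2 amplitude ~ {amp:.2f} (true 2.0), phase ~ {phase:.2f}")
```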
Recursive inverse factorization.
Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N
2008-03-14
A recursive algorithm for the inverse factorization S⁻¹ = ZZ* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A. M. N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
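The iterative-refinement kernel can be illustrated with a dense toy example. The Newton-Schulz-style update below drives Z*SZ toward the identity so that ZZ* approaches S⁻¹; it is a minimal stand-in for the cited refinement scheme, with the matrix size and starting guess chosen for demonstration.

```python
# A minimal sketch of iteratively refining an inverse factor Z with
# S^-1 = Z Z*; this Newton-Schulz-style update is an assumed stand-in
# for the iterative refinement scheme cited above.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 50))
S = A @ A.T + 50 * np.eye(50)                    # Hermitian positive definite

Z = np.eye(50) / np.sqrt(np.linalg.norm(S, 2))   # scaled initial guess
for _ in range(60):
    delta = np.eye(50) - Z.T @ S @ Z             # deviation from Z* S Z = I
    Z = Z @ (np.eye(50) + 0.5 * delta)           # refinement step
    if np.linalg.norm(delta) < 1e-12:
        break

print("|| Z Z^T S - I || =", np.linalg.norm(Z @ Z.T @ S - np.eye(50)))
```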
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.
1979-01-01
The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and compute thermal and electric energy consumption and cost. This article reports on new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce the processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of the computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.
Automated Optimization of Potential Parameters
Di Pierro, Michele; Elber, Ron
2013-01-01
An algorithm and software to refine parameters of empirical energy functions according to condensed phase experimental measurements are discussed. The algorithm is based on sensitivity analysis and local minimization of the differences between experiment and simulation as a function of potential parameters. It is illustrated for a toy problem of alanine dipeptide and is applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides that are modeled with the usual functional form of a force field. PMID:24015115
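Sensitivity-driven local minimization of the experiment-simulation gap has a compact generic form. In the sketch below, the two-parameter "simulate" model and the target values are synthetic stand-ins for force-field observables; finite-difference sensitivities feed a damped Gauss-Newton update.

```python
# A minimal sketch of parameter refinement by sensitivity analysis:
# finite-difference derivatives of simulated observables with respect to
# the parameters drive a Gauss-Newton step toward experimental targets.
# The "simulate" model is a synthetic stand-in, not a force-field calculation.
import numpy as np

def simulate(params):
    """Toy observables (think: helix fraction, melting slope) as a smooth
    function of two potential parameters."""
    a, b = params
    return np.array([np.tanh(a - 0.5 * b), 0.3 * a + b ** 2])

target = simulate(np.array([0.9, 0.55]))   # "experimental" values (synthetic)
p = np.array([1.0, 0.2])                   # initial parameter guess

for _ in range(20):
    r = simulate(p) - target               # residuals vs experiment
    eps = 1e-6                             # sensitivity matrix J_ij = dO_i/dp_j
    J = np.column_stack([(simulate(p + eps * np.eye(2)[j]) - simulate(p)) / eps
                         for j in range(2)])
    p = p - np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), J.T @ r)  # Gauss-Newton
print("refined parameters:", np.round(p, 4), "residual:", simulate(p) - target)
```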
NASA Astrophysics Data System (ADS)
Govorov, Michael; Gienko, Gennady; Putrenko, Viktor
2018-05-01
In this paper, several supervised machine learning algorithms were explored to define homogeneous regions of concentration of uranium in surface waters in Ukraine using multiple environmental parameters. The previous study was focused on finding the primary environmental parameters related to uranium in ground waters using several methods of spatial statistics and unsupervised classification. At this step, we refined the regionalization using Artificial Neural Network (ANN) techniques including the Multilayer Perceptron (MLP), Radial Basis Function (RBF), and Convolutional Neural Network (CNN). The study is focused on building local ANN models, which may significantly improve the prediction results of machine learning algorithms by taking into consideration non-stationarity and autocorrelation in spatial data.
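As a concrete, if toy, counterpart of the MLP approach, the sketch below classifies synthetic grid cells into two regions from four made-up environmental covariates using scikit-learn; nothing here uses the study's data.

```python
# A minimal sketch of MLP-based regionalization: classify cells into
# concentration regions from environmental covariates. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))                          # 4 environmental covariates
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)   # synthetic region label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```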
Spectral Analysis of PG 1034+001, the Exciting Star of Hewett 1
NASA Technical Reports Server (NTRS)
Kruk, J. W.; Mahsereci, M.; Ringat, E.; Rauch, T.; Werner, K.
2011-01-01
PG 1034+001 is an extremely hot, helium-rich DO-type star that excites the planetary nebula Hewett 1 and large parts of the surrounding interstellar medium. We present preliminary results of an ongoing spectral analysis by means of non-LTE model atmospheres that consider most elements from hydrogen to nickel. This analysis is based on high-resolution ultraviolet (FUSE, IUE) and optical (VLT/UVES, Keck) data. The results are compared with those of PG 1034+001's spectroscopic twin, the DO star PG 0038+199. Keywords: stars: abundances, stars: AGB and post-AGB, stars: atmospheres, stars: evolution, stars: individual (PG 1034+001, PG 0038+199), planetary nebulae: individual (Hewett 1)
Lahola-Chomiak, Adrian A; Walter, Michael A
2018-01-01
We explore the ideas and advances surrounding the genetic basis of pigment dispersion syndrome (PDS) and pigmentary glaucoma (PG). As PG is the leading cause of nontraumatic blindness in young adults and current tailored interventions have proven ineffective, a better understanding of the underlying causes of PDS, PG, and their relationship is essential. Despite PDS being a subclinical disease, a large proportion of patients progress to PG with associated vision loss. Decades of research have supported a genetic component both for PDS and conversion to PG. We review the body of evidence supporting a genetic basis in humans and animal models and reevaluate classical mechanisms of PDS/PG considering this new evidence. PMID:29780638
Data-directed RNA secondary structure prediction using probabilistic modeling
Deng, Fei; Ledda, Mirko; Vaziri, Sana; Aviran, Sharon
2016-01-01
Structure dictates the function of many RNAs, but secondary RNA structure analysis is either labor intensive and costly or relies on computational predictions that are often inaccurate. These limitations are alleviated by integration of structure probing data into prediction algorithms. However, existing algorithms are optimized for a specific type of probing data. Recently, new chemistries combined with advances in sequencing have facilitated structure probing at unprecedented scale and sensitivity. These novel technologies and anticipated wealth of data highlight a need for algorithms that readily accommodate more complex and diverse input sources. We implemented and investigated a recently outlined probabilistic framework for RNA secondary structure prediction and extended it to accommodate further refinement of structural information. This framework utilizes direct likelihood-based calculations of pseudo-energy terms per considered structural context and can readily accommodate diverse data types and complex data dependencies. We use real data in conjunction with simulations to evaluate performances of several implementations and to show that proper integration of structural contexts can lead to improvements. Our tests also reveal discrepancies between real data and simulations, which we show can be alleviated by refined modeling. We then propose statistical preprocessing approaches to standardize data interpretation and integration into such a generic framework. We further systematically quantify the information content of data subsets, demonstrating that high reactivities are major drivers of SHAPE-directed predictions and that better understanding of less informative reactivities is key to further improvements. Finally, we provide evidence for the adaptive capability of our framework using mock probe simulations. PMID:27251549
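To make the pseudo-energy idea concrete, the sketch below converts a probing reactivity into a pairing penalty via the log-likelihood ratio of assumed paired/unpaired reactivity distributions; the gamma parameters are invented, not fitted to real SHAPE data.

```python
# A minimal sketch of likelihood-based pseudo-energies: the penalty for
# pairing a nucleotide is the log ratio of paired vs unpaired reactivity
# likelihoods. The gamma parameters below are invented for illustration.
import math
from scipy.stats import gamma

paired = gamma(a=1.0, scale=0.3)    # assumed reactivity model when paired
unpaired = gamma(a=2.0, scale=1.0)  # assumed reactivity model when unpaired

def pseudo_energy(reactivity, kT=0.6):   # kT in kcal/mol near 310 K (approx.)
    """Positive values penalize pairing a highly reactive nucleotide."""
    return -kT * math.log(paired.pdf(reactivity) / unpaired.pdf(reactivity))

for r in (0.05, 0.5, 2.0):
    print(f"reactivity {r:4.2f} -> pseudo-energy {pseudo_energy(r):+.2f} kcal/mol")
```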
3D forward modeling and response analysis for marine CSEMs towed by two ships
NASA Astrophysics Data System (ADS)
Zhang, Bo; Yin, Chang-Chun; Liu, Yun-He; Ren, Xiu-Yan; Qi, Yan-Fu; Cai, Jing
2018-03-01
A dual-ship-towed marine electromagnetic (EM) system is a new marine exploration technology recently being developed in China. Compared with traditional marine EM systems, the new system tows the transmitters and receivers using two ships, rendering it unnecessary to position EM receivers at the seafloor in advance. This makes the system more flexible, allowing for different configurations (e.g., in-line, broadside, azimuthal, and concentric scanning) that can produce more detailed underwater structural information. We develop a three-dimensional goal-oriented adaptive forward modeling method for the new marine EM system and analyze the responses for four survey configurations. Ocean-bottom topography has a strong effect on the marine EM responses; thus, we develop a forward modeling algorithm based on the finite-element method and unstructured grids. To satisfy the requirements for modeling the moving transmitters of a dual-ship-towed EM system, we use a single mesh for each of the transmitter locations. This mitigates the mesh complexity by refining the grids near the transmitters and minimizes the computational cost. To generate a rational mesh while maintaining accuracy for a single transmitter, we develop a goal-oriented adaptive method with separate mesh refinements for areas around the transmitting source and those far away. To test the modeling algorithm and its accuracy, we compare the EM responses calculated by the proposed algorithm with semi-analytical results and with results from published sources. Furthermore, by analyzing the EM responses for four survey configurations, we confirm that, compared with traditional marine EM systems with only an in-line array, a dual-ship-towed marine system can collect more data.
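The estimate-mark-refine loop behind such adaptive meshing can be reduced to one dimension. In the sketch below the error indicator is simple interpolation error around a sharp "transmitter-like" feature; the function, marking fraction, and sweep count are assumptions, not the paper's goal-oriented estimator.

```python
# A schematic of the estimate-mark-refine loop behind adaptive meshing,
# reduced to 1D interpolation of a function with a sharp localized feature;
# the error indicator and marking fraction are assumptions.
import numpy as np

f = lambda x: np.exp(-200.0 * (x - 0.3) ** 2)   # sharp feature at x = 0.3

nodes = np.linspace(0.0, 1.0, 11)
for sweep in range(8):
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    # indicator: deviation of midpoint value from linear interpolation
    eta = np.abs(f(mid) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
    marked = eta > 0.25 * eta.max()             # mark the worst elements
    nodes = np.sort(np.concatenate([nodes, mid[marked]]))  # bisect them
print(f"final mesh: {nodes.size} nodes, "
      f"min spacing {np.diff(nodes).min():.4f} near the sharp feature")
```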
Gisdon, Florian J; Culka, Martin; Ullmann, G Matthias
2016-10-01
Conjugate peak refinement (CPR) is a powerful and robust method to search for transition states on a molecular potential energy surface. Nevertheless, to the best of our knowledge, the method had so far been implemented only in CHARMM. In this paper, we present PyCPR, a new Python-based implementation of the CPR algorithm within the pDynamo framework. We provide a detailed description of the theory underlying our implementation and discuss the different parts of the implementation. The method is applied to two different problems. First, we illustrate the method by analyzing the gauche to anti-periplanar transition of butane using a semiempirical QM method. Second, we reanalyze the mechanism of a glycyl-radical enzyme, namely 4-hydroxyphenylacetate decarboxylase (HPD), using QM/MM calculations. In the end, we suggest a strategy for using our implementation of the CPR algorithm. The integration of PyCPR into the pDynamo framework allows the combination of CPR with the large variety of methods implemented in pDynamo. PyCPR can be used in combination with quantum mechanical and molecular mechanical methods (and hybrid methods) implemented directly in pDynamo, but also in combination with external programs such as ORCA, using pDynamo as an interface. PyCPR is distributed as free, open-source software and can be downloaded from http://www.bisb.uni-bayreuth.de/index.php?page=downloads . Graphical abstract: PyCPR is a search tool for finding saddle points on the potential energy landscape of a molecular system.
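The core CPR move, maximize along the current path and then minimize in the remaining directions, fits in a toy example. The quartic double-well surface below and the single perpendicular minimization are assumptions for illustration; real CPR iterates this cycle with conjugate directions on molecular surfaces, which is what PyCPR implements.

```python
# A toy illustration of one CPR-style cycle: locate the energy maximum along
# the path between two minima, then minimize perpendicular to the path.
# The 2D double-well surface is an assumption, not a molecular system.
import numpy as np
from scipy.optimize import minimize_scalar

V = lambda x, y: (x**2 - 1.0)**2 + 5.0 * (y - 0.2 * x)**2

a, b = np.array([-1.0, -0.2]), np.array([1.0, 0.2])   # the two minima

# 1) maximize V along the straight path a + s (b - a), s in [0, 1]
path = lambda s: a + s * (b - a)
res = minimize_scalar(lambda s: -V(*path(s)), bounds=(0, 1), method="bounded")
peak = path(res.x)

# 2) minimize V along the direction perpendicular to the path at the peak
d = b - a
perp = np.array([-d[1], d[0]]) / np.linalg.norm(d)
res2 = minimize_scalar(lambda t: V(*(peak + t * perp)))
saddle = peak + res2.x * perp
print("approximate saddle point:", np.round(saddle, 3))   # near (0, 0)
```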
CT liver volumetry using geodesic active contour segmentation with a level-set algorithm
NASA Astrophysics Data System (ADS)
Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard
2010-03-01
Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous-phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetry based on the automated scheme agreed excellently with "gold-standard" manual volumetry (the intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F<=f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
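A morphological variant of the geodesic active contour pipeline is available in scikit-image, and a minimal sketch follows. The synthetic blob stands in for a CT slice, and the parameter values (iterations, balloon force, threshold) are assumptions; the paper's scheme additionally uses anisotropic smoothing and fast marching for initialization.

```python
# A minimal sketch of geodesic active contour segmentation with the
# morphological level-set implementation in scikit-image; the synthetic
# "organ-like" blob stands in for a real portal-venous CT slice.
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

# synthetic image: a bright ellipse ("organ") on a noisy background
yy, xx = np.mgrid[0:128, 0:128]
img = (((yy - 64) / 30.0) ** 2 + ((xx - 60) / 40.0) ** 2 < 1.0).astype(float)
img += np.random.default_rng(0).normal(0.0, 0.1, img.shape)

# edge-enhancing speed image: near zero on boundaries, ~1 in flat regions
gimg = inverse_gaussian_gradient(img, alpha=100.0, sigma=2.0)

# initial level set: a disk that generously covers the object
init = ((yy - 64) ** 2 + (xx - 64) ** 2 < 48 ** 2).astype(np.int8)

# shrink the contour (balloon=-1) until it locks onto the organ boundary
seg = morphological_geodesic_active_contour(gimg, 150, init_level_set=init,
                                            smoothing=2, balloon=-1,
                                            threshold=0.7)
print("segmented area (pixels):", int(seg.sum()))   # ~ pi*30*40 ~ 3770
```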
Biodegradation of propylene glycol and associated hydrodynamic effects in sand.
Bielefeldt, Angela R; Illangasekare, Tissa; Uttecht, Megan; LaPlante, Rosanna
2002-04-01
At airports around the world, propylene glycol (PG) based fluids are used to de-ice aircraft for safe operation. PG removal was investigated in 15-cm deep saturated sand columns. Greater than 99% PG biodegradation was achieved for all flow rates and loading conditions tested; the associated biomass growth decreased the hydraulic conductivity of the sand by 1-3 orders of magnitude until a steady-state minimum was reached. Under constant loading at 120 mg PG/d for 15-30 d, the hydraulic conductivity (K) decreased by 2-2.5 orders of magnitude when the average linear velocity of the water was 4.9-1.4 cm/h. Variable PG loading in recirculation tests resulted in slower conductivity declines and lower final steady-state conductivity than constant PG feeding. After significant sand plugging, endogenous periods of time without PG resulted in significant but partial recovery of the original conductivity. Biomass growth also increased the dispersivity of the sand.
Dannon, Pinhas N.; Lowengrub, Katherine; Gonopolski, Yehudit; Musin, Ernest; Kotler, Moshe
2006-01-01
Pathological gambling (PG) is a prevalent and highly disabling impulse-control disorder. Two dominant phenomenological models for PG have been presented in the literature. According to one model, PG is included as an obsessive-compulsive spectrum disorder, while according to the second model, PG represents a form of nonpharmacologic addiction. In this article, we present an expanded conceptualization of the phenomenology of PG. On the basis of our clinical research experience and a review of data in the field, we propose 3 subtypes of pathological gamblers: the “impulsive” subtype, the “obsessive-compulsive” subtype, and the “addictive” subtype. We also review the current pharmacologic and nonpharmacologic treatment strategies for PG. A further aim of this article is to encourage awareness of the importance of improved screening procedures for the early detection of PG. PMID:17245454
Liu, Guangqing; Xue, Mengwei; Liu, Qinpu; Zhou, Yuming
2017-01-01
A water-soluble monomer, APEG-PG-(OH)n, was produced, and the structure of APEG-PG-(OH)5 was identified by ¹H-NMR. APEG-PG-(OH)n was copolymerized with maleic anhydride (MA) to synthesize the phosphate-free and nitrogen-free calcium carbonate inhibitor MA/APEG-PG-(OH)n. The structure and thermal properties of MA/APEG-PG-(OH)5 were characterized by ¹H-NMR, GPC and TGA. The observations show that the dosage and n value of MA/APEG-PG-(OH)n play an important role in CaCO3 inhibition. MA/APEG-PG-(OH)5 displays a superior ability to inhibit the precipitation of calcium carbonate, with approximately 97% inhibition at a level of 8 mg/L. The effect on the formation of CaCO3 was investigated with a combination of SEM and XRD analysis.