Aveiro method in reproducing kernel Hilbert spaces under complete dictionary
NASA Astrophysics Data System (ADS)
Mai, Weixiong; Qian, Tao
2017-12-01
The Aveiro Method is a sparse representation method in reproducing kernel Hilbert spaces (RKHS) that gives orthogonal projections onto linear combinations of reproducing kernels over uniqueness sets. It suffers, however, from the need to determine uniqueness sets in the underlying RKHS: in general spaces, uniqueness sets are not easy to identify, and the convergence speed of the method is harder still to control. To avoid these difficulties we propose a new Aveiro Method based on a dictionary and the matching pursuit idea. In fact we do more: the new method is related to the recently proposed Pre-Orthogonal Greedy Algorithm (P-OGA), which involves completion of a given dictionary. We call the new method the Aveiro Method Under Complete Dictionary (AMUCD). The complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under a boundary vanishing condition that holds for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element of the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro Method and the greedy algorithm.
Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Haller, Harold S.
2009-01-01
It is the purpose of this study to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, for which a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data, composed of fifteen (15) cases, with a smaller subset of experimental test data containing four or five (4/5) cases, compound data sets (CFD/EXP) were generated that allow the prediction of the complete set of experimental results. No statistical differences were found between the combined (CFD/EXP) generated data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) generated data as with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.
ERIC Educational Resources Information Center
Raykov, Tenko; Lichtenberg, Peter A.; Paulson, Daniel
2012-01-01
A multiple testing procedure for examining implications of the missing completely at random (MCAR) mechanism in incomplete data sets is discussed. The approach uses the false discovery rate concept and is concerned with testing group differences on a set of variables. The method can be used for ascertaining violations of MCAR and disproving this…
Mackie, Iain D; DiLabio, Gino A
2011-10-07
The first-principles calculation of non-covalent (particularly dispersion) interactions between molecules is a considerable challenge. In this work we studied the binding energies for ten small non-covalently bonded dimers with several combinations of correlation methods (MP2, coupled-cluster single double, coupled-cluster single double (triple) (CCSD(T))), correlation-consistent basis sets (aug-cc-pVXZ, X = D, T, Q), two-point complete basis set energy extrapolations, and counterpoise corrections. For this work, complete basis set results were estimated from averaged counterpoise and non-counterpoise-corrected CCSD(T) binding energies obtained from extrapolations with aug-cc-pVQZ and aug-cc-pVTZ basis sets. It is demonstrated that, in almost all cases, binding energies converge more rapidly to the basis set limit by averaging the counterpoise and non-counterpoise corrected values than by using either counterpoise or non-counterpoise methods alone. Examination of the effect of basis set size and electron correlation shows that the triples contribution to the CCSD(T) binding energies is fairly constant with the basis set size, with a slight underestimation with CCSD(T)∕aug-cc-pVDZ compared to the value at the (estimated) complete basis set limit, and that contributions to the binding energies obtained by MP2 generally overestimate the analogous CCSD(T) contributions. Taking these factors together, we conclude that the binding energies for non-covalently bonded systems can be accurately determined using a composite method that combines CCSD(T)∕aug-cc-pVDZ with energy corrections obtained using basis set extrapolated MP2 (utilizing aug-cc-pVQZ and aug-cc-pVTZ basis sets), if all of the components are obtained by averaging the counterpoise and non-counterpoise energies. With such an approach, binding energies for the set of ten dimers are predicted with a mean absolute deviation of 0.02 kcal/mol, a maximum absolute deviation of 0.05 kcal/mol, and a mean percent absolute deviation of only 1.7%, relative to the (estimated) complete basis set CCSD(T) results. Use of this composite approach to an additional set of eight dimers gave binding energies to within 1% of previously published high-level data. It is also shown that binding within parallel and parallel-crossed conformations of naphthalene dimer is predicted by the composite approach to be 9% greater than that previously reported in the literature. The ability of some recently developed dispersion-corrected density-functional theory methods to predict the binding energies of the set of ten small dimers was also examined. © 2011 American Institute of Physics
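The composite scheme described above rests on two ingredients: a two-point extrapolation of the energies to the complete basis set limit and the averaging of counterpoise-corrected and uncorrected binding energies. The sketch below illustrates both steps in Python, assuming the common Helgaker-style inverse-cubic two-point formula with cardinal numbers X = 4 (aug-cc-pVQZ) and Y = 3 (aug-cc-pVTZ); the exact extrapolation form used by the authors may differ, and the input numbers are hypothetical.

```python
def cbs_two_point(e_large, e_small, x=4, y=3):
    """Inverse-cubic two-point extrapolation of energies obtained with basis
    sets of cardinal numbers x (larger) and y (smaller)."""
    return (x**3 * e_large - y**3 * e_small) / (x**3 - y**3)

def averaged_cbs_binding_energy(cp_q, cp_t, nocp_q, nocp_t):
    """Average the counterpoise-corrected (cp_*) and uncorrected (nocp_*)
    binding energies after extrapolating each pair to the basis-set limit."""
    return 0.5 * (cbs_two_point(cp_q, cp_t) + cbs_two_point(nocp_q, nocp_t))

# Hypothetical CCSD(T) binding energies (kcal/mol) for one dimer:
print(averaged_cbs_binding_energy(-3.10, -3.02, -3.22, -3.35))
```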
NASA Astrophysics Data System (ADS)
Bartkiewicz, Karol; Chimczak, Grzegorz; Lemr, Karel
2017-02-01
We describe a direct method for experimental determination of the negativity of an arbitrary two-qubit state with 11 measurements performed on multiple copies of the two-qubit system. Our method is based on the experimentally accessible sequences of singlet projections performed on up to four qubit pairs. In particular, our method permits the application of the Peres-Horodecki separability criterion to an arbitrary two-qubit state. We explicitly demonstrate that measuring entanglement in terms of negativity requires three measurements more than detecting two-qubit entanglement. The reported minimal set of interferometric measurements provides a complete description of bipartite quantum entanglement in terms of two-photon interference. This set is smaller than the set of 15 measurements needed to perform a complete quantum state tomography of an arbitrary two-qubit system. Finally, we demonstrate that the set of nine Makhlin's invariants needed to express the negativity can be measured by performing 13 multicopy projections. We demonstrate both that these invariants are a useful theoretical concept for designing specialized quantum interferometers and that their direct measurement within the framework of linear optics does not require performing complete quantum state tomography.
Teaching Qualitative Methods: A Face-to-Face Encounter.
ERIC Educational Resources Information Center
Keen, Mike F.
1996-01-01
Considers the complete ethnographic project as a strategy for teaching qualitative methods. Describes an undergraduate class where students chose an ethnographic setting, gathered and analyzed data, and wrote a final report. Settings included Laundromats, bingo halls, auctions, karaoke clubs, and bowling leagues. (MJP)
Niosh analytical methods for Set G
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1976-12-01
Industrial Hygiene sampling and analytical monitoring methods validated under the joint NIOSH/OSHA Standards Completion Program for Set G are contained herein. Monitoring methods for the following compounds are included: butadiene, heptane, ketene, methyl cyclohexane, octachloronaphthalene, pentachloronaphthalene, petroleum distillates, propylene dichloride, turpentine, dioxane, hexane, LPG, naphtha(coal tar), octane, pentane, propane, and stoddard solvent.
Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario
2017-06-07
By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.
Completion techniques for horizontal wells in the Pearsall Austin Chalk
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, C.D.; Handren, P.J.
1992-05-01
Oryx Energy Co. used three basic completion techniques and various combinations of them to complete 20 horizontal wells in the Pearsall Austin Chalk. The completion method selected is based on a general set of guidelines. Additionally, equipment selection and various types of workover operations are reviewed in this paper.
Asadi, Abbas; Ramírez-Campillo, Rodrigo
2016-01-01
The aim of this study was to compare the effects of 6-week cluster versus traditional plyometric training sets on jumping ability, sprint and agility performance. Thirteen college students were assigned to a cluster sets group (N=6) or traditional sets group (N=7). Both training groups completed the same training program. The traditional group completed five sets of 20 repetitions with 2min of rest between sets each session, while the cluster group completed five sets of 20 [2×10] repetitions with 30/90-s rest each session. Subjects were evaluated for countermovement jump (CMJ), standing long jump (SLJ), t test, 20-m and 40-m sprint test performance before and after the intervention. Both groups had similar improvements (P<0.05) in CMJ, SLJ, t test, 20-m, and 40-m sprint. However, the magnitude of improvement in CMJ, SLJ and t test was greater for the cluster group (effect size [ES]=1.24, 0.81 and 1.38, respectively) compared to the traditional group (ES=0.84, 0.60 and 0.55). Conversely, the magnitude of improvement in 20-m and 40-m sprint test was greater for the traditional group (ES=1.59 and 0.96, respectively) compared to the cluster group (ES=0.94 and 0.75, respectively). Although both plyometric training methods improved lower body maximal-intensity exercise performance, the traditional sets methods resulted in greater adaptations in sprint performance, while the cluster sets method resulted in greater jump and agility adaptations. Copyright © 2016 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
Influence of inner circular sealing area impression method on the retention of complete dentures.
Wang, Cun-Wei; Shao, Qi; Sun, Hui-Qiang; Mao, Meng-Yun; Zhang, Xin-Wei; Gong, Qi; Xiao, Guo-Ning
2015-01-01
The aims of the present study were to describe an "inner circular sealing area" impression method and to evaluate the effect of the method on the retention, aesthetics and comfort of complete dentures that lack a labial base, intended for patients with maxillary protrusion. Three patients were subjected to the experiment, and two sets of complete maxillary dentures were made for each patient; the first set was made without a labial base via the inner circular sealing area method (experimental group), and the second had an intact base made with conventional methods (control group). Retention force tests were implemented with a tensile strength assessment device to assess retention, and a visual analogue scale (VAS) was used to evaluate comfort between the two groups. Results showed larger retention force, better aesthetics and more comfort in the experimental group. The improved two-step impression method formed an inner circular sealing area that prevented damage to the peripheral border seal of the denture caused by incomplete bases and obtained better denture retention.
Preparing a prescription drug monitoring program data set for research purposes.
O'Kane, Nicole; Hallvik, Sara E; Marino, Miguel; Van Otterloo, Joshua; Hildebran, Christi; Leichtling, Gillian; Deyo, Richard A
2016-09-01
To develop a complete and consistent prescription drug monitoring program (PDMP) data set for use by drug safety researchers in evaluating patterns of high-risk use and potential abuse of scheduled drugs. Using publicly available data references from the US Food and Drug Administration and the Centers for Disease Control and Prevention, we developed a strategic methodology to assign drug categories based on pharmaceutical class for the majority of prescriptions in the PDMP data set. We augmented data elements required to calculate morphine milligram equivalents and assigned duration of action (short-acting or long-acting) properties for a majority of opioids in the data set. About 10% of prescriptions in the PDMP data set did not have a vendor-assigned drug category, and 20% of opioid prescriptions were missing data needed to calculate risk metrics. Using inclusive methods, 19 133 167 (>99.9%) of prescriptions in the PDMP data set were assigned a drug category. For the opioid category, augmenting data elements resulted in 10 760 669 (99.8%) having required values to calculate morphine milligram equivalents and evaluate duration of action properties. Drug safety researchers who require a complete and consistent PDMP data set can use the methods described here to ensure that prescriptions of interest are assigned consistent drug categories and complete opioid risk variable values. Copyright © 2016 John Wiley & Sons, Ltd.
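To make the opioid risk metric mentioned above concrete, the snippet below shows the usual CDC-style calculation of morphine milligram equivalents (MME) per day once the augmented data elements (strength per unit, quantity, days' supply and an opioid-specific conversion factor) are available. It is a generic illustration, not the authors' code.

```python
def daily_mme(quantity, days_supply, strength_mg, conversion_factor):
    """Morphine milligram equivalents per day for one opioid prescription:
    (units dispensed / days' supply) * strength per unit * conversion factor."""
    return quantity / days_supply * strength_mg * conversion_factor

# Example: 60 oxycodone 5 mg tablets dispensed for 30 days (conversion factor 1.5)
print(daily_mme(quantity=60, days_supply=30, strength_mg=5, conversion_factor=1.5))  # 15.0
```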
Adopting Cut Scores: Post-Standard-Setting Panel Considerations for Decision Makers
ERIC Educational Resources Information Center
Geisinger, Kurt F.; McCormick, Carina M.
2010-01-01
Standard-setting studies utilizing procedures such as the Bookmark or Angoff methods are just one component of the complete standard-setting process. Decision makers ultimately must determine what they believe to be the most appropriate standard or cut score to use, employing the input of the standard-setting panelists as one piece of information…
Program Completion and Re-Arrest in a Batterer Intervention System
ERIC Educational Resources Information Center
Bennett, Larry W.; Stoops, Charles; Call, Christine; Flett, Heather
2007-01-01
Objective: The authors examine the effects of batterer intervention program (BIP) completion on domestic violence re-arrest in an urban system of 30 BIPs with a common set of state standards, common program completion criteria, and centralized criminal justice supervision. Method: 899 men arrested for domestic violence were assessed and completed…
Feller, David; Peterson, Kirk A
2013-08-28
The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies <0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
SparRec: An effective matrix completion framework of missing data imputation for GWAS
NASA Astrophysics Data System (ADS)
Jiang, Bo; Ma, Shiqian; Causey, Jason; Qiao, Linbo; Hardin, Matthew Price; Bitts, Ian; Johnson, Daniel; Zhang, Shuzhong; Huang, Xiuzhen
2016-10-01
Genome-wide association studies present computational challenges for missing data imputation, as advances in genotyping technologies generate datasets of large sample sizes with sample sets genotyped on multiple SNP chips. We present a new framework, SparRec (Sparse Recovery), for imputation, with the following properties: (1) The optimization models of SparRec, based on low rank and a low number of co-clusters of matrices, are different from current statistical methods. While our low-rank matrix completion (LRMC) model is similar to Mendel-Impute, our matrix co-clustering factorization (MCCF) model is completely new. (2) SparRec, like other matrix completion methods, can be flexibly applied to missing data imputation for large meta-analyses with different cohorts genotyped on different sets of SNPs, even when there is no reference panel. This kind of meta-analysis is very challenging for current statistics-based methods. (3) SparRec has consistent performance and achieves high recovery accuracy even when the missing data rate is as high as 90%. Compared with Mendel-Impute, our low-rank based method achieves similar accuracy and efficiency, while the co-clustering based method has advantages in running time. The testing results show that SparRec has significant advantages and competitive performance over other state-of-the-art existing statistical methods including Beagle and fastPhase.
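For readers unfamiliar with low-rank matrix completion, the sketch below shows a generic SoftImpute-style iteration (fill the missing genotypes, take an SVD, soft-threshold the singular values, repeat). It illustrates only the idea behind the LRMC model; it is not the SparRec implementation, and the regularization parameter and iteration count are arbitrary.

```python
import numpy as np

def soft_impute(X, observed, lam=1.0, n_iter=100):
    """Generic low-rank completion: X is the genotype matrix with arbitrary
    values at missing entries, `observed` is a boolean mask of known entries."""
    Z = np.where(observed, X, 0.0)                    # initial fill with zeros
    for _ in range(n_iter):
        filled = np.where(observed, X, Z)             # keep observed entries fixed
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)                  # soft-threshold singular values
        Z = (U * s) @ Vt                              # low-rank estimate
    return Z
```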
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook
2015-03-07
We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set manifests that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor regardless of systems.
Uniqueness of the joint measurement and the structure of the set of compatible quantum measurements
NASA Astrophysics Data System (ADS)
Guerini, Leonardo; Terra Cunha, Marcelo
2018-04-01
We address the problem of characterising the compatible tuples of measurements that admit a unique joint measurement. We derive a uniqueness criterion based on the method of perturbations and apply it to show that extremal points of the set of compatible tuples admit a unique joint measurement, while all tuples that admit a unique joint measurement lie in the boundary of such a set. We also provide counter-examples showing that none of these properties are both necessary and sufficient, thus completely describing the relation between the joint measurement uniqueness and the structure of the compatible set. As a by-product of our investigations, we completely characterise the extremal and boundary points of the set of general tuples of measurements and of the subset of compatible tuples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Tuomas P., E-mail: tuomas.rossi@alumni.aalto.fi; Sakko, Arto; Puska, Martti J.
We present an approach for generating local numerical basis sets of improving accuracy for first-principles nanoplasmonics simulations within time-dependent density functional theory. The method is demonstrated for copper, silver, and gold nanoparticles that are of experimental interest but computationally demanding due to the semi-core d-electrons that affect their plasmonic response. The basis sets are constructed by augmenting numerical atomic orbital basis sets by truncated Gaussian-type orbitals generated by the completeness-optimization scheme, which is applied to the photoabsorption spectra of homoatomic metal atom dimers. We obtain basis sets of improving accuracy up to the complete basis set limit and demonstrate that the performance of the basis sets transfers to simulations of larger nanoparticles and nanoalloys as well as to calculations with various exchange-correlation functionals. This work promotes the use of the local basis set approach of controllable accuracy in first-principles nanoplasmonics simulations and beyond.
A Catalog of Molecular Clouds in the Milky Way Galaxy
NASA Astrophysics Data System (ADS)
Wahl, Matthew; Koda, J.
2010-01-01
We have created a complete catalog of molecular clouds in the Milky Way Galaxy. This is an extension of our previous study (Koda et al. 2006), which used a preliminary data set from the Boston University-Five College Radio Astronomy Observatory Galactic Ring Survey (BU-FCRAO GRS). This work uses the complete data set from the GRS. The data cover the inner part of the northern Galactic disk between Galactic longitudes 15 to 56 degrees, Galactic latitudes -1.1 to 1.1 degrees, and the entire range of Galactic velocities. We used the standard cloud identification method. This method searches the data cube for a peak in temperature above a specified value, and then searches around that peak in all directions until the extent of the cloud is found. This method is iterated until all clouds are found. We prefer this method over other methods because of its simplicity. The properties of our molecular clouds are very similar to those based on a more evolved method (Rathborne et al. 2009).
A complete active space valence bond method with nonorthogonal orbitals
NASA Astrophysics Data System (ADS)
Hirao, Kimihiko; Nakano, Haruyuki; Nakayama, Kenichi
1997-12-01
A complete active space self-consistent field (SCF) wave function is transformed into a valence bond type representation built from nonorthogonal orbitals, each strongly localized on a single atom. Nonorthogonal complete active space SCF orbitals are constructed by Ruedenberg's projected localization procedure so that they have maximal overlaps with the corresponding minimum basis set of atomic orbitals of the free-atoms. The valence bond structures which are composed of such nonorthogonal quasiatomic orbitals constitute the wave function closest to the concept of the oldest and most simple valence bond method. The method is applied to benzene, butadiene, hydrogen, and methane molecules and compared to the previously proposed complete active space valence bond approach with orthogonal orbitals. The results demonstrate the validity of the method as a powerful tool for describing the electronic structure of various molecules.
Cronin, John; Storey, Adam; Zourdos, Michael C.
2016-01-01
Ratings of perceived exertion are a valid method of estimating the intensity of a resistance training exercise or session. Scores are given after completion of an exercise or training session for the purposes of athlete monitoring. However, a newly developed scale based on how many repetitions are remaining at the completion of a set may be a more precise tool. This approach adjusts loads automatically to match athlete capabilities on a set-to-set basis and may more accurately gauge intensity at near-limit loads. This article outlines how to incorporate this novel scale into a training plan. PMID:27531969
Trajectory Optimization for Spacecraft Collision Avoidance
2013-09-01
Modified Set of Equinoctial Orbit Elements. AAS/AIAA 91-524," in Astrodynamics Specialist Conference, Durango, CO, 1991. [18] D. E. Kirk...these singularities, the COE are not necessarily the best set of states for numerical analysis. 2.3.3 Equinoctial Orbital Elements. A third method of...completely defining an orbit is by the use of the Equinoctial Orbital Elements. This element set maintains the
ERIC Educational Resources Information Center
Schlosser, Ralf W.; Koul, Rajinder; Shane, Howard; Sorce, James; Brock, Kristofer; Harmon, Ashley; Moerlein, Dorothy; Hearn, Emilia
2014-01-01
Purpose: The effects of animation on naming and identification of graphic symbols for verbs and prepositions were studied in 2 graphic symbol sets in preschoolers. Method: Using a 2 × 2 × 2 × 3 completely randomized block design, preschoolers across three age groups were randomly assigned to combinations of symbol set (Autism Language Program…
Methods for Conducting Cognitive Task Analysis for a Decision Making Task.
1996-01-01
Cognitive task analysis (CTA) improves traditional task analysis procedures by analyzing the thought processes of performers while they complete a...for using these methods to conduct a CTA for domains which involve critical decision making tasks in naturalistic settings. The cognitive task analysis methods
On the Convenience of Using the Complete Linearization Method in Modelling the BLR of AGN
NASA Astrophysics Data System (ADS)
Patriarchi, P.; Perinotto, M.
The Complete Linearization Method (Mihalas, 1978) consists in determining the radiation field (at a set of frequency points), atomic level populations, temperature, electron density, etc., by solving the system of radiative transfer, thermal equilibrium and statistical equilibrium equations simultaneously and self-consistently. Since the system is not linear, it must be solved by iteration after linearization, using a perturbative method starting from an initial guess solution. Of course the Complete Linearization Method is more time consuming than the previous one. But how great can this disadvantage be in the age of supercomputers? It is possible to approximately evaluate the CPU time needed to run a model by computing the number of multiplications necessary to solve the system.
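A back-of-envelope version of that CPU estimate, assuming a dense LU solve of the linearized system at every depth point and Newton iteration (real codes exploit the block-tridiagonal structure and are cheaper), might look like this:

```python
def multiplications_estimate(n_depth, n_unknowns, n_iterations):
    """Rough multiplication count: ~n^3/3 multiplications per dense LU solve of
    the linearized system of n_unknowns, repeated at each depth point and
    Newton iteration. Purely illustrative."""
    return n_iterations * n_depth * n_unknowns**3 / 3.0

# e.g. 50 depth points, 200 unknowns (radiation field + levels + T, n_e), 20 iterations
print(f"{multiplications_estimate(50, 200, 20:=20) if False else multiplications_estimate(50, 200, 20):.2e} multiplications")
```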
Method And Apparatus For Arbitrarily Large Capacity Removable Media
Milligan, Charles A.; Hughes, James P.; Debiez, Jacques
2003-04-08
A method and apparatus to handle multiple sets of removable media within a storage system. A first set of removable media are mounted on a set of drives. Data is accepted until the first set of removable media is filled. A second set of removable media is mounted on the drives, while the first set of removable media is removed. When the change in removable media is complete, writing of data proceeds on the second set of removable media. Data may be buffered while the change in removable media occurs. Alternatively, two sets of removable media may be mounted at the same time. When the first set of removable media is filled to a selected amount, the second set of removable media may then be used to write the data. A third set of removable media is set up or mounted for use, while the first set of removable media is removed.
Complete denture tooth arrangement technology driven by a reconfigurable rule.
Dai, Ning; Yu, Xiaoling; Fan, Qilei; Yuan, Fulai; Liu, Lele; Sun, Yuchun
2018-01-01
The conventional technique for the fabrication of complete dentures is complex, with a long fabrication process and difficult-to-control restoration quality. In recent years, digital complete denture design has become a research focus. Digital complete denture tooth arrangement is a challenging issue that is difficult to efficiently implement under the constraints of complex tooth arrangement rules and the patient's individualized functional aesthetics. The present study proposes a complete denture automatic tooth arrangement method driven by a reconfigurable rule; it uses four typical operators, including a position operator, a scaling operator, a posture operator, and a contact operator, to establish the constraint mapping association between the teeth and the constraint set of the individual patient. By using the process reorganization of different constraint operators, this method can flexibly implement different clinical tooth arrangement rules. When combined with a virtual occlusion algorithm based on progressive iterative Laplacian deformation, the proposed method can achieve automatic and individual tooth arrangement. Finally, the experimental results verify that the proposed method is flexible and efficient.
Kirkham, Jamie J; Clarke, Mike; Williamson, Paula R
2017-05-17
Objective: To assess the uptake of the rheumatoid arthritis core outcome set using a new assessment method of calculating uptake from data in clinical trial registry entries. Design: Review of randomised trials. Setting: ClinicalTrials.gov. Subjects: 273 randomised trials of drug interventions for the treatment of rheumatoid arthritis, registered in ClinicalTrials.gov between 2002 and 2016. Full publications were identified for completed studies from information in the trial registry or from an internet search using Google and the citation database Web of Science. Main outcome measure: The percentage of trials reporting or planning to measure the rheumatoid arthritis core outcome set, calculated from the information presented in the trial registry and compared with the percentage reporting the rheumatoid arthritis core outcome set in the resulting trial publications. Results: The full rheumatoid arthritis core outcome set was reported in 81% (116/143) of trials identified on the registry as completed (or terminated) for which results were found in either the published literature or the registry. For trials identified on the registry as completed (or terminated), using information only available in the registry gives an estimate for uptake of 77% (145/189). Conclusions: The uptake of the rheumatoid arthritis core outcome set in clinical trials has continued to increase over time. Using the information on outcomes listed for completed or terminated studies in a trial registry provides a reasonable estimate of the uptake of a core outcome set and is a more efficient and up-to-date approach than examining the outcomes in published trial reports. The method proposed may provide an efficient approach for an up-to-date assessment of the uptake of the 300 core outcome sets already published. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Requirement Development Process and Tools
NASA Technical Reports Server (NTRS)
Bayt, Robert
2017-01-01
Requirements capture the system-level capabilities in a set of complete, necessary, clear, attainable, traceable, and verifiable statements of need. Requirements should not be unduly restrictive, but should set limits that eliminate items outside the boundaries drawn, encourage competition (or alternatives), and capture source and reason of requirement. If it is not needed by the customer, it is not a requirement. They establish the verification methods that will lead to product acceptance. These must be reproducible assessment methods.
Soong, David T.; Over, Thomas M.
2015-01-01
Recalibration of the HSPF parameters to the updated inputs and land covers was completed on two representative watershed models selected from the nine by using a manual method (HSPEXP) and an automatic method (PEST). The objective of the recalibration was to develop a regional parameter set that improves the accuracy in runoff volume prediction for the nine study watersheds. Knowledge about flow and watershed characteristics plays a vital role for validating the calibration in both manual and automatic methods. The best performing parameter set was determined by the automatic calibration method on a two-watershed model. Applying this newly determined parameter set to the nine watersheds for runoff volume simulation resulted in “very good” ratings in five watersheds, an improvement as compared to “very good” ratings achieved for three watersheds by the North Branch parameter set.
Varandas, A J C
2009-02-01
The potential energy surface for the C(20)-He interaction is extrapolated for three representative cuts to the complete basis set limit using second-order Møller-Plesset perturbation calculations with correlation consistent basis sets up to the doubly augmented variety. The results both with and without counterpoise correction show consistency with each other, supporting that extrapolation without such a correction provides a reliable scheme to elude the basis-set-superposition error. Converged attributes are obtained for the C(20)-He interaction, which are used to predict the fullerene dimer ones. Time requirements show that the method can be drastically more economical than the counterpoise procedure and even competitive with Kohn-Sham density functional theory for the title system.
Density Functional O(N) Calculations
NASA Astrophysics Data System (ADS)
Ordejón, Pablo
1998-03-01
We have developed a scheme for performing Density Functional Theory calculations with O(N) scaling (P. Ordejón, E. Artacho and J. M. Soler, Phys. Rev. B 53, 10441 (1996)). The method uses arbitrarily flexible and complete Atomic Orbital (AO) basis sets. This gives a wide range of choice, from extremely fast calculations with minimal basis sets to highly accurate calculations with complete sets. The size-efficiency of AO bases, together with the O(N) scaling of the algorithm, allows the application of the method to systems with many hundreds of atoms on single-processor workstations. I will present the SIESTA code (D. Sanchez-Portal, P. Ordejón, E. Artacho and J. M. Soler, Int. J. Quantum Chem. 65, 453 (1997)), in which the method is implemented, with several LDA, LSD and GGA functionals available, and using norm-conserving, non-local pseudopotentials (in the Kleinman-Bylander form) to eliminate the core electrons. The calculation of static properties such as energies, forces, pressure, stress and magnetic moments, as well as molecular dynamics (MD) simulation capabilities (including variable cell shape, constant temperature and constant pressure MD), are fully implemented. I will also show examples of the accuracy of the method, and applications to large-scale materials and biomolecular systems.
Accurate Phylogenetic Tree Reconstruction from Quartets: A Heuristic Approach
Reaz, Rezwana; Bayzid, Md. Shamsuzzoha; Rahman, M. Sohel
2014-01-01
Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over four taxa, hence quartet-based supertree methods combine many four-taxon unrooted trees into a single and coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have been receiving considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets. PMID:25117474
A Comparison of Imputation Methods for Bayesian Factor Analysis Models
ERIC Educational Resources Information Center
Merkle, Edgar C.
2011-01-01
Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…
2012-01-01
Background: Gene Set Analysis (GSA) has proven to be a useful approach to microarray analysis. However, most of the method development for GSA has focused on the statistical tests to be used rather than on the generation of sets that will be tested. Existing methods of set generation are often overly simplistic. The creation of sets from individual pathways (in isolation) is a poor reflection of the complexity of the underlying metabolic network. We have developed a novel approach to set generation via the use of Principal Component Analysis of the Laplacian matrix of a metabolic network. We have analysed a relatively simple data set to show the difference in results between our method and the current state-of-the-art pathway-based sets. Results: The sets generated with this method are semi-exhaustive and capture much of the topological complexity of the metabolic network. The semi-exhaustive nature of this method has also allowed us to design a hypergeometric enrichment test to determine which genes are likely responsible for set significance. We show that our method finds significant aspects of biology that would be missed (i.e. false negatives) and addresses the false positive rates found with the use of simple pathway-based sets. Conclusions: The set generation step for GSA is often neglected but is a crucial part of the analysis as it defines the full context for the analysis. As such, set generation methods should be robust and yield as complete a representation of the extant biological knowledge as possible. The method reported here achieves this goal and is demonstrably superior to previous set analysis methods. PMID:22876834
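As a minimal illustration of set generation from network topology, the snippet below builds the combinatorial Laplacian of a small undirected metabolic-network graph and takes its leading eigenvectors, whose strongly weighted nodes could seed candidate sets. It is a generic sketch of the idea, not the authors' pipeline, and the loading threshold is an assumption.

```python
import numpy as np

def laplacian_eig_sets(adjacency, n_components=3, threshold=0.3):
    """Eigen-decompose the combinatorial graph Laplacian and group the nodes
    with large loadings on each leading component into a candidate set."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A                   # combinatorial Laplacian
    _, vecs = np.linalg.eigh(L)                      # eigenvalues in ascending order
    top = vecs[:, -n_components:]                    # components with largest eigenvalues
    return [np.where(np.abs(top[:, k]) > threshold)[0] for k in range(n_components)]
```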
The hydrolysis of proteins by microwave energy
Margolis, Sam A.; Jassie, Lois; Kingston, H. M.
1991-01-01
Microwave energy, at manually adjusted, partial power settings, has been used to hydrolyse bovine serum albumin at 125 °C. Hydrolysis was complete within 2 h, except for valine and isoleucine, which were completely liberated within 4 h. The amino-acid destruction was less than that observed under similar hydrolysis conditions with other methods, and complete hydrolysis was achieved more rapidly. These results provide a basis for automating the process of amino-acid hydrolysis. PMID:18924889
de Sanctis, Daniele; Nanao, Max H
2012-09-01
Specific radiation damage can be used for the phasing of macromolecular crystal structures. In practice, however, the optimization of the X-ray dose used to `burn' the crystal to induce specific damage can be difficult. Here, a method is presented in which a single large data set that has not been optimized in any way for radiation-damage-induced phasing (RIP) is segmented into multiple sub-data sets, which can then be used for RIP. The efficacy of this method is demonstrated using two model systems and two test systems. A method to improve the success of this type of phasing experiment by varying the composition of the two sub-data sets with respect to their separation by image number, and hence by absorbed dose, as well as their individual completeness is illustrated.
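Schematically, segmenting a single sweep into a low-dose and a high-dose sub-data set amounts to choosing how many images go into each half and how far apart the halves sit in image number (and hence in absorbed dose). A toy helper, with all parameter names and values being illustrative, could be:

```python
def split_for_rip(image_numbers, n_low, n_high, gap=0):
    """Return (low-dose, high-dose) sub-data sets from an ordered list of image
    numbers; `gap` images are skipped between the halves to increase the dose
    separation, and n_low / n_high control each half's completeness."""
    low = image_numbers[:n_low]
    high = image_numbers[n_low + gap : n_low + gap + n_high]
    return low, high

low, high = split_for_rip(list(range(1, 721)), n_low=300, n_high=300, gap=60)
```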
McCarty, L Kelsey; Saddawi-Konefka, Daniel; Gargan, Lauren M; Driscoll, William D; Walsh, John L; Peterfreund, Robert A
2014-12-01
Process improvement in healthcare delivery settings can be difficult, even when there is consensus among clinicians about a clinical practice or desired outcome. Airway management is a medical intervention fundamental to the delivery of anesthesia care. Like other medical interventions, a detailed description of the management methods should be documented. Despite this expectation, airway documentation is often insufficient. The authors hypothesized that formal adoption of process improvement methods could be used to increase the rate of "complete" airway management documentation. The authors defined a set of criteria as a local practice standard of "complete" airway management documentation. The authors then employed selected process improvement methodologies over 13 months in three iterative and escalating phases to increase the percentage of records with complete documentation. The criteria were applied retrospectively to determine the baseline frequency of complete records, and prospectively to measure the impact of process improvements efforts over the three phases of implementation. Immediately before the initial intervention, a retrospective review of 23,011 general anesthesia cases over 6 months showed that 13.2% of patient records included complete documentation. At the conclusion of the 13-month improvement effort, documentation improved to a completion rate of 91.6% (P<0.0001). During the subsequent 21 months, the completion rate was sustained at an average of 90.7% (SD, 0.9%) across 82,571 general anesthetic records. Systematic application of process improvement methodologies can improve airway documentation and may be similarly effective in improving other areas of anesthesia clinical practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Prado, K
Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves massive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For 6, 9, 12, 16, and 20 MeV of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs that were then converted to air-gap factors for SSD 99–110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The "missing" data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy, and more PDDs and fewer point measurements are generally needed as energy increases. Using only <50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on the knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
Benchmarking of Methods for Genomic Taxonomy
Larsen, Mette V.; Cosentino, Salvatore; Lukjancenko, Oksana; ...
2014-02-26
One of the first issues that emerges when a prokaryotic organism of interest is encountered is the question of what it is—that is, which species it is. The 16S rRNA gene formed the basis of the first method for sequence-based taxonomy and has had a tremendous impact on the field of microbiology. Nevertheless, the method has been found to have a number of shortcomings. In this paper, we trained and benchmarked five methods for whole-genome sequence-based prokaryotic species identification on a common data set of complete genomes: (i) SpeciesFinder, which is based on the complete 16S rRNA gene; (ii) Reads2Type, which searches for species-specific 50-mers in either the 16S rRNA gene or the gyrB gene (for the Enterobacteriaceae family); (iii) the ribosomal multilocus sequence typing (rMLST) method, which samples up to 53 ribosomal genes; (iv) TaxonomyFinder, which is based on species-specific functional protein domain profiles; and finally (v) KmerFinder, which examines the number of co-occurring k-mers (substrings of k nucleotides in DNA sequence data). The performances of the methods were subsequently evaluated on three data sets of short sequence reads or draft genomes from public databases. In total, the evaluation sets constituted sequence data from more than 11,000 isolates covering 159 genera and 243 species. Our results indicate that methods that sample only chromosomal, core genes have difficulties in distinguishing closely related species which only recently diverged. The KmerFinder method had the overall highest accuracy and correctly identified from 93% to 97% of the isolates in the evaluation sets.
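The k-mer idea behind this kind of identification can be illustrated in a few lines: count the substrings of length k shared between a query and each reference genome and report the best-scoring reference. The toy scorer below is not the KmerFinder program, and the choice of k is arbitrary.

```python
def kmers(seq, k=16):
    """Set of all k-mers (length-k substrings) occurring in a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_match(query, references, k=16):
    """Return the reference name whose genome shares the largest fraction of
    the query's k-mers, plus the full score table."""
    q = kmers(query, k)
    scores = {name: len(q & kmers(ref, k)) / max(len(q), 1)
              for name, ref in references.items()}
    return max(scores, key=scores.get), scores
```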
Clustering, Seriation, and Subset Extraction of Confusion Data
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2006-01-01
The study of confusion data is a well established practice in psychology. Although many types of analytical approaches for confusion data are available, among the most common methods are the extraction of 1 or more subsets of stimuli, the partitioning of the complete stimulus set into distinct groups, and the ordering of the stimulus set. Although…
ERIC Educational Resources Information Center
Becker, Kristin A.
2013-01-01
The purpose of this concurrent mixed-method study was to explore how special education intern teachers, placed in an urban secondary special education school setting developed an ability to implement content literacy strategies after completion of a professional development graduate seminar and internship experience. This was done by studying both…
A neural network gravitational arc finder based on the Mediatrix filamentation method
NASA Astrophysics Data System (ADS)
Bom, C. R.; Makler, M.; Albuquerque, M. P.; Brandt, C. H.
2017-01-01
Context. Automated arc detection methods are needed to scan the ongoing and next-generation wide-field imaging surveys, which are expected to contain thousands of strong lensing systems. Arc finders are also required for a quantitative comparison between predictions and observations of arc abundance. Several algorithms have been proposed to this end, but machine learning methods have remained as a relatively unexplored step in the arc finding process. Aims: In this work we introduce a new arc finder based on pattern recognition, which uses a set of morphological measurements that are derived from the Mediatrix filamentation method as entries to an artificial neural network (ANN). We show a full example of the application of the arc finder, first training and validating the ANN on simulated arcs and then applying the code on four Hubble Space Telescope (HST) images of strong lensing systems. Methods: The simulated arcs use simple prescriptions for the lens and the source, while mimicking HST observational conditions. We also consider a sample of objects from HST images with no arcs in the training of the ANN classification. We use the training and validation process to determine a suitable set of ANN configurations, including the combination of inputs from the Mediatrix method, so as to maximize the completeness while keeping the false positives low. Results: In the simulations the method was able to achieve a completeness of about 90% with respect to the arcs that are input into the ANN after a preselection. However, this completeness drops to 70% on the HST images. The false detections are on the order of 3% of the objects detected in these images. Conclusions: The combination of Mediatrix measurements with an ANN is a promising tool for the pattern-recognition phase of arc finding. More realistic simulations and a larger set of real systems are needed for a better training and assessment of the efficiency of the method.
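The pattern-recognition step can be prototyped with any standard neural-network library. The sketch below trains a small multilayer perceptron on stand-in morphological features and reports completeness (recall on arcs) and the false-positive rate on a validation split; the synthetic features, labels and network size are assumptions, not the configuration used by the authors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                  # stand-in Mediatrix-style measurements
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = arc, 0 = non-arc (synthetic labels)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_va)
print("completeness (recall on arcs):", (pred[y_va == 1] == 1).mean())
print("false-positive rate:", (pred[y_va == 0] == 1).mean())
```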
Filaments from the galaxy distribution and from the velocity field in the local universe
NASA Astrophysics Data System (ADS)
Libeskind, Noam I.; Tempel, Elmo; Hoffman, Yehuda; Tully, R. Brent; Courtois, Hélène
2015-10-01
The cosmic web that characterizes the large-scale structure of the Universe can be quantified by a variety of methods. For example, large redshift surveys can be used in combination with point process algorithms to extract long curvilinear filaments in the galaxy distribution. Alternatively, given a full 3D reconstruction of the velocity field, kinematic techniques can be used to decompose the web into voids, sheets, filaments and knots. In this Letter, we look at how two such algorithms - the Bisous model and the velocity shear web - compare with each other in the local Universe (within 100 Mpc), finding good agreement. This is both remarkable and comforting, given that the two methods are radically different in ideology and applied to completely independent and different data sets. Unsurprisingly, the methods are in better agreement when applied to unbiased and complete data sets, like cosmological simulations, than when applied to observational samples. We conclude that more observational data is needed to improve on these methods, but that both methods are most likely properly tracing the underlying distribution of matter in the Universe.
Fox, Aaron S; Bonacci, Jason; McLean, Scott G; Spittle, Michael; Saunders, Natalie
2016-05-01
Laboratory-based measures provide an accurate method to identify risk factors for anterior cruciate ligament (ACL) injury; however, these methods are generally prohibitive to the wider community. Screening methods that can be completed in a field or clinical setting may be more applicable for wider community use. Examination of field-based screening methods for ACL injury risk can aid in identifying the most applicable method(s) for use in these settings. The objective of this systematic review was to evaluate and compare field-based screening methods for ACL injury risk to determine their efficacy of use in wider community settings. An electronic database search was conducted on the SPORTDiscus™, MEDLINE, AMED and CINAHL databases (January 1990-July 2015) using a combination of relevant keywords. A secondary search of the same databases, using relevant keywords from identified screening methods, was also undertaken. Studies identified as potentially relevant were independently examined by two reviewers for inclusion. Where consensus could not be reached, a third reviewer was consulted. Original research articles that examined screening methods for ACL injury risk that could be undertaken outside of a laboratory setting were included for review. Two reviewers independently assessed the quality of included studies. Included studies were categorized according to the screening method they examined. A description of each screening method, and data pertaining to the ability to prospectively identify ACL injuries, validity and reliability, recommendations for identifying 'at-risk' athletes, equipment and training required to complete screening, time taken to screen athletes, and applicability of the screening method across sports and athletes were extracted from relevant studies. Of 1077 citations from the initial search, a total of 25 articles were identified as potentially relevant, with 12 meeting all inclusion/exclusion criteria. From the secondary search, eight further studies met all criteria, resulting in 20 studies being included for review. Five ACL-screening methods-the Landing Error Scoring System (LESS), Clinic-Based Algorithm, Observational Screening of Dynamic Knee Valgus (OSDKV), 2D-Cam Method, and Tuck Jump Assessment-were identified. There was limited evidence supporting the use of field-based screening methods in predicting ACL injuries across a range of populations. Differences relating to the equipment and time required to complete screening methods were identified. Only screening methods for ACL injury risk were included for review. Field-based screening methods developed for lower-limb injury risk in general may also incorporate, and be useful in, screening for ACL injury risk. Limited studies were available relating to the OSDKV and 2D-Cam Method. The LESS showed predictive validity in identifying ACL injuries, however only in a youth athlete population. The LESS also appears practical for community-wide use due to the minimal equipment and set-up/analysis time required. The Clinic-Based Algorithm may have predictive value for ACL injury risk as it identifies athletes who exhibit high frontal plane knee loads during a landing task, but requires extensive additional equipment and time, which may limit its application to wider community settings.
Completed Beltrami-Michell Formulation in Polar Coordinates
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2005-01-01
A set of conditions had not been formulated on the boundary of an elastic continuum since the time of Saint-Venant. This limitation prevented the formulation of a direct stress calculation method in elasticity for a continuum with a displacement boundary condition. The missing condition, referred to as the boundary compatibility condition, is now formulated in polar coordinates. The augmentation of the new condition completes the Beltrami-Michell formulation in polar coordinates. The completed formulation, which includes the equilibrium equations and a compatibility condition in the field as well as the traction and boundary compatibility conditions, is derived from the stationary condition of the variational functional of the integrated force method. The new method is illustrated by solving an example of a mixed boundary value problem for mechanical as well as thermal loads.
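For reference, the classical field equations that the completed formulation augments are, in plane polar coordinates with body forces f_r and f_theta, the two equilibrium equations and the standard field compatibility condition on the stress invariant; the new boundary compatibility condition itself is derived in the paper and is not reproduced here.

```latex
\begin{aligned}
&\frac{\partial \sigma_{rr}}{\partial r}
 + \frac{1}{r}\frac{\partial \sigma_{r\theta}}{\partial \theta}
 + \frac{\sigma_{rr}-\sigma_{\theta\theta}}{r} + f_r = 0, \\
&\frac{\partial \sigma_{r\theta}}{\partial r}
 + \frac{1}{r}\frac{\partial \sigma_{\theta\theta}}{\partial \theta}
 + \frac{2\,\sigma_{r\theta}}{r} + f_\theta = 0, \\
&\nabla^{2}\left(\sigma_{rr}+\sigma_{\theta\theta}\right) = 0
 \quad\text{(field compatibility, vanishing body forces).}
\end{aligned}
```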
Methods of determining complete sensor requirements for autonomous mobility
NASA Technical Reports Server (NTRS)
Curtis, Steven A. (Inventor)
2012-01-01
A method of determining complete sensor requirements for autonomous mobility of an autonomous system includes computing a time variation of each behavior of a set of behaviors of the autonomous system, determining mobility sensitivity to each behavior of the autonomous system, and computing a change in mobility based upon the mobility sensitivity to each behavior and the time variation of each behavior. The method further includes determining the complete sensor requirements of the autonomous system through analysis of the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior, wherein the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior are characteristic of the stability of the autonomous system.
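In symbols, the change in mobility combines the two quantities named above by the chain rule; writing M for mobility and b_i for the behaviors, one plausible reading of the computation described is

```latex
\Delta M \;\approx\; \sum_i \frac{\partial M}{\partial b_i}\,\frac{d b_i}{d t}\,\Delta t ,
```

so that the complete sensor suite must resolve each behavior well enough to capture the terms of largest relative magnitude. This is only a schematic reading of the abstract, not its exact formulation.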
Complete set of invariants of a 4th order tensor: the 12 tasks of HARDI from ternary quartics.
Papadopoulo, Théo; Ghosh, Aurobrata; Deriche, Rachid
2014-01-01
Invariants play a crucial role in Diffusion MRI. In DTI (2nd order tensors), invariant scalars (FA, MD) have been successfully used in clinical applications. But DTI has limitations and HARDI models (e.g. 4th order tensors) have been proposed instead. These, however, lack invariant features and computing them systematically is challenging. We present a simple and systematic method to compute a functionally complete set of invariants of a non-negative 3D 4th order tensor with respect to SO3. Intuitively, this transforms the tensor's non-unique ternary quartic (TQ) decomposition (from Hilbert's theorem) to a unique canonical representation independent of orientation - the invariants. The method consists of two steps. In the first, we reduce the 18 degrees-of-freedom (DOF) of a TQ representation by 3-DOFs via an orthogonal transformation. This transformation is designed to enhance a rotation-invariant property of choice of the 3D 4th order tensor. In the second, we further reduce 3-DOFs via a 3D rotation transformation of coordinates to arrive at a canonical set of invariants to SO3 of the tensor. The resulting invariants are, by construction, (i) functionally complete, (ii) functionally irreducible (if desired), (iii) computationally efficient and (iv) reversible (mappable to the TQ coefficients or shape); which is the novelty of our contribution in comparison to prior work. Results from synthetic and real data experiments validate the method and indicate its importance.
Endoscopic resection of subepithelial tumors.
Schmidt, Arthur; Bauder, Markus; Riecken, Bettina; Caca, Karel
2014-12-16
Management of subepithelial tumors (SETs) remains challenging. Endoscopic ultrasound (EUS) has improved the differential diagnosis of these tumors, but a definitive diagnosis based on EUS findings alone can be achieved in only a minority of cases. Complete endoscopic resection may provide a reasonable approach for tissue acquisition and may also be therapeutic in the case of malignant lesions. Small SETs restricted to the submucosa can be removed with established basic resection techniques. However, resection of SETs arising from deeper layers of the gastrointestinal wall requires advanced endoscopic methods and harbours the risk of perforation. Innovative techniques such as submucosal tunneling and full-thickness resection have expanded the frontiers of endoscopic therapy in recent years. This review gives an overview of endoscopic resection techniques for SETs, with a focus on novel methods.
Method of immobilizing water-soluble bioorganic compounds on a capillary-porous carrier
Ershov, Gennady Moiseevich; Timofeev, Eduard Nikolaevich; Ivanov, Igor Borisovich; Florentiev, Vladimir Leonidovich; Mirzabekov, Andrei Darievich
1998-01-01
The method for immobilizing water-soluble bioorganic compounds on a capillary-porous carrier comprises applying solutions of the water-soluble bioorganic compounds onto the capillary-porous carrier, setting the carrier temperature equal to or below the dew point of the ambient air, and keeping the carrier until water condensate appears and the carrier swells completely, whereupon the carrier surface is coated with a layer of water-immiscible, nonluminescent inert oil and allowed to stand until the chemical reaction bonding the bioorganic compounds to the carrier is complete.
Down-weighting overlapping genes improves gene set analysis
2012-01-01
Background The identification of gene sets that are significantly impacted in a given condition based on microarray data is a crucial step in current life science research. Most gene set analysis methods treat genes equally, regardless of how specific they are to a given gene set. Results In this work we propose a new gene set analysis method that computes a gene set score as the mean of absolute values of weighted moderated gene t-scores. The gene weights are designed to emphasize the genes appearing in few gene sets, versus genes that appear in many gene sets. We demonstrate the usefulness of the method when analyzing gene sets that correspond to the KEGG pathways, and hence we called our method Pathway Analysis with Down-weighting of Overlapping Genes (PADOG). Unlike most gene set analysis methods, which are validated through the analysis of 2-3 data sets followed by a human interpretation of the results, the validation employed here uses 24 different data sets and a completely objective assessment scheme that makes minimal assumptions and eliminates the need for possibly biased human assessments of the analysis results. Conclusions PADOG significantly improves gene set ranking and boosts sensitivity of analysis using information already available in the gene expression profiles and the collection of gene sets to be analyzed. The advantages of PADOG over other existing approaches are shown to be stable to changes in the database of gene sets to be analyzed. PADOG was implemented as an R package available at: http://bioinformaticsprb.med.wayne.edu/PADOG/ or http://www.bioconductor.org. PMID:22713124
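A sketch of the scoring idea described above (the exact weighting function used by PADOG is not reproduced; this illustrative version simply down-weights a gene in proportion to how many gene sets contain it):

```python
import numpy as np

def gene_set_scores(t_scores, gene_sets):
    """t_scores: dict gene -> moderated t-score.
    gene_sets: dict set_name -> list of member genes.
    Returns one score per gene set: mean of absolute weighted t-scores."""
    # Count how many gene sets each gene appears in
    counts = {}
    for genes in gene_sets.values():
        for g in set(genes):
            counts[g] = counts.get(g, 0) + 1
    scores = {}
    for name, genes in gene_sets.items():
        genes = [g for g in genes if g in t_scores]
        if not genes:
            continue
        # Illustrative down-weighting: weight 1/count (PADOG's actual weights differ)
        w = np.array([1.0 / counts[g] for g in genes])
        t = np.array([abs(t_scores[g]) for g in genes])
        scores[name] = float(np.mean(w * t))
    return scores
```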
Graph-state formalism for mutually unbiased bases
NASA Astrophysics Data System (ADS)
Spengler, Christoph; Kraus, Barbara
2013-11-01
A pair of orthonormal bases is called mutually unbiased if all mutual overlaps between any element of one basis and an arbitrary element of the other basis coincide. In case the dimension, d, of the considered Hilbert space is a power of a prime number, complete sets of d+1 mutually unbiased bases (MUBs) exist. Here we present a method based on the graph-state formalism to construct such sets of MUBs. We show that for n p-level systems, with p being prime, one particular graph suffices to easily construct a set of p^n+1 MUBs. In fact, we show that a single n-dimensional vector, which is associated with this graph, can be used to generate a complete set of MUBs and demonstrate that this vector can be easily determined. Finally, we discuss some advantages of our formalism regarding the analysis of entanglement structures in MUBs, as well as experimental realizations.
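For concreteness, the standard defining condition referred to above (a textbook relation, not specific to this paper) is:

```latex
% Bases {|e_i>} and {|f_j>} of a d-dimensional Hilbert space are mutually
% unbiased when every overlap has the same magnitude:
\left|\langle e_i \mid f_j \rangle\right|^{2} = \frac{1}{d}
  \quad \text{for all } i, j,
% and for d = p^n (p prime) a complete set contains d + 1 = p^n + 1 such bases.
```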
Todd Rogers, W; Docherty, David; Petersen, Stewart
2014-01-01
The bookmark method for setting cut-scores was used to re-set the cut-score for the Canadian Forces Firefighter Physical Fitness Maintenance Evaluation (FF PFME). The time required to complete 10 tasks that together simulate a first-response firefighting emergency was accepted as a measure of work capacity. A panel of 25 Canadian Forces firefighter supervisors set cut-scores in three rounds. Each round involved independent evaluation of nine video work samples, where the times systematically increased from 400 seconds to 560 seconds. Results for Round 1 were discussed before moving to Round 2 and results for Round 2 were discussed before moving to Round 3. Accounting for the variability among panel members at the end of Round 3, a cut-score of 481 seconds (mean Round 3 plus 2 SEM) was recommended. Firefighters who complete the FF PFME in 481 seconds or less have the physical capacity to complete first-response firefighting work.
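A minimal sketch of the final cut-score computation, assuming SEM here denotes the standard error of the panel mean (the study's exact SEM definition may differ; the panel times below are hypothetical):

```python
import statistics as st

def bookmark_cut_score(round3_cut_scores, k_sem=2):
    """Cut-score = panel mean plus k_sem standard errors of the mean."""
    mean = st.mean(round3_cut_scores)
    sem = st.stdev(round3_cut_scores) / len(round3_cut_scores) ** 0.5
    return mean + k_sem * sem

# Example with hypothetical Round 3 judgments (seconds); the study reports 481 s.
print(bookmark_cut_score([470, 455, 490, 460, 475, 485, 465, 480, 470, 468]))
```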
ERIC Educational Resources Information Center
Timmerman, M. C.
2004-01-01
Objective: To explore the impact of the school climate on adolescents' reporting of sexual harassment. Design: A quantitative survey among students in their 4th year of secondary education. Setting: Questionnaires were completed in a class setting. Method: An a-select sampling strategy was used to select 2808 students in 22 schools. Results:…
ERIC Educational Resources Information Center
Pelham, William E.; Waxmonsky, James G.; Schentag, Jerome; Ballow, Charles H.; Panahon, Carlos J.; Gnagy, Elizabeth M.; Hoffman, Martin T.; Burrows-MacLean, Lisa; Meichenbaum, David L.; Forehand, Gregory L.; Fabiano, Gregory A.; Tresco, Katy E.; Lopez-Williams, Andy; Coles, Erika K.; Gonzalez, Mario A.
2011-01-01
Objective: To test the efficacy and tolerability of the methylphenidate transdermal formulation (MTS) against immediate-release methylphenidate (IR MPH) and placebo in a 12-hr analog classroom setting. Method: A total of nine boys ages 6 to 9 years, medicated with MPH for ADHD, completed a within-subject, double-blind study. For the purpose of the…
ERIC Educational Resources Information Center
Smith, Ryan C.; Bowdring, Molly A.; Geller, E. Scott
2015-01-01
Objective: The determinants of alcohol consumption among university students were investigated in a downtown field setting with blood alcohol content (BAC) as the dependent variable. Participants: In total, 521 participants completed a brief survey and had their BAC assessed during April 2013. Methods: Between 10:00 pm and 2:00 am, teams of…
Optimized diffusion gradient orientation schemes for corrupted clinical DTI data sets.
Dubois, J; Poupon, C; Lethimonnier, F; Le Bihan, D
2006-08-01
A method is proposed for generating schemes of diffusion gradient orientations which allow the diffusion tensor to be reconstructed from partial data sets in clinical DT-MRI, should the acquisition be corrupted or terminated before completion because of patient motion. A general energy-minimization electrostatic model was developed in which the interactions between orientations are weighted according to their temporal order during acquisition. In this report, two corruption scenarios were specifically considered for generating relatively uniform schemes of 18 and 60 orientations, with useful subsets of 6 and 15 orientations. The sets and subsets were compared to conventional sets through their energy, condition number and rotational invariance. Schemes of 18 orientations were tested on a volunteer. The optimized sets were similar to uniform sets in terms of energy, condition number and rotational invariance, whether the complete set or only a subset was considered. Diffusion maps obtained in vivo were close to those for uniform sets whatever the acquisition time was. This was not the case with conventional schemes, whose subset uniformity was insufficient. With the proposed approach, sets of orientations responding to several corruption scenarios can be generated, which is potentially useful for imaging uncooperative patients or infants.
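A sketch of the weighted electrostatic cost such a scheme could minimize (the weighting function and antipodal-symmetry treatment are illustrative assumptions, not the authors' exact model):

```python
import numpy as np

def weighted_electrostatic_energy(dirs, weights):
    """dirs: (N, 3) unit gradient directions in acquisition order.
    weights: (N, N) temporal weights emphasizing pairs acquired close in time,
    so that any early-terminated prefix of the scheme remains nearly uniform."""
    dirs = np.asarray(dirs, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    energy = 0.0
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            d_plus = np.linalg.norm(dirs[i] - dirs[j])    # repulsion between charges
            d_minus = np.linalg.norm(dirs[i] + dirs[j])   # antipodal counterpart
            energy += weights[i, j] * (1.0 / d_plus + 1.0 / d_minus)
    return energy
```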
A study on suppressing transmittance fluctuations for air-gapped Glan-type polarizing prisms
NASA Astrophysics Data System (ADS)
Zhang, Chuanfa; Li, Dailin; Zhu, Huafeng; Li, Chuanzhi; Jiao, Zhiyong; Wang, Ning; Xu, Zhaopeng; Wang, Xiumin; Song, Lianke
2018-05-01
Light intensity transmittance is a key parameter in the design of polarizing prisms, yet its experimental curves as a function of spatial incident angle sometimes present periodic fluctuations. Here we propose a novel method for completely suppressing these fluctuations by setting a glued error angle in the air gap of Glan-Taylor prisms. The proposal consists of an accurate formula for the intensity transmittance of Glan-Taylor prisms, a numerical simulation and a contrast experiment for analyzing the causes of the fluctuations, and a simple method for accurately measuring the glued error angle. The results indicate that when the set glued error angle is larger than the critical angle for a given polarizing prism, the fluctuations are completely suppressed and a smooth intensity transmittance curve is obtained. In addition, the critical angle in the air gap required to suppress the fluctuations decreases as the beam spot size increases. The method also places less stringent demands on prism positioning in optical systems.
ERIC Educational Resources Information Center
Koskey, Kristin L. K.; Cain, Bryce; Sondergeld, Toni A.; Alvim, Henrique G.; Slager, Emily M.
2015-01-01
Achieving respectable response rates to surveys on university campuses has become increasingly more difficult, which can increase non-response error and jeopardize the integrity of data. Prior research has focused on investigating the effect of a single or small set of factors on college students' decision to complete surveys. We used a concurrent…
Multivariate missing data in hydrology - Review and applications
NASA Astrophysics Data System (ADS)
Ben Aissia, Mohamed-Aymen; Chebana, Fateh; Ouarda, Taha B. M. J.
2017-12-01
Water resources planning and management require complete data sets of a number of hydrological variables, such as flood peaks and volumes. However, hydrologists are often faced with the problem of missing data (MD) in hydrological databases. Several methods are used to deal with the imputation of MD. During the last decade, multivariate approaches have gained popularity in the field of hydrology, especially in hydrological frequency analysis (HFA). However, the treatment of MD remains neglected in the multivariate HFA literature, where the focus has been mainly on the modeling component. For a complete analysis and in order to optimize the use of data, MD should also be treated in the multivariate setting prior to modeling and inference. Imputation of MD in the multivariate hydrological framework can have direct implications for the quality of the estimation. Indeed, the dependence between the series represents important additional information that can be included in the imputation process. The objective of the present paper is to highlight the importance of treating MD in multivariate hydrological frequency analysis by reviewing and applying multivariate imputation methods and by comparing univariate and multivariate imputation methods. An application is carried out for multiple flood attributes on three sites in order to evaluate the performance of the different methods based on the leave-one-out procedure. The results indicate that the performance of imputation methods can be improved by adopting the multivariate setting, compared to mean substitution and interpolation methods, especially when the copula-based approach is used.
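A compact sketch of the leave-one-out comparison of a univariate imputation (mean substitution) with a simple multivariate, regression-based imputation that exploits the dependence between variables (the copula-based approach used in the paper is not reproduced here):

```python
import numpy as np

def loo_imputation_rmse(X):
    """X: (n_events, n_variables) complete matrix of flood attributes.
    Each value is removed in turn and re-imputed two ways; RMSEs are returned."""
    n, p = X.shape
    err_mean, err_reg = [], []
    for i in range(n):
        for j in range(p):
            true = X[i, j]
            # Univariate: mean of the remaining observations of variable j
            col = np.delete(X[:, j], i)
            err_mean.append(np.mean(col) - true)
            # Multivariate: least-squares regression of variable j on the others
            others = [k for k in range(p) if k != j]
            A = np.column_stack([np.ones(n - 1), np.delete(X[:, others], i, axis=0)])
            beta, *_ = np.linalg.lstsq(A, col, rcond=None)
            pred = np.r_[1.0, X[i, others]] @ beta
            err_reg.append(pred - true)
    rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
    return {"mean_substitution": rmse(err_mean), "multivariate_regression": rmse(err_reg)}
```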
Expanding Assessment Methods and Moments in History
ERIC Educational Resources Information Center
Frost, Jennifer; de Pont, Genevieve; Brailsford, Ian
2012-01-01
History courses at The University of Auckland are typically assessed at two or three moments during a semester. The methods used normally employ two essays and a written examination answering questions set by the lecturer. This study describes an assessment innovation in 2008 that expanded both the frequency and variety of activities completed by…
Kalinowski, Jarosław A.; Makal, Anna; Coppens, Philip
2011-01-01
A new method for determination of the orientation matrix of Laue X-ray data is presented. The method is based on matching of the experimental patterns of central reciprocal lattice rows projected on a unit sphere centered on the origin of the reciprocal lattice with the corresponding pattern of a monochromatic data set on the same material. This technique is applied to the complete data set and thus eliminates problems often encountered when single frames with a limited number of peaks are to be used for orientation matrix determination. Application of the method to a series of Laue data sets on organometallic crystals is described. The corresponding program is available under a Mozilla Public License-like open-source license. PMID:22199400
Effect of missing data on multitask prediction methods.
de la Vega de León, Antonio; Chen, Beining; Gillet, Valerie J
2018-05-22
There has been a growing interest in multitask prediction in chemoinformatics, helped by the increasing use of deep neural networks in this field. This technique is applied to multitarget data sets, where compounds have been tested against different targets, with the aim of developing models to predict a profile of biological activities for a given compound. However, multitarget data sets tend to be sparse; i.e., not all compound-target combinations have experimental values. There has been little research on the effect of missing data on the performance of multitask methods. We have used two complete data sets to simulate sparseness by removing data from the training set. Different schemes for removing the data were compared. These sparse sets were used to train two different multitask methods, deep neural networks and Macau, which is a Bayesian probabilistic matrix factorization technique. Results from both methods were remarkably similar and showed that the performance decrease caused by missing data is small at first but accelerates once large amounts of data are removed. This work provides a first approximation for assessing how much data is required to produce good performance in multitask prediction exercises.
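A sketch of how sparseness can be simulated from a complete compound-target activity matrix by masking a chosen fraction of training values (the uniformly random masking shown here is only one of several removal schemes one could compare):

```python
import numpy as np

def simulate_sparseness(Y, fraction_missing, rng=None):
    """Y: (n_compounds, n_targets) complete activity matrix.
    Returns a copy with the given fraction of entries set to NaN."""
    rng = np.random.default_rng(rng)
    Y_sparse = Y.astype(float).copy()
    n_drop = int(round(fraction_missing * Y_sparse.size))
    flat_idx = rng.choice(Y_sparse.size, size=n_drop, replace=False)
    Y_sparse.ravel()[flat_idx] = np.nan
    return Y_sparse

# e.g. train a multitask model on simulate_sparseness(Y_train, 0.5) and track how
# performance degrades as the fraction of missing values grows.
```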
Short, Michelle A.; Gradisar, Michael; Wright, Helen; Lack, Leon C.; Dohnt, Hayley; Carskadon, Mary A.
2011-01-01
Study Objectives: To determine the proportion of adolescents whose bedtime is set by their parents and to evaluate whether parent-set bedtimes are associated with earlier bedtimes, more sleep, and better daytime functioning. Participants: 385 adolescents aged 13-18 years (mean = 15.6, SD = 0.95; 60% male) from 8 socioeconomically diverse schools in South Australia. Measurements & Methods: Adolescents completed the School Sleep Habits Survey during class time and then completed an 8-day Sleep Diary. The Flinders Fatigue Scale was completed on the final day of the study. Results: 17.5% of adolescents reported a parent-set bedtime as the main factor determining their bedtime on school nights. Compared to adolescents without parent-set bedtimes, those with parent-set bedtimes had earlier bedtimes, obtained more sleep, and experienced improved daytime wakefulness and less fatigue. They did not differ significantly in terms of time taken to fall asleep. When parent-set bedtimes were removed on weekends, sleep patterns did not significantly differ between groups. Conclusions: Significant personal and public health issues, such as depression and accidental injury and mortality, are associated with insufficient sleep. Converging biological and psychosocial factors mean that adolescence is a period of heightened risk. Parent-set bedtimes offer promise as a simple and easily translatable means for parents to improve the sleep and daytime functioning of their teens. Citation: Short MA; Gradisar M; Wright H; Lack LC; Dohnt H; Carskadon MA. Time for bed: parent-set bedtimes associated with improved sleep and daytime functioning in adolescents. SLEEP 2011;34(6):797-800. PMID:21629368
NASA Technical Reports Server (NTRS)
Graham, D. E.; Overbeek, R.; Olsen, G. J.; Woese, C. R.
2000-01-01
Comparisons of complete genome sequences allow the most objective and comprehensive descriptions possible of a lineage's evolution. This communication uses the completed genomes from four major euryarchaeal taxa to define a genomic signature for the Euryarchaeota and, by extension, the Archaea as a whole. The signature is defined in terms of the set of protein-encoding genes found in at least two diverse members of the euryarchaeal taxa that function uniquely within the Archaea; most signature proteins have no recognizable bacterial or eukaryal homologs. By this definition, 351 clusters of signature proteins have been identified. Functions of most proteins in this signature set are currently unknown. At least 70% of the clusters that contain proteins from all the euryarchaeal genomes also have crenarchaeal homologs. This conservative set, which appears refractory to horizontal gene transfer to the Bacteria or the Eukarya, would seem to reflect the significant innovations that were unique and fundamental to the archaeal "design fabric." Genomic protein signature analysis methods may be extended to characterize the evolution of any phylogenetically defined lineage. The complete set of protein clusters for the archaeal genomic signature is presented as supplementary material (see the PNAS web site, www.pnas.org).
ERIC Educational Resources Information Center
Arias Ortiz, Elena; Dehon, Catherine
2013-01-01
In this paper we study the factors that influence both dropout and (4-year) degree completion throughout university by applying the set of discrete-time methods for competing risks in event history analysis, as described in Scott and Kennedy (2005). In the French-speaking Belgian community, participation rates are very high given that higher…
Multi-criteria evaluation methods in the production scheduling
NASA Astrophysics Data System (ADS)
Kalinowski, K.; Krenczyk, D.; Paprocka, I.; Kempa, W.; Grabowik, C.
2016-08-01
The paper discusses the practical application of different multi-criteria evaluation methods in the scheduling of manufacturing systems. Two main groups of methods are distinguished: methods based on a distance function (using a metacriterion) and methods that create a Pareto set of possible solutions. The basic criteria used for scheduling are also described, and the overall evaluation procedure in production scheduling is presented. It takes into account the actions in the whole scheduling process and the participation of the human decision maker (HDM). The HDM decisions concern creating and editing the set of evaluation criteria, selecting the multi-criteria evaluation method, interacting in the search process, applying informal criteria, and making final changes to the schedule before implementation. Depending on need, scheduling may be completely or partially automated: full automation is possible with a metacriterion-based objective function, whereas if a Pareto set is used the final decision has to be made by the HDM.
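A brief sketch contrasting the two groups of evaluation methods named above: a distance-based metacriterion that collapses several criteria into one number, and a filter that returns the Pareto set (criteria values and weights are illustrative; all criteria are assumed to be minimized):

```python
import numpy as np

def metacriterion(schedules, ideal, weights):
    """Weighted distance of each schedule's criteria vector from an ideal point."""
    S = np.asarray(schedules, dtype=float)         # (n_schedules, n_criteria)
    return np.sqrt(((weights * (S - ideal)) ** 2).sum(axis=1))

def pareto_set(schedules):
    """Indices of non-dominated schedules (all criteria minimized)."""
    S = np.asarray(schedules, dtype=float)
    keep = []
    for i, s in enumerate(S):
        dominated = np.any(np.all(S <= s, axis=1) & np.any(S < s, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# With the metacriterion the best schedule is simply argmin of the distances;
# with the Pareto filter the human decision maker picks from the returned set.
```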
McKisson, John E.; Barbosa, Fernando
2015-09-01
A method for designing a completely passive bias compensation circuit to stabilize the gain of multiple-pixel avalanche photodetector devices. The method includes determining the circuit design and component values to achieve a desired precision of gain stability. The method can be used with any temperature-sensitive device whose voltage-dependent parameter has a nominally linear temperature coefficient and must be stabilized. The circuit design includes a negative-temperature-coefficient resistor in thermal contact with the photomultiplier device to provide a varying resistance, and a second, fixed resistor forming a voltage divider whose values can be chosen to set the desired slope and intercept of the characteristic for a specific voltage source value. Adding a third resistor to the divider network provides a solution set for a set of SiPM devices that requires only a single stabilized voltage source value.
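A sketch of the design calculation implied above: with an NTC resistor in series with a fixed resistor forming a divider from a single source, the two free values can be chosen so the output hits a desired linear bias-versus-temperature characteristic at two design temperatures (the beta thermistor model and the example numbers are illustrative assumptions, not values from the patent):

```python
import numpy as np

def ntc_resistance(T_celsius, R25=10e3, beta=3950.0):
    """Simple beta-model NTC thermistor (illustrative component model)."""
    T0 = 298.15
    return R25 * np.exp(beta * (1.0 / (T_celsius + 273.15) - 1.0 / T0))

def design_divider(T1, V1, T2, V2):
    """Choose the fixed resistor Rf and source voltage Vs so that the divider
    output V(T) = Vs * Rf / (R_ntc(T) + Rf) passes through (T1, V1) and (T2, V2)."""
    a1, a2 = ntc_resistance(T1), ntc_resistance(T2)
    vs_rf = (a1 - a2) / (1.0 / V1 - 1.0 / V2)   # the product Vs * Rf
    rf = vs_rf / V1 - a1
    vs = vs_rf / rf
    return rf, vs

# Hypothetical example: bias that should rise from 27.5 V at 15 C to 28.5 V at 35 C
rf, vs = design_divider(15.0, 27.5, 35.0, 28.5)
print(f"fixed resistor ~ {rf:.0f} ohm, source ~ {vs:.2f} V")
```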
Classifying with confidence from incomplete information.
Parrish, Nathan; Anderson, Hyrum S.; Gupta, Maya R.; ...
2013-12-01
For this paper, we consider the problem of classifying a test sample given incomplete information. This problem arises naturally when data about a test sample is collected over time, or when costs must be incurred to compute the classification features. For example, in a distributed sensor network only a fraction of the sensors may have reported measurements at a certain time, and additional time, power, and bandwidth is needed to collect the complete data to classify. A practical goal is to assign a class label as soon as enough data is available to make a good decision. We formalize this goal through the notion of reliability—the probability that a label assigned given incomplete data would be the same as the label assigned given the complete data, and we propose a method to classify incomplete data only if some reliability threshold is met. Our approach models the complete data as a random variable whose distribution is dependent on the current incomplete data and the (complete) training data. The method differs from standard imputation strategies in that our focus is on determining the reliability of the classification decision, rather than just the class label. We show that the method provides useful reliability estimates of the correctness of the imputed class labels on a set of experiments on time-series data sets, where the goal is to classify the time-series as early as possible while still guaranteeing that the reliability threshold is met.
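A simplified sketch of the reliability idea: draw plausible completions of the missing features, classify each, and commit to a label only when the most frequent label's share exceeds the threshold. Drawing completions from an unconditional Gaussian fitted to the training data is an assumption for illustration; the paper conditions the completion model on the currently observed data.

```python
import numpy as np

def classify_with_reliability(x_partial, observed_mask, X_train, classifier,
                              threshold=0.9, n_draws=200, seed=0):
    """Return (label, reliability), with label None when reliability < threshold."""
    rng = np.random.default_rng(seed)
    mu, cov = X_train.mean(axis=0), np.cov(X_train, rowvar=False)
    labels = []
    for _ in range(n_draws):
        draw = rng.multivariate_normal(mu, cov)            # plausible completion
        x_full = np.where(observed_mask, x_partial, draw)  # keep observed values
        labels.append(classifier(x_full))
    values, counts = np.unique(labels, return_counts=True)
    best = counts.argmax()
    reliability = counts[best] / n_draws
    return (values[best], reliability) if reliability >= threshold else (None, reliability)
```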
A new method in accelerating PROPELLER MRI.
Li, Bing Keong; D'Arcy, Michael; Weber, Ewald; Crozier, Stuart
2008-01-01
In this work, a new method is proposed to accelerate the PROPELLER MRI operation. The proposed method uses a rotary phased array coil and a new scheme for acquiring the k-space strips and preparing the complete k-space trajectory data set. It is numerically shown that for a 12-strip PROPELLER MR brain imaging sequence, the operation time can be reduced fourfold, with no apparent loss in image quality.
Comprehensive analysis of orthologous protein domains using the HOPS database.
Storm, Christian E V; Sonnhammer, Erik L L
2003-10-01
One of the most reliable methods for protein function annotation is to transfer experimentally known functions from orthologous proteins in other organisms. Most methods for identifying orthologs operate on a subset of organisms with a completely sequenced genome, and treat proteins as single-domain units. However, it is well known that proteins are often made up of several independent domains, and there is a wealth of protein sequences from genomes that are not completely sequenced. A comprehensive set of protein domain families is found in the Pfam database. We wanted to apply orthology detection to Pfam families, but first some issues needed to be addressed. First, orthology detection becomes impractical and unreliable when too many species are included. Second, shorter domains contain less information. It is therefore important to assess the quality of the orthology assignment and avoid very short domains altogether. We present a database of orthologous protein domains in Pfam called HOPS: Hierarchical grouping of Orthologous and Paralogous Sequences. Orthology is inferred in a hierarchic system of phylogenetic subgroups using ortholog bootstrapping. To avoid the frequent errors stemming from horizontally transferred genes in bacteria, the analysis is presently limited to eukaryotic genes. The results are accessible in the graphical browser NIFAS, a Java tool originally developed for analyzing phylogenetic relations within Pfam families. The method was tested on a set of curated orthologs with experimentally verified function. In comparison to tree reconciliation with a complete species tree, our approach finds significantly more orthologs in the test set. Examples for investigating gene fusions and domain recombination using HOPS are given.
ERIC Educational Resources Information Center
Roth-Yousey, Lori; Chu, Yen Li; Reicks, Marla
2012-01-01
Objective: To understand parent beverage expectations for early adolescents (EAs) by eating occasion at home and in various settings. Methods: Descriptive study using focus group interviews and the constant comparative method for qualitative data analysis. Results: Six focus groups were completed, and 2 were conducted in Spanish. Participants (n =…
Illinois' Forests, 2005: Statistics, Methods, and Quality Assurance
Susan J. Crocker; Charles J. Barnett; Mark A. Hatfield
2013-01-01
The first full annual inventory of Illinois' forests was completed in 2005. This report contains 1) descriptive information on methods, statistics, and quality assurance of data collection, 2) a glossary of terms, 3) tables that summarize quality assurance, and 4) a core set of tabular estimates for a variety of forest resources. A detailed analysis of inventory...
DOT National Transportation Integrated Search
1974-06-01
The report synthesizes a set of satellite communications systems configurations to provide services to aircraft flying oceanic routes. These configurations are combined with access control methods to form complete systems. These systems are analyzed ...
Does user-centred design affect the efficiency, usability and safety of CPOE order sets?
Chan, Julie; Shojania, Kaveh G; Easty, Anthony C
2011-01-01
Background Application of user-centred design principles to Computerized provider order entry (CPOE) systems may improve task efficiency, usability or safety, but there is limited evaluative research of its impact on CPOE systems. Objective We evaluated the task efficiency, usability, and safety of three order set formats: our hospital's planned CPOE order sets (CPOE Test), computer order sets based on user-centred design principles (User Centred Design), and existing pre-printed paper order sets (Paper). Participants 27 staff physicians, residents and medical students. Setting Sunnybrook Health Sciences Centre, an academic hospital in Toronto, Canada. Methods Participants completed four simulated order set tasks with three order set formats (two CPOE Test tasks, one User Centred Design, and one Paper). Order of presentation of order set formats and tasks was randomized. Users received individual training for the CPOE Test format only. Main Measures Completion time (efficiency), requests for assistance (usability), and errors in the submitted orders (safety). Results 27 study participants completed 108 order sets. Mean task times were: User Centred Design format 273 s, Paper format 293 s (p=0.73 compared to UCD format), and CPOE Test format 637 s (p<0.0001 compared to UCD format). Users requested assistance in 31% of the CPOE Test format tasks, whereas no assistance was needed for the other formats (p<0.01). There were no significant differences in number of errors between formats. Conclusions The User Centred Design format was more efficient and usable than the CPOE Test format even though training was provided for the latter. We conclude that application of user-centred design principles can enhance task efficiency and usability, increasing the likelihood of successful implementation. PMID:21486886
Algorithms for Solvents and Spectral Factors of Matrix Polynomials
Shieh, Leang S.; Tsay, Yih T.; Coleman, Norman P.
1981-01-01
A generalized Newton method, based on the contracted gradient... of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial... estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right
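For context, the standard definitions behind the abstract (textbook relations, not taken from the report itself):

```latex
% For a monic matrix polynomial
A(\lambda) = \lambda^{m} I + A_{1}\lambda^{m-1} + \cdots + A_{m},
% a right solvent S and a left solvent L are square matrices satisfying
S^{m} + A_{1}S^{m-1} + \cdots + A_{m} = 0, \qquad
L^{m} + L^{m-1}A_{1} + \cdots + A_{m} = 0.
% A Newton-type iteration refines an initial estimate toward such a solvent, and a
% complete set of solvents yields a spectral factorization of A(lambda).
```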
ERIC Educational Resources Information Center
Oakley, Charlotte B.; Knight, Kathy; Hobbs, Margie; Dodd, Lacy M.; Cole, Janie
2011-01-01
Purpose/Objectives: The purpose of this investigation was to complete a formal evaluation of a project that provided specialized training for school nutrition (SN) administrators and managers on meeting children's special dietary needs in the school setting. Methods: The training was provided as part of the "Eating Good and Moving Like We…
Does History Repeat Itself? Wavelets and the Phylodynamics of Influenza A
Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.
2012-01-01
Unprecedented global surveillance of viruses will result in massive sequence data sets that require new statistical methods. These data sets press the limits of Bayesian phylogenetics as the high-dimensional parameters that comprise a phylogenetic tree increase the already sizable computational burden of these techniques. This burden often results in partitioning the data set, for example, by gene, and inferring the evolutionary dynamics of each partition independently, a compromise that results in stratified analyses that depend only on data within a given partition. However, parameter estimates inferred from these stratified models are likely strongly correlated, considering they rely on data from a single data set. To overcome this shortfall, we exploit the existing Monte Carlo realizations from stratified Bayesian analyses to efficiently estimate a nonparametric hierarchical wavelet-based model and learn about the time-varying parameters of effective population size that reflect levels of genetic diversity across all partitions simultaneously. Our methods are applied to complete genome influenza A sequences that span 13 years. We find that broad peaks and trends, as opposed to seasonal spikes, in the effective population size history distinguish individual segments from the complete genome. We also address hypotheses regarding intersegment dynamics within a formal statistical framework that accounts for correlation between segment-specific parameters. PMID:22160768
Phylogeny Reconstruction with Alignment-Free Method That Corrects for Horizontal Gene Transfer.
Bromberg, Raquel; Grishin, Nick V; Otwinowski, Zbyszek
2016-06-01
Advances in sequencing have generated a large number of complete genomes. Traditionally, phylogenetic analysis relies on alignments of orthologs, but defining orthologs and separating them from paralogs is a complex task that may not always be suited to the large datasets of the future. An alternative to traditional, alignment-based approaches are whole-genome, alignment-free methods. These methods are scalable and require minimal manual intervention. We developed SlopeTree, a new alignment-free method that estimates evolutionary distances by measuring the decay of exact substring matches as a function of match length. SlopeTree corrects for horizontal gene transfer, for composition variation and low complexity sequences, and for branch-length nonlinearity caused by multiple mutations at the same site. We tested SlopeTree on 495 bacteria, 73 archaea, and 72 strains of Escherichia coli and Shigella. We compared our trees to the NCBI taxonomy, to trees based on concatenated alignments, and to trees produced by other alignment-free methods. The results were consistent with current knowledge about prokaryotic evolution. We assessed differences in tree topology over different methods and settings and found that the majority of bacteria and archaea have a core set of proteins that evolves by descent. In trees built from complete genomes rather than sets of core genes, we observed some grouping by phenotype rather than phylogeny, for instance with a cluster of sulfur-reducing thermophilic bacteria coming together irrespective of their phyla. The source-code for SlopeTree is available at: http://prodata.swmed.edu/download/pub/slopetree_v1/slopetree.tar.gz.
ERIC Educational Resources Information Center
Brown, Ryan; Ernst, Jeremy; Clark, Aaron; DeLuca, Bill; Kelly, Daniel
2017-01-01
Educators who engage in best practices utilize a variety of instructional delivery methods to assist all learners in achieving success in concept mastery. Best practices help educators set expectations for completing activities/lessons/projects/units, differentiate instruction, integrate curricula, and provide active learning opportunities for…
Methods for converging correlation energies within the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
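The "more traditional" two-point extrapolation referred to above is commonly written with an inverse-cube dependence on the basis-set cardinal number (a standard form for correlation energies; the paper's exact scheme may differ in detail):

```latex
E_{X}^{\mathrm{corr}} = E_{\infty}^{\mathrm{corr}} + \frac{A}{X^{3}}
\;\;\Longrightarrow\;\;
E_{\infty}^{\mathrm{corr}} =
  \frac{X^{3}E_{X}^{\mathrm{corr}} - Y^{3}E_{Y}^{\mathrm{corr}}}{X^{3}-Y^{3}},
\qquad X > Y \text{ consecutive cardinal numbers.}
```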
NASA Astrophysics Data System (ADS)
Tsogbayar, Tsednee; Yeager, Danny L.
2017-01-01
We further apply the complex scaled multiconfigurational spin-tensor electron propagator method (CMCSTEP) for the theoretical determination of resonance parameters with electron-atom systems including open-shell and highly correlated (non-dynamical correlation) atoms and molecules. The multiconfigurational spin-tensor electron propagator method (MCSTEP) developed and implemented by Yeager and his coworkers for real space gives very accurate and reliable ionization potentials and electron affinities. CMCSTEP uses a complex scaled multiconfigurational self-consistent field (CMCSCF) state as an initial state along with a dilated Hamiltonian where all of the electronic coordinates are scaled by a complex factor. CMCSTEP is designed for determining resonances. We apply CMCSTEP to get the lowest ²P (Be⁻, Mg⁻) and ²D (Mg⁻, Ca⁻) shape resonances using several different basis sets, each with several complete active spaces. Many of these basis sets we employ have been used by others with different methods. Hence, we can directly compare results with different methods but using the same basis sets.
Method and system for benchmarking computers
Gustafson, John L.
1993-09-14
A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
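A minimal sketch of the benchmarking loop the patent describes: a fixed wall-clock interval during which a scalable series of tasks is worked through, with the rating derived from the degree of progress (the task structure and rating are illustrative assumptions):

```python
import time

def run_benchmark(tasks, interval_seconds):
    """tasks: iterable of callables ordered by increasing resolution.
    Returns the number of tasks completed within the fixed benchmarking interval."""
    deadline = time.monotonic() + interval_seconds
    completed = 0
    for task in tasks:
        if time.monotonic() >= deadline:
            break
        task()
        completed += 1   # partial credit for the interrupted task could also be added
    return completed

# Example: ever-finer numerical integrations as the scalable task set
tasks = [lambda n=n: sum((i / n) ** 2 for i in range(n)) for n in (10**k for k in range(3, 9))]
print("tasks completed:", run_benchmark(tasks, interval_seconds=5.0))
```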
The any particle molecular orbital grid-based Hartree-Fock (APMO-GBHF) approach
NASA Astrophysics Data System (ADS)
Posada, Edwin; Moncada, Félix; Reyes, Andrés
2018-02-01
The any particle molecular orbital grid-based Hartree-Fock approach (APMO-GBHF) is proposed as an initial step to perform multi-component post-Hartree-Fock, explicitly correlated, and density functional theory methods without basis set errors. The method has been applied to a number of electronic and multi-species molecular systems. Results of these calculations show that the APMO-GBHF total energies are comparable with those obtained at the APMO-HF complete basis set limit. In addition, results reveal a considerable improvement in the description of the nuclear cusps of electronic and non-electronic densities.
Does user-centred design affect the efficiency, usability and safety of CPOE order sets?
Chan, Julie; Shojania, Kaveh G; Easty, Anthony C; Etchells, Edward E
2011-05-01
Application of user-centred design principles to Computerized provider order entry (CPOE) systems may improve task efficiency, usability or safety, but there is limited evaluative research of its impact on CPOE systems. Objective: We evaluated the task efficiency, usability, and safety of three order set formats: our hospital's planned CPOE order sets (CPOE Test), computer order sets based on user-centred design principles (User Centred Design), and existing pre-printed paper order sets (Paper). Participants: 27 staff physicians, residents and medical students. Setting: Sunnybrook Health Sciences Centre, an academic hospital in Toronto, Canada. Methods: Participants completed four simulated order set tasks with three order set formats (two CPOE Test tasks, one User Centred Design, and one Paper). Order of presentation of order set formats and tasks was randomized. Users received individual training for the CPOE Test format only. Main measures: Completion time (efficiency), requests for assistance (usability), and errors in the submitted orders (safety). Results: 27 study participants completed 108 order sets. Mean task times were: User Centred Design format 273 s, Paper format 293 s (p=0.73 compared to UCD format), and CPOE Test format 637 s (p<0.0001 compared to UCD format). Users requested assistance in 31% of the CPOE Test format tasks, whereas no assistance was needed for the other formats (p<0.01). There were no significant differences in number of errors between formats. Conclusions: The User Centred Design format was more efficient and usable than the CPOE Test format even though training was provided for the latter. We conclude that application of user-centred design principles can enhance task efficiency and usability, increasing the likelihood of successful implementation.
Working memory component processes: isolating BOLD signal changes.
Motes, Michael A; Rypma, Bart
2010-01-15
The chronology of the component processes subserving working memory (WM) and hemodynamic response lags has hindered the use of fMRI for exploring neural substrates of WM. In the present study, however, participants completed full trials that involved encoding two or six letters, maintaining the memory set over a delay, and then deciding whether a probe was in the memory set or not. Additionally, they completed encode-only, encode-and-maintain, and encode-and-decide partial trials intermixed with the full trials. The inclusion of partial trials allowed for the isolation of BOLD signal changes to the different trial periods. The results showed that only lateral and medial prefrontal cortex regions differentially responded to the 2- and 6-letter memory sets over the trial periods, showing greater activation to 6-letter sets during the encode and maintain trial periods. Thus, the data showed the differential involvement of PFC in the encoding and maintenance of supra- and sub-capacity memory sets and show the efficacy of using fMRI partial trial methods to study WM component processes.
Learning About Cockpit Automation: From Piston Trainer to Jet Transport
NASA Technical Reports Server (NTRS)
Casner, Stephen M.
2003-01-01
Two experiments explored the idea of providing cockpit automation training to airline-bound student pilots using cockpit automation equipment commonly found in small training airplanes. In the first experiment, pilots mastered a set of tasks and maneuvers using a GPS navigation computer, autopilot, and flight director system installed in a small training airplane. Students were then tested on their ability to complete a similar set of tasks using the cockpit automation system found in a popular jet transport aircraft. Pilots were able to successfully complete 77% of all tasks in the jet transport on their first attempt. An analysis of a control group suggests that the pilots' success was attributable to the application of automation principles they had learned in the small airplane. A second experiment looked at two different ways of delivering small-airplane cockpit automation training: a self-study method and a dual-instruction method. The results showed a slight advantage for the self-study method. Overall, the results of the two studies cast a strong vote for the incorporation of cockpit automation training in curricula designed for pilots who will later transition to the jet fleet.
ERIC Educational Resources Information Center
Sousa, Fernando Cardoso; Monteiro, Ileana Pardal; Pellissier, René
2014-01-01
This article presents the development of a small-world network using an adapted version of the large-group problem-solving method "Future Search." Two management classes in a higher education setting were selected and required to plan a project. The students completed a survey focused on the frequency of communications before and after…
eulerAPE: Drawing Area-Proportional 3-Venn Diagrams Using Ellipses
Micallef, Luana; Rodgers, Peter
2014-01-01
Venn diagrams with three curves are used extensively in various medical and scientific disciplines to visualize relationships between data sets and facilitate data analysis. The area of the regions formed by the overlapping curves is often directly proportional to the cardinality of the depicted set relation or any other related quantitative data. Drawing these diagrams manually is difficult and current automatic drawing methods do not always produce appropriate diagrams. Most methods depict the data sets as circles, as they perceptually pop out as complete distinct objects due to their smoothness and regularity. However, circles cannot draw accurate diagrams for most 3-set data and so the generated diagrams often have misleading region areas. Other methods use polygons to draw accurate diagrams. However, polygons are non-smooth and non-symmetric, so the curves are not easily distinguishable and the diagrams are difficult to comprehend. Ellipses are more flexible than circles and are similarly smooth, but none of the current automatic drawing methods use ellipses. We present eulerAPE as the first method and software that uses ellipses for automatically drawing accurate area-proportional Venn diagrams for 3-set data. We describe the drawing method adopted by eulerAPE and we discuss our evaluation of the effectiveness of eulerAPE and ellipses for drawing random 3-set data. We compare eulerAPE and various other methods that are currently available and we discuss differences between their generated diagrams in terms of accuracy and ease of understanding for real world data. PMID:25032825
NASA Astrophysics Data System (ADS)
Schuckers, Michael E.; Hawley, Anne; Livingstone, Katie; Mramba, Nona
2004-08-01
Confidence intervals are an important way to assess and estimate a parameter. In the case of biometric identification devices, several approaches to confidence intervals for an error rate have been proposed. Here we evaluate six of these methods. To complete this evaluation, we simulate data from a wide variety of parameter values. These data are simulated via a correlated binary distribution. We then determine how well these methods do at what they claim to do: capture the parameter inside the confidence interval. In addition, the average widths of the various confidence intervals are recorded for each set of parameters. The complete results of this simulation are presented graphically for easy comparison. We conclude by making a recommendation regarding which method performs best.
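A sketch of the evaluation loop: simulate correlated binary error data, build an interval, and record coverage and width. A beta-binomial model is used here as one common way to induce intra-subject correlation, and the Wald-style interval is just one illustrative method; neither is claimed to match the paper's exact models.

```python
import numpy as np

def simulate_coverage(p, rho, n_subjects, m_attempts, n_reps=2000, rng=0):
    """Coverage and mean width of a subject-level Wald interval on beta-binomial data."""
    rng = np.random.default_rng(rng)
    # Beta-binomial parameterization with intra-class correlation rho
    a = p * (1.0 - rho) / rho
    b = (1.0 - p) * (1.0 - rho) / rho
    z = 1.959963984540054                # ~97.5th normal percentile
    covered, widths = 0, []
    for _ in range(n_reps):
        p_i = rng.beta(a, b, size=n_subjects)       # per-subject error rates
        errors = rng.binomial(m_attempts, p_i)      # correlated binary totals
        rates = errors / m_attempts
        est = rates.mean()
        se = rates.std(ddof=1) / np.sqrt(n_subjects)
        lo, hi = est - z * se, est + z * se
        covered += (lo <= p <= hi)
        widths.append(hi - lo)
    return covered / n_reps, float(np.mean(widths))

print(simulate_coverage(p=0.02, rho=0.1, n_subjects=100, m_attempts=10))
```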
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, R.C.; Goldberg, K.Y.; Wallack, A.S.; Canny, J.
1996-08-13
A fixture process and method is provided for developing a complete set of all admissible fixture designs for a workpiece which prevents the workpiece from translating or rotating. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vice is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs. 27 figs.
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, Randolph C.; Goldberg, Kenneth Y.; Canny, John; Wallack, Aaron S.
1999-01-01
Methods and apparatus are provided for developing a complete set of all admissible Type I and Type II fixture designs for a workpiece. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs.
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, Randolph C.; Goldberg, Kenneth Y.; Wallack, Aaron S.; Canny, John
1996-01-01
A fixture process and method is provided for developing a complete set of all admissible fixture designs for a workpiece which prevents the workpiece from translating or rotating. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vice is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs.
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, R.C.; Goldberg, K.Y.; Canny, J.; Wallack, A.S.
1999-01-05
Methods and apparatus are provided for developing a complete set of all admissible Type 1 and Type 2 fixture designs for a workpiece. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs. 44 figs.
Estimating population trends with a linear model
Bart, Jonathan; Collins, Brian D.; Morrison, R.I.G.
2003-01-01
We describe a simple and robust method for estimating trends in population size. The method may be used with Breeding Bird Survey data, aerial surveys, point counts, or any other program of repeated surveys at permanent locations. Surveys need not be made at each location during each survey period. The method differs from most existing methods in being design based, rather than model based. The only assumptions are that the nominal sampling plan is followed and that sample size is large enough for use of the t-distribution. Simulations based on two bird data sets from natural populations showed that the point estimate produced by the linear model was essentially unbiased even when counts varied substantially and 25% of the complete data set was missing. The estimating-equation approach, often used to analyze Breeding Bird Survey data, performed similarly on one data set but had substantial bias on the second data set, in which counts were highly variable. The advantages of the linear model are its simplicity, flexibility, and that it is self-weighting. A user-friendly computer program to carry out the calculations is available from the senior author.
Andrew D. Richardson; David Y. Hollinger
2007-01-01
Missing values in any data set create problems for researchers. The process by which missing values are replaced, and the data set is made complete, is generally referred to as imputation. Within the eddy flux community, the term "gap filling" is more commonly applied. A major challenge is that random errors in measured data result in uncertainty in the gap-...
Imposed Power of Breathing Associated With Use of an Impedance Threshold Device
2007-02-01
threshold device and a sham impedance threshold device. DESIGN: Prospective randomized blinded protocol. SETTING: University medical center. PATIENTS...for males). METHODS: The volunteers completed 2 trials of breathing through a face mask fitted with an active impedance threshold device set to open...at -7 cmH2O pressure, or with a sham impedance threshold device, which was identical to the active device except that it did not contain an
MacNeil Vroomen, Janet; Eekhout, Iris; Dijkgraaf, Marcel G; van Hout, Hein; de Rooij, Sophia E; Heymans, Martijn W; Bosmans, Judith E
2016-11-01
Cost and effect data often have missing data because economic evaluations are frequently added onto clinical studies where cost data are rarely the primary outcome. The objective of this article was to investigate which multiple imputation strategy is most appropriate to use for missing cost-effectiveness data in a randomized controlled trial. Three incomplete data sets were generated from a complete reference data set with 17, 35 and 50 % missing data in effects and costs. The strategies evaluated included complete case analysis (CCA), multiple imputation with predictive mean matching (MI-PMM), MI-PMM on log-transformed costs (log MI-PMM), and a two-step MI. Mean cost and effect estimates, standard errors and incremental net benefits were compared with the results of the analyses on the complete reference data set. The CCA, MI-PMM, and the two-step MI strategy diverged from the results for the reference data set when the amount of missing data increased. In contrast, the estimates of the Log MI-PMM strategy remained stable irrespective of the amount of missing data. MI provided better estimates than CCA in all scenarios. With low amounts of missing data the MI strategies appeared equivalent but we recommend using the log MI-PMM with missing data greater than 35 %.
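A minimal sketch of the predictive-mean-matching idea on log-transformed costs is given below; it assumes a single auxiliary covariate and synthetic data, and shows only one imputation draw, not the full multiple-imputation strategy evaluated in the study.

```python
# Minimal sketch: one predictive-mean-matching (PMM) imputation of costs,
# optionally on the log scale. Data and the single covariate are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 200
effect = rng.normal(0.6, 0.2, n)                          # e.g., QALYs
cost = np.exp(7 + 1.5 * effect + rng.normal(0, 0.4, n))   # right-skewed costs
miss = rng.random(n) < 0.35                               # ~35% missing costs
cost_obs = np.where(miss, np.nan, cost)                   # the incomplete data set

def pmm_impute(y, x, miss, k=5, log_scale=True):
    """Fill y[miss] by sampling observed donors with the closest predicted values."""
    obs = ~miss
    z = np.log(y[obs]) if log_scale else y[obs]
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X[obs], z, rcond=None)
    pred = X @ beta                                        # predictions for every record
    out = y.copy()
    obs_idx = np.where(obs)[0]
    for i in np.where(miss)[0]:
        donors = obs_idx[np.argsort(np.abs(pred[obs] - pred[i]))[:k]]
        out[i] = y[rng.choice(donors)]                     # draw one observed donor value
    return out

cost_imputed = pmm_impute(cost_obs, effect, miss, log_scale=True)
print("mean cost after imputation:", round(float(cost_imputed.mean()), 1))
```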
Assessing the evolutionary rate of positional orthologous genes in prokaryotes using synteny data
Lemoine, Frédéric; Lespinet, Olivier; Labedan, Bernard
2007-01-01
Background Comparison of completely sequenced microbial genomes has revealed how fluid these genomes are. Detecting synteny blocks requires reliable methods for determining the orthologs among the whole set of homologs detected by exhaustive comparisons between each pair of completely sequenced genomes. This is a complex and difficult problem in the field of comparative genomics but will help to better understand the way prokaryotic genomes are evolving. Results We have developed a suite of programs that automate three essential steps to study conservation of gene order, and validated them with a set of 107 bacteria and archaea that cover the majority of the prokaryotic taxonomic space. We identified the whole set of shared homologs between two or more species and computed the evolutionary distance separating each pair of homologs. We applied two strategies to extract from the set of homologs a collection of valid orthologs shared by at least two genomes. The first computes the Reciprocal Smallest Distance (RSD) using the PAM distances separating pairs of homologs. The second method groups homologs in families and reconstructs each family's evolutionary tree, distinguishing bona fide orthologs as well as paralogs created after the last speciation event. Although the phylogenetic tree method often succeeds where RSD fails, the reverse could occasionally be true. Accordingly, we used the data obtained with either method, or their intersection, to enumerate the orthologs that are adjacent for each pair of genomes, the Positional Orthologous Genes (POGs), and to further study their properties. Once all these synteny blocks have been detected, we showed that POGs are subject to more evolutionary constraints than orthologs outside synteny groups, whatever the taxonomic distance separating the compared organisms. Conclusion The suite of programs described in this paper allows a reliable detection of orthologs and is useful for evaluating gene order conservation in prokaryotes whatever their taxonomic distance. Thus, our approach will facilitate the rapid identification of POGs in the next few years as we expect to be inundated with thousands of completely sequenced microbial genomes. PMID:18047665
The rotate-plus-shift C-arm trajectory: complete CT data with limited angular rotation
NASA Astrophysics Data System (ADS)
Ritschl, Ludwig; Kuntz, Jan; Kachelrieß, Marc
2015-03-01
In the last decade, C-arm-based cone-beam CT has become a widely used modality for intraoperative imaging. Typically, a C-arm scan is performed using a circle-like trajectory around a region of interest. Therefore, an angular range of at least 180° plus fan-angle must be covered to ensure a completely sampled data set. This fact defines some constraints on the geometry and technical specifications of a C-arm system, for example a larger C radius or a smaller C opening, respectively. These technical modifications are usually not beneficial in terms of handling and usability of the C-arm during classical 2D applications like fluoroscopy. The method proposed in this paper relaxes the constraint of 180° plus fan-angle rotation to acquire a complete data set. The proposed C-arm trajectory requires a motorization of the orbital axis of the C and of ideally two orthogonal axes in the C plane. The trajectory consists of three parts: a rotation of the C around a defined iso-center and two translational movements parallel to the detector plane at the beginning and at the end of the rotation. Combining these three parts into one trajectory enables the acquisition of a completely sampled dataset using only 180° minus fan-angle of rotation. To evaluate the method, we show animal and cadaver scans acquired with a mobile C-arm prototype. We expect that the transition of this method into clinical routine will lead to a much broader use of intraoperative 3D imaging in a wide field of clinical applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, C.; Brizuela, H.; Heluani, S. P.
2014-05-21
The backscattering coefficient is a quantity whose measurement is fundamental for the characterization of materials with techniques that make use of particle beams, particularly when performing microanalysis. In this work, we report the results of an analytic method to calculate the backscattering and absorption coefficients of electrons in similar conditions to those of electron probe microanalysis. Starting from a five-level states ladder model in 3D, we deduced a set of integro-differential coupled equations of the coefficients with a method known as invariant embedding. By means of a procedure proposed by the authors, called the method of convergence, two types of approximate solutions for the set of equations, namely complete and simple solutions, can be obtained. Although the simple solutions were initially proposed as auxiliary forms to solve higher rank equations, they turned out to be also useful for the estimation of the aforementioned coefficients. In previous reports, we have presented results obtained with the complete solutions. In this paper, we present results obtained with the simple solutions of the coefficients, which exhibit a good degree of fit with the experimental data. Both the model and the calculation method presented here can be generalized to other techniques that make use of different sorts of particle beams.
Twomey, Michèle; Wallis, Lee A; Myers, Jonathan E
2014-07-01
To evaluate the construct of triage acuity as measured by the South African Triage Scale (SATS) against a set of reference vignettes. A modified Delphi method was used to develop a set of reference vignettes. Delphi participants completed a 2-round consensus-building process, and independently assigned triage acuity ratings to 100 written vignettes, unaware of the ratings given by others. Triage acuity ratings were summarised for all vignettes, and only those that reached 80% consensus during round 2 were included in the reference set. Triage ratings for the reference vignettes given by two independent experts using the SATS were compared with the ratings given by the international Delphi panel. Measures of sensitivity, specificity, and associated percentages of over-triage/under-triage were used to evaluate the construct of triage acuity (as measured by the SATS) by examining the association between the ratings by the two experts and the international panel. On completion of the Delphi process, 42 of the 100 vignettes reached 80% consensus on their acuity rating and made up the reference set. On average, over all acuity levels, sensitivity was 74% (CI 64% to 82%), specificity 92% (CI 87% to 94%), under-triage occurred 14% (CI 8% to 23%) and over-triage 12% (CI 8% to 23%) of the time. The results of this study provide an alternative to evaluating triage scales against the construct of acuity as measured with the SATS. This method of using 80% consensus vignettes may, however, systematically bias the validity estimate towards better performance.
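For illustration only, the sketch below shows how per-level sensitivity and specificity and overall over-/under-triage percentages can be computed from expert ratings against reference vignette ratings; the acuity levels and the six example ratings are hypothetical, not the study data.

```python
# Illustrative sketch: sensitivity, specificity and over-/under-triage rates
# from expert triage ratings against reference vignette ratings.
import numpy as np

levels = ["red", "orange", "yellow", "green"]          # highest to lowest acuity
rank = {c: i for i, c in enumerate(levels)}

reference = np.array(["red", "orange", "yellow", "green", "orange", "yellow"])
expert    = np.array(["red", "yellow", "yellow", "green", "orange", "orange"])

for level in levels:
    tp = np.sum((expert == level) & (reference == level))
    fn = np.sum((expert != level) & (reference == level))
    fp = np.sum((expert == level) & (reference != level))
    tn = np.sum((expert != level) & (reference != level))
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    print(f"{level}: sensitivity={sens:.2f} specificity={spec:.2f}")

# Over-triage: expert assigns a higher acuity than the reference;
# under-triage: expert assigns a lower acuity (smaller rank = higher acuity).
expert_rank = np.array([rank[c] for c in expert])
ref_rank = np.array([rank[c] for c in reference])
over = np.mean(expert_rank < ref_rank)
under = np.mean(expert_rank > ref_rank)
print(f"over-triage={over:.0%} under-triage={under:.0%}")
```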
NASA Astrophysics Data System (ADS)
Elantkowska, Magdalena; Ruczkowski, Jarosław; Sikorski, Andrzej; Dembczyński, Jerzy
2017-11-01
A parametric analysis of the hyperfine structure (hfs) for the even parity configurations of atomic terbium (Tb I) is presented in this work. We introduce the complete set of 4fN-core states in our high-performance computing (HPC) calculations. For calculations of the huge hyperfine structure matrix, requiring approximately 5000 hours when run on a single CPU, we propose methods utilizing a personal computer cluster or, alternatively, a cluster of Microsoft Azure virtual machines (VM). These methods give a factor-of-12 performance boost, enabling the calculations to complete in an acceptable time.
Theoretical research program to study chemical reactions in AOTV bow shock tubes
NASA Technical Reports Server (NTRS)
Taylor, P.
1986-01-01
Progress in the development of computational methods for the characterization of chemical reactions in aerobraking orbit transfer vehicle (AOTV) propulsive flows is reported. Two main areas of code development were undertaken: (1) the implementation of CASSCF (complete active space self-consistent field) and SCF (self-consistent field) analytical first derivatives on the CRAY X-MP; and (2) the installation of the complete set of electronic structure codes on the CRAY 2. In the area of application calculations the main effort was devoted to performing full configuration-interaction calculations and using these results to benchmark other methods. Preprints describing some of the systems studied are included.
Low-derivative operators of the Standard Model effective field theory via Hilbert series methods
NASA Astrophysics Data System (ADS)
Lehman, Landon; Martin, Adam
2016-02-01
In this work, we explore an extension of Hilbert series techniques to count operators that include derivatives. For sufficiently low-derivative operators, we conjecture an algorithm that gives the number of invariant operators, properly accounting for redundancies due to the equations of motion and integration by parts. Specifically, the conjectured technique can be applied whenever there is only one Lorentz invariant for a given partitioning of derivatives among the fields. At higher numbers of derivatives, equation of motion redundancies can be removed, but the increased number of Lorentz contractions spoils the subtraction of integration by parts redundancies. While restricted, this technique is sufficient to automatically recreate the complete set of invariant operators of the Standard Model effective field theory for dimensions 6 and 7 (for arbitrary numbers of flavors). At dimension 8, the algorithm does not automatically generate the complete operator set; however, it suffices for all but five classes of operators. For these remaining classes, there is a well-defined procedure to manually determine the number of invariants. Assuming our method is correct, we derive a set of 535 dimension-8 Nf = 1 operators.
Information fusion for diabetic retinopathy CAD in digital color fundus photographs.
Niemeijer, Meindert; Abramoff, Michael D; van Ginneken, Bram
2009-05-01
The purpose of computer-aided detection or diagnosis (CAD) technology has so far been to serve as a second reader. If, however, all relevant lesions in an image can be detected by CAD algorithms, use of CAD for automatic reading or prescreening may become feasible. This work addresses the question of how to fuse information from multiple CAD algorithms, operating on multiple images that comprise an exam, to determine a likelihood that the exam is normal and would not require further inspection by human operators. We focus on retinal image screening for diabetic retinopathy, a common complication of diabetes. Current CAD systems are not designed to automatically evaluate complete exams consisting of multiple images for which several detection algorithm output sets are available. Information fusion will potentially play a crucial role in enabling the application of CAD technology to the automatic screening problem. Several different fusion methods are proposed and their effect on the performance of a complete comprehensive automatic diabetic retinopathy screening system is evaluated. Experiments show that the choice of fusion method can have a large impact on system performance. The complete system was evaluated on a set of 15,000 exams (60,000 images). The best-performing fusion method obtained an area under the receiver operator characteristic curve of 0.881. This indicates that automated prescreening could be applied in diabetic retinopathy screening programs.
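Two of the simplest possible fusion rules are sketched below for intuition: taking the maximum per-image abnormality probability across an exam, and a noisy-OR combination. The probabilities are invented, and the paper's actual fusion schemes are more elaborate.

```python
# Minimal sketch of two simple fusion rules for combining per-image CAD outputs
# into a single exam-level abnormality score. Values are hypothetical.
import numpy as np

# rows = images in one exam, columns = CAD algorithms (e.g., red lesions,
# bright lesions, image quality); entries = probability that the image is abnormal
exam = np.array([
    [0.10, 0.05, 0.20],
    [0.70, 0.15, 0.10],
    [0.05, 0.02, 0.30],
    [0.20, 0.60, 0.25],
])

max_fusion = exam.max()                      # most suspicious finding wins
noisy_or = 1.0 - np.prod(1.0 - exam)         # assumes independent detections

print(f"max fusion: {max_fusion:.2f}, noisy-OR fusion: {noisy_or:.2f}")
```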
AXL.Net: Web-Enabled Case Method Instruction for Accelerating Tacit Knowledge Acquisition in Leaders
2006-11-01
individuals. Officers watched Tripwire as a group on a laptop computer and then independently completed a set of measures. After completing the measures...judgment posttest score (r = .33, p < .05). Positive affect was not, however, correlated with the behavioral judgment pretest score (r = .10, p = ns... [Correlation table fragment: Pretest .10, .05; Behavioral Judgment Posttest .33*, .13; Emphasized Cultural Issues (T1) .48**, .21; Emphasized Cultural Issues (T2) .29, .22; Note: * p ...]
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1992-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searching, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Kun; Zhao Hongmei; Wang Caixia
Bromoiodomethane photodissociation in the low-lying excited states has been characterized using unrestricted Hartree-Fock, configuration-interaction-singles, and complete active space self-consistent field calculations with the SDB-aug-cc-pVTZ, aug-cc-pVTZ, and 3-21g** basis sets. According to the results of the vertical excited energies and oscillator strengths of these low-lying excited states, bond selectivity is predicted. Subsequently, the minimum energy paths of the first excited singlet state and the third excited state for the dissociation reactions were calculated using the complete active space self-consistent field method with 3-21g** basis set. Good agreement is found between the calculations and experimental data. The relationships of excitations, the electronic structures at Franck-Condon points, and bond selectivity are discussed.
Is HO3 minimum cis or trans? An analytic full-dimensional ab initio isomerization path.
Varandas, A J C
2011-05-28
The minimum energy path for isomerization of HO(3) has been explored in detail using accurate high-level ab initio methods and techniques for extrapolation to the complete basis set limit. In agreement with other reports, the best estimates from both valence-only and all-electron single-reference methods here utilized predict the minimum of the cis-HO(3) isomer to be deeper than the trans-HO(3) one. They also show that the energy varies by less than 1 kcal mol(-1) or so over the full isomerization path. A similar result is found from valence-only multireference configuration interaction calculations with the size-extensive Davidson correction and a correlation consistent triple-zeta basis, which predict the energy difference between the two isomers to be of only Δ = -0.1 kcal mol(-1). However, single-point multireference calculations carried out at the optimum triple-zeta geometry with basis sets of the correlation consistent family but cardinal numbers up to X = 6 lead upon a dual-level extrapolation to the complete basis set limit of Δ = (0.12 ± 0.05) kcal/mol. In turn, extrapolations with the all-electron single-reference coupled-cluster method including the perturbative triples correction yield values of Δ = -0.19 and -0.03 kcal/mol when done from triple-quadruple and quadruple-quintuple zeta pairs with two basis sets of increasing quality, namely cc-cpVXZ and aug-cc-pVXZ. Yet, if a value of 0.25 kcal/mol that accounts for the effect of triple and perturbative quadruple excitations with the VTZ basis set is added, one obtains a coupled cluster estimate of Δ = (0.14 ± 0.08) kcal/mol. It is then shown for the first time from systematic ab initio calculations that the trans-HO(3) isomer is more stable than the cis one, in agreement with the available experimental evidence. Inclusion of the best reported zero-point energy difference (0.382 kcal/mol) from multireference configuration interaction calculations further enhances the relative stability to ΔE(ZPE) = (0.51 ± 0.08) kcal/mol. A scheme is also suggested to model the full-dimensional isomerization potential-energy surface using a quadratic expansion that is parametrically represented by a Fourier analysis in the torsion angle. The method illustrated at the raw and complete basis-set limit coupled-cluster levels can provide a valuable tool for a future analysis of the available (incomplete thus far) experimental rovibrational data.
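For readers unfamiliar with the basis-set extrapolations referred to above, a common two-point inverse-cubic formula is sketched below with placeholder energies; the paper's own dual-level scheme is more involved.

```python
# Sketch of a standard two-point inverse-cubic extrapolation of correlation
# energies to the complete-basis-set (CBS) limit, E_X = E_CBS + A / X**3.
# The energies below are made-up placeholders.
def cbs_two_point(e_x, x, e_y, y):
    """Return the CBS estimate from energies with cardinal numbers x < y."""
    return (e_y * y**3 - e_x * x**3) / (y**3 - x**3)

e_tz, e_qz = -0.4312, -0.4397   # hypothetical correlation energies (hartree)
print(f"E_CBS ≈ {cbs_two_point(e_tz, 3, e_qz, 4):.4f} hartree")
```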
Drilled shaft bridge foundation design parameters and procedures for bearing in SGC soils.
DOT National Transportation Integrated Search
2006-04-01
This report provides a simplified method to be used for evaluating the skin friction and tip resistance of : axially loaded drilled shafts. A summary of literature and current practice was completed and then a : comprehensive set of field and laborat...
Interventions to improve delivery of isoniazid preventive therapy: an overview of systematic reviews
2014-01-01
Background Uptake of isoniazid preventive therapy (IPT) to prevent tuberculosis has been poor, particularly in the highest risk populations. Interventions to improve IPT delivery could promote implementation. The large number of existing systematic reviews on treatment adherence has made drawing conclusions a challenge. To provide decision makers with the evidence they need, we performed an overview of systematic reviews to compare different organizational interventions to improve IPT delivery as measured by treatment completion among those at highest risk for the development of TB disease, namely child contacts or HIV-infected individuals. Methods We searched the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects (DARE), and MEDLINE up to August 15, 2012. Two authors used a standardized data extraction form and the AMSTAR instrument to independently assess each review. Results Six reviews met inclusion criteria. Interventions included changes in the setting/site of IPT delivery, use of quality monitoring mechanisms (e.g., directly observed therapy), IPT delivery integration into other healthcare services, and use of lay health workers. Most reviews reported a combination of outcomes related to IPT adherence and treatment completion rate but without a baseline or comparison rate. Generally, we found limited evidence to demonstrate that the studied interventions improved treatment completion. Conclusions While most of the interventions were not shown to improve IPT completion, integration of tuberculosis and HIV services yielded high treatment completion rates in some settings. The lack of data from high burden TB settings limits applicability. Further research to assess different IPT delivery interventions, including those that address barriers to care in at-risk populations, is urgently needed to identify the most effective practices for IPT delivery and TB control in high TB burden settings. PMID:24886159
Accurate double many-body expansion potential energy surface for the 2(1)A' state of N2O.
Li, Jing; Varandas, António J C
2014-08-28
An accurate double many-body expansion potential energy surface is reported for the 2(1)A' state of N2O. The new double many-body expansion (DMBE) form has been fitted to a wealth of ab initio points that have been calculated at the multi-reference configuration interaction level using the full-valence-complete-active-space wave function as reference and the cc-pVQZ basis set, and subsequently corrected semiempirically via double many-body expansion-scaled external correlation method to extrapolate the calculated energies to the limit of a complete basis set and, most importantly, the limit of an infinite configuration interaction expansion. The topographical features of the novel potential energy surface are then examined in detail and compared with corresponding attributes of other potential functions available in the literature. Exploratory trajectories have also been run on this DMBE form with the quasiclassical trajectory method, with the thermal rate constant so determined at room temperature significantly enhancing agreement with experimental data.
Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin
2016-01-01
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. A truncated nuclear norm has recently been proposed as a better approximation to the rank of matrix than a nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with Line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
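As a rough sketch of the underlying matrix-completion machinery, the toy loop below alternates singular-value soft-thresholding with re-imposing the observed entries; TNNR-WRE differs by leaving the largest r singular values unpenalized and by weighting the rows of the residual error, neither of which is reproduced here.

```python
# Illustrative sketch of matrix completion by iterative singular value
# soft-thresholding on the observed entries (a soft-impute-style loop).
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 40, 30, 3
truth = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # low-rank ground truth
mask = rng.random((m, n)) < 0.4                              # 40% of entries observed

def complete(observed, mask, tau=2.0, iters=300):
    X = np.where(mask, observed, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt       # shrink singular values
        X[mask] = observed[mask]                             # keep observed entries fixed
    return X

X_hat = complete(truth, mask)
err = np.linalg.norm((X_hat - truth)[~mask]) / np.linalg.norm(truth[~mask])
print(f"relative error on missing entries: {err:.3f}")
```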
Construction and completion of flux balance models from pathway databases.
Latendresse, Mario; Krummenacker, Markus; Trupp, Miles; Karp, Peter D
2012-02-01
Flux balance analysis (FBA) is a well-known technique for genome-scale modeling of metabolic flux. Typically, an FBA formulation requires the accurate specification of four sets: biochemical reactions, biomass metabolites, nutrients and secreted metabolites. The development of FBA models can be time consuming and tedious because of the difficulty in assembling completely accurate descriptions of these sets, and in identifying errors in the composition of these sets. For example, the presence of a single non-producible metabolite in the biomass will make the entire model infeasible. Other difficulties in FBA modeling are that model distributions, and predicted fluxes, can be cryptic and difficult to understand. We present a multiple gap-filling method to accelerate the development of FBA models using a new tool, called MetaFlux, based on mixed integer linear programming (MILP). The method suggests corrections to the sets of reactions, biomass metabolites, nutrients and secretions. The method generates FBA models directly from Pathway/Genome Databases. Thus, FBA models developed in this framework are easily queried and visualized using the Pathway Tools software. Predicted fluxes are more easily comprehended by visualizing them on diagrams of individual metabolic pathways or of metabolic maps. MetaFlux can also remove redundant high-flux loops, solve FBA models once they are generated and model the effects of gene knockouts. MetaFlux has been validated through construction of FBA models for Escherichia coli and Homo sapiens. Pathway Tools with MetaFlux is freely available to academic users, and for a fee to commercial users. Download from: biocyc.org/download.shtml. mario.latendresse@sri.com Supplementary data are available at Bioinformatics online.
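A toy flux balance analysis problem, solved as a linear program, is sketched below to make the setting concrete; the two-metabolite network and its bounds are invented and bear no relation to the genome-scale models handled by MetaFlux.

```python
# Toy flux balance analysis (FBA) on a 3-reaction network, solved with SciPy.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows = metabolites A, B; columns = reactions):
#   R1: -> A,  R2: A -> B,  R3: B -> (biomass, the objective)
S = np.array([
    [ 1.0, -1.0,  0.0],   # metabolite A
    [ 0.0,  1.0, -1.0],   # metabolite B
])
bounds = [(0, 10), (0, None), (0, None)]   # nutrient uptake limited to 10 units

# Maximize flux through R3 subject to the steady-state constraint S v = 0.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)            # expected [10, 10, 10]
```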
Formal methods for test case generation
NASA Technical Reports Server (NTRS)
Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)
2011-01-01
The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to use of the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.
A Remote Sensing Image Fusion Method based on adaptive dictionary learning
NASA Astrophysics Data System (ADS)
He, Tongdi; Che, Zongxi
2018-01-01
This paper discusses using a remote sensing fusion method, based on 'adaptive sparse representation' (ASP), to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused, the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, the self-adaptive weighted coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and an average is taken to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
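The sketch below mimics the overall pipeline with generic tools (random patches, a learned dictionary, sparse codes, an energy-based selection rule, patch reconstruction); it substitutes scikit-learn's dictionary learning and a crude max-energy rule for the paper's adaptive method, and uses synthetic images.

```python
# Rough sketch of sparse-representation image fusion with a learned dictionary.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(3)
img_a = rng.random((64, 64))            # stand-ins for the two source images
img_b = rng.random((64, 64))
patch_size = (8, 8)

patches_a = extract_patches_2d(img_a, patch_size)
patches_b = extract_patches_2d(img_b, patch_size)
flat_a = patches_a.reshape(len(patches_a), -1)
flat_b = patches_b.reshape(len(patches_b), -1)

# Train the dictionary on a random subset of patches drawn from both images.
train = np.vstack([flat_a, flat_b])[rng.choice(2 * len(flat_a), 500, replace=False)]
dico = MiniBatchDictionaryLearning(n_components=64, random_state=0)
dico.fit(train)

codes_a, codes_b = dico.transform(flat_a), dico.transform(flat_b)
# Pick, patch by patch, the code with larger energy (a crude stand-in for the
# adaptive regional-energy weighting rule).
pick_a = (codes_a ** 2).sum(axis=1) >= (codes_b ** 2).sum(axis=1)
fused_codes = np.where(pick_a[:, None], codes_a, codes_b)

fused_patches = (fused_codes @ dico.components_).reshape(patches_a.shape)
fused = reconstruct_from_patches_2d(fused_patches, img_a.shape)
print("fused image shape:", fused.shape)
```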
Random sampling of elementary flux modes in large-scale metabolic networks.
Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel
2012-09-15
The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
The Complete, Temperature Resolved Spectrum of Methyl Cyanide Between 200 and 277 GHZ
NASA Astrophysics Data System (ADS)
McMillan, James P.; Neese, Christopher F.; De Lucia, Frank C.
2016-06-01
We have studied methyl cyanide, one of the so-called 'astronomical weeds', in the 200--277 GHz band. We have experimentally gathered a set of intensity calibrated, complete, and temperature resolved spectra from across the temperature range of 231--351 K. Using our previously reported method of analysis, the point by point method, we are capable of generating the complete spectrum at astronomically significant temperatures. Lines of nontrivial intensity that were not previously included in the available astrophysical catalogs have been found. Lower state energies and line strengths have been found for a number of lines which are not currently present in the catalogs. The extent to which this may be useful in making assignments will be discussed. J. McMillan, S. Fortman, C. Neese, F. DeLucia, ApJ. 795, 56 (2014)
Dimethyl Ether Between 214.6 and 265.3 Ghz: the Complete, Temperature Resolved Spectrum
NASA Astrophysics Data System (ADS)
McMillan, James P.; Neese, Christopher F.; De Lucia, Frank C.
2017-06-01
We have studied dimethyl ether, one of the so-called 'astronomical weeds', in the 214.6-265.3 GHz band. We have experimentally gathered a set of intensity calibrated, complete, and temperature resolved spectra from across the temperature range of 238-391 K. Using our previously reported method of analysis, the point by point method, we are capable of generating the complete spectrum at astronomically significant temperatures. Many lines of nontrivial intensity that were not previously included in the available astrophysical catalogs have been found. Lower state energies and line strengths have been found for a number of lines which are not currently present in the catalogs. The extent to which this may be useful in making assignments will be discussed. J. McMillan, S. Fortman, C. Neese, F. DeLucia, ApJ. 795, 56 (2014)
2011-01-01
Background The 'Physical Activity Care Pathway' (a Pilot for the 'Let's Get Moving' policy) is a systematic approach to integrating physical activity promotion into the primary care setting. It combines several methods reported to support behavioural change, including brief interventions, motivational interviewing, goal setting, providing written resources, and follow-up support. This paper compares costs falling on the UK National Health Service (NHS) of implementing the care pathway using two different recruitment strategies and provides initial insights into the cost of changing physical activity behaviour. Methods A combination of a time driven variant of activity based costing, audit data through EMIS and a survey of practice managers provided patient-level cost data for 411 screened individuals. Self reported physical activity data of 70 people completing the care pathway at three month was compared with baseline using a regression based 'difference in differences' approach. Deterministic and probabilistic sensitivity analyses in combination with hypothesis testing were used to judge how robust findings are to key assumptions and to assess the uncertainty around estimates of the cost of changing physical activity behaviour. Results It cost £53 (SD 7.8) per patient completing the PACP in opportunistic centres and £191 (SD 39) at disease register sites. The completer rate was higher in disease register centres (27.3% vs. 16.2%) and the difference in differences in time spent on physical activity was 81.32 (SE 17.16) minutes/week in patients completing the PACP; so that the incremental cost of converting one sedentary adult to an 'active state' of 150 minutes of moderate intensity physical activity per week amounts to £ 886.50 in disease register practices, compared to opportunistic screening. Conclusions Disease register screening is more costly than opportunistic patient recruitment. However, additional costs come with a higher completion rate and better outcomes in terms of behavioural change in patients completing the care pathway. Further research is needed to rigorously evaluate intervention efficiency and to assess the link between behavioural change and changes in quality adjusted life years (QALYs). PMID:21605400
Zhang, Dingguo; Lin, Qiuling; Shi, Ruiyue; Wang, Lisheng; Yao, Jun; Tian, Yanhui
2018-06-18
This study aimed to evaluate the clinical efficacy, safety, and feasibility of performing endoscopic submucosal resection with a ligation device (ESMR-L) after apical mucosal incision (AMI) for the treatment of gastric subepithelial tumors originating from the muscularis propria (SET-MPs). Fourteen patients with gastric SET-MPs were treated by ESMR-L with AMI between December 2016 and May 2017. The complete resection rate, operation duration, and postoperative complications were collected. All patients were followed for 2-6 months. The complete resection rate was 100%, the mean tumor size was 10.71 ± 3.45 mm (7-18 mm), and the median operative time was 18.5 minutes. Perforation occurred in four patients, with all lesions being completely repaired endoscopically. No delayed bleeding or peritoneal signs were observed. No residual lesions or recurrence were found during the follow-up period. AMI with ESMR-L appears to be an efficient and simple method for the histological diagnosis of gastric SET-MPs, but it carries a high perforation rate and cannot guarantee cure.
Dereplication, Aggregation and Scoring Tool (DAS Tool) v1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
SIEBER, CHRISTIAN
Communities of uncultivated microbes are critical to ecosystem function and microorganism health, and a key objective of metagenomic studies is to analyze organism-specific metabolic pathways and reconstruct community interaction networks. This requires accurate assignment of genes to genomes, yet existing binning methods often fail to predict a reasonable number of genomes and report many bins of low quality and completeness. Furthermore, the performance of existing algorithms varies between samples and biotypes. Here, we present a dereplication, aggregation and scoring strategy, DAS Tool, that combines the strengths of a flexible set of established binning algorithms. DAS Tool applied to a constructed community generated more accurate bins than any automated method. Further, when applied to samples of different complexity, including soil, natural oil seeps, and the human gut, DAS Tool recovered substantially more near-complete genomes than any single binning method alone. Included were three genomes from a novel lineage. The ability to reconstruct many near-complete genomes from metagenomics data will greatly advance genome-centric analyses of ecosystems.
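A highly simplified sketch of the dereplicate-and-score idea follows: candidate bins from several tools are repeatedly reduced to the highest-scoring non-overlapping set. The marker-gene scoring here is a toy proxy and does not reflect DAS Tool's actual implementation; all contigs, genes and bins are hypothetical.

```python
# Toy dereplication of candidate genome bins by a completeness-minus-contamination
# proxy score; DAS Tool uses single-copy marker genes and more careful bookkeeping.
def score(bin_contigs, markers):
    flat = [g for c in bin_contigs for g in markers.get(c, [])]
    completeness = len(set(flat)) / 10.0                 # assume 10 expected markers
    contamination = (len(flat) - len(set(flat))) / 10.0  # duplicated markers
    return completeness - contamination

def dereplicate(candidate_bins, markers, min_score=0.3):
    selected, used = [], set()
    bins = [set(b) for b in candidate_bins]
    while bins:
        bins = [b - used for b in bins]                  # drop contigs already assigned
        bins = [b for b in bins if b]
        if not bins:
            break
        best = max(bins, key=lambda b: score(b, markers))
        if score(best, markers) < min_score:
            break
        selected.append(best)
        used |= best
        bins.remove(best)
    return selected

# Hypothetical contigs, their marker genes, and bins proposed by two tools.
markers = {"c1": ["g1", "g2"], "c2": ["g3"], "c3": ["g1"], "c4": ["g4", "g5"]}
tool_a = [["c1", "c2"], ["c3", "c4"]]
tool_b = [["c1", "c2", "c3"]]
print([sorted(b) for b in dereplicate(tool_a + tool_b, markers)])
```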
Zhang, Yong-de; Jiang, Jin-gang; Liang, Ting; Hu, Wei-ping
2011-12-01
Artificial teeth are very complicated in shape, and are not easy to grasp and manipulate accurately with a single robot. A method of tooth arrangement by multi-manipulator for complete denture manufacturing is proposed in this paper. A novel complete denture manufacturing mechanism is designed based on a multi-manipulator and a dental arch generator. A kinematics model of the multi-manipulator tooth-arrangement robot is built by an analytical method based on the tooth-arrangement principle for full dentures. Preliminary experiments on tooth arrangement are performed using the multi-manipulator tooth-arrangement robot prototype system. The multi-manipulator tooth-arrangement robot prototype system can automatically design and manufacture a set of complete dentures that is suitable for a patient according to the jaw arch parameters. The experimental results verified the validity of the kinematics model of the multi-manipulator tooth-arrangement robot and the feasibility of the complete denture manufacture strategy fulfilled by the multi-manipulator tooth-arrangement robot.
Andrusyszyn, M A; Cragg, C E; Humbert, J
2001-04-01
The relationships among multiple distance delivery methods, preferred learning style, content, and achievement were examined for primary care nurse practitioner students. A researcher-designed questionnaire was completed by 86 (71%) participants, while 6 engaged in follow-up interviews. The results of the study included: participants preferred learning by "considering the big picture"; "setting own learning plans"; and "focusing on concrete examples." Several positive associations were found: learning on own with learning by reading, and setting own learning plans; small group with learning through discussion; large group with learning new things through hearing and with having learning plans set by others. The most preferred method was print-based material and the least preferred method was audio tape. The methods most suited to particular content included video teleconferencing for counseling, political action, and transcultural issues; and video tape for physical assessment. Convenience, self-direction, and timing of learning were more important than delivery method or learning style. Preferred order of learning was reading, discussing, observing, doing, and reflecting. Recommended considerations when designing distance courses include a mix of delivery methods, specific content, outcomes, learner characteristics, and state of technology.
[Research on spectra recognition method for cabbages and weeds based on PCA and SIMCA].
Zu, Qin; Deng, Wei; Wang, Xiu; Zhao, Chun-Jiang
2013-10-01
In order to improve the accuracy and efficiency of weed identification, differences in spectral reflectance were employed to distinguish between crops and weeds. Firstly, different combinations of the Savitzky-Golay (SG) convolutional derivative and the multiplicative scattering correction (MSC) method were applied to preprocess the raw spectral data. Then the clustering analysis of various types of plants was completed by using the principal component analysis (PCA) method, and the feature wavelengths that were sensitive for classifying various types of plants were extracted according to the corresponding loading plots of the optimal principal components in the PCA results. Finally, setting the feature wavelengths as the input variables, the soft independent modeling of class analogy (SIMCA) classification method was used to identify the various types of plants. The experimental results of classifying cabbages and weeds showed that, on the basis of the optimal pretreatment by a combined application of MSC and the SG convolutional derivative with the SG parameters set to a 1st-order derivative, a 3rd-degree polynomial and 51 smoothing points, 23 feature wavelengths were extracted in accordance with the top three principal components in the PCA results. When the SIMCA method was used for classification with the previously selected 23 feature wavelengths set as the input variables, the classification rates of the modeling set and the prediction set reached 98.6% and 100%, respectively.
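The preprocessing chain reported above (MSC, then a Savitzky-Golay first derivative with a 3rd-degree polynomial and 51-point window, then PCA) can be sketched as follows; the spectra are synthetic and the code is only meant to show how the pieces fit together.

```python
# Sketch of the MSC -> Savitzky-Golay derivative -> PCA preprocessing chain.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
wavelengths = 600
spectra = rng.random((20, wavelengths)).cumsum(axis=1) / 100 + rng.normal(0, 0.01, (20, wavelengths))

def msc(X):
    """Regress each spectrum on the mean spectrum and remove slope/offset."""
    ref = X.mean(axis=0)
    out = np.empty_like(X)
    for i, row in enumerate(X):
        slope, intercept = np.polyfit(ref, row, 1)
        out[i] = (row - intercept) / slope
    return out

corrected = msc(spectra)
deriv = savgol_filter(corrected, window_length=51, polyorder=3, deriv=1, axis=1)
pca = PCA(n_components=3)
scores = pca.fit_transform(deriv)
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
```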
[Application of Stata software to test heterogeneity in meta-analysis method].
Wang, Dan; Mou, Zhen-yun; Zhai, Jun-xia; Zong, Hong-xia; Zhao, Xiao-dong
2008-07-01
To introduce the application of Stata software to heterogeneity test in meta-analysis. A data set was set up according to the example in the study, and the corresponding commands of the methods in Stata 9 software were applied to test the example. The methods used were Q-test and I2 statistic attached to the fixed effect model forest plot, H statistic and Galbraith plot. The existence of the heterogeneity among studies could be detected by Q-test and H statistic and the degree of the heterogeneity could be detected by I2 statistic. The outliers which were the sources of the heterogeneity could be spotted from the Galbraith plot. Heterogeneity test in meta-analysis can be completed by the four methods in Stata software simply and quickly. H and I2 statistics are more robust, and the outliers of the heterogeneity can be clearly seen in the Galbraith plot among the four methods.
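For reference, the heterogeneity statistics named above can be computed directly from per-study effect sizes and variances, as sketched below; the numbers are invented, and in practice Stata's meta-analysis commands (e.g., metan) report them automatically.

```python
# Small illustration of Cochran's Q, H and I^2 from study effects and variances.
import numpy as np
from scipy import stats

effects = np.array([0.30, 0.10, 0.45, 0.25, 0.60])    # per-study effect estimates
variances = np.array([0.02, 0.03, 0.05, 0.04, 0.03])

w = 1.0 / variances                                   # fixed-effect weights
pooled = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled) ** 2)
df = len(effects) - 1
p_value = stats.chi2.sf(Q, df)
H = np.sqrt(Q / df)
I2 = max(0.0, (Q - df) / Q) * 100

print(f"Q={Q:.2f} (p={p_value:.3f}), H={H:.2f}, I^2={I2:.1f}%")
```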
DFT simulations and vibrational spectra of 2-amino-2-methyl-1,3-propanediol
NASA Astrophysics Data System (ADS)
Renuga Devi, T. S.; Sharmi kumar, J.; Ramkumaar, G. R.
2014-12-01
The FTIR and FT-Raman spectra of 2-amino-2-methyl-1,3-propanediol were recorded in the regions 4000-400 cm-1 and 4000-50 cm-1, respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the augmented-correlation consistent-polarized valence double zeta (aug-cc-pVDZ) basis set. The most stable conformer was optimized and the structural and vibrational parameters were determined based on this. The complete assignments were performed on the basis of the Potential Energy Distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and Mulliken charges were calculated using both the Hartree-Fock and density functional methods with the aug-cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. 1H and 13C NMR chemical shifts of the molecule were calculated using the Gauge-Independent Atomic Orbital (GIAO) method and were compared with experimental results.
NASA Technical Reports Server (NTRS)
Dateo, Christopher E.; Walch, Stephen P.
2002-01-01
As part of NASA Ames Research Center's Integrated Process Team on Device/Process Modeling and Nanotechnology, our goal is to create/contribute to a gas-phase chemical database for use in modeling microelectronics devices. In particular, we use ab initio methods to determine chemical reaction pathways and to evaluate reaction rate coefficients. Our initial studies concern reactions involved in the dichlorosilane-hydrogen (SiCl2H2-H2) and trichlorosilane-hydrogen (SiCl3H-H2) systems. Reactant, saddle point (transition state), and product geometries and their vibrational harmonic frequencies are determined using the complete-active-space self-consistent-field (CASSCF) electronic structure method with the correlation consistent polarized valence double-zeta basis set (cc-pVDZ). Reaction pathways are constructed by following the imaginary frequency mode of the saddle point to both the reactant and product. Accurate energetics are determined using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations (CCSD(T)) extrapolated to the complete basis set limit. Using the data from the electronic structure calculations, reaction rate coefficients are obtained using conventional and variational transition state and RRKM theories.
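As a back-of-the-envelope illustration of the last step, a conventional transition-state-theory rate coefficient can be evaluated as sketched below; the partition-function ratio and barrier are placeholders, not values from the chlorosilane systems studied.

```python
# Back-of-the-envelope sketch of a conventional transition-state-theory rate
# coefficient, k(T) = (kB*T/h) * (Q_ts/Q_react) * exp(-E0/(R*T)). Inputs are
# placeholders, not results from the study.
import numpy as np

kB = 1.380649e-23      # J/K
h = 6.62607015e-34     # J*s
R = 8.314462618        # J/(mol*K)

def tst_rate(T, q_ratio, barrier_kjmol):
    """Unimolecular TST rate coefficient in s^-1 at temperature T (K)."""
    return (kB * T / h) * q_ratio * np.exp(-barrier_kjmol * 1e3 / (R * T))

for T in (800.0, 1200.0, 1600.0):
    print(T, "K:", f"{tst_rate(T, q_ratio=0.1, barrier_kjmol=150.0):.3e}", "s^-1")
```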
Technical Development and Application of Soft Computing in Agricultural and Biological Engineering
USDA-ARS?s Scientific Manuscript database
Soft computing is a set of “inexact” computing techniques, which are able to model and analyze very complex problems. For these complex problems, more conventional methods have not been able to produce cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and...
Development of Soft Computing and Applications in Agricultural and Biological Engineering
USDA-ARS?s Scientific Manuscript database
Soft computing is a set of “inexact” computing techniques, which are able to model and analyze very complex problems. For these complex problems, more conventional methods have not been able to produce cost-effective, analytical, or complete solutions. Soft computing has been extensively studied and...
CPR: A Model for Effective Goal Setting.
ERIC Educational Resources Information Center
Bey, Theresa M.
The Complete Procedural Record (CPR) method provides an opportunity for the student teacher to: (1) review theories, practices, and experiences he or she encounters in teacher preparation courses; (2) rethink the various responsibilities and tasks one will have to assume during the practice teaching experience; (3) identify the concerns one has…
42 CFR 37.52 - Method of obtaining definitive interpretations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... other diseases must be demonstrated by those physicians who desire to be B Readers by taking and passing... specified by NIOSH. Each physician who desires to take the digital version of the examination will be provided a complete set of the current NIOSH-approved standard reference digital radiographs. Physicians...
Analysis of co-evolving genes in campylobacter jejuni and C. coli
USDA-ARS?s Scientific Manuscript database
Background: The population structure of Campylobacter has been frequently studied by MLST, for which fragments of housekeeping genes are compared. We wished to determine if the used MLST genes are representative of the complete genome. Methods: A set of 1029 core gene families (CGF) was identifie...
Examining Differences between Light and Heavier Smoking Vocational Students: A Pilot Study
ERIC Educational Resources Information Center
de Araujo, Vanessa A.; Loukas, Alexandra; Gottlieb, Nell H.
2011-01-01
Objective: To examine differences between light and heavier smoking vocational/technical students in tobacco use, related behaviors, and cessation. Design: Cross-sectional. Setting and Methods: Two hundred and four smokers attending two vocational/technical colleges in east Texas, USA, completed an anonymous survey during a regularly scheduled…
Histories of Child Maltreatment and Psychiatric Disorder in Pregnant Adolescents
ERIC Educational Resources Information Center
Romano, Elisa; Zoccolillo, Mark; Paquette, Daniel
2006-01-01
Objective: The study investigated histories of child maltreatment and psychiatric disorder in a high-risk sample of pregnant adolescents. Method: Cross-sectional data were obtained for 252 pregnant adolescents from high school, hospital, and group home settings in Montreal (Canada). Adolescents completed a child maltreatment questionnaire and a…
ERIC Educational Resources Information Center
Heckman, Carolyn J.; Dykstra, Jennifer L.; Collins, Bradley N.
2011-01-01
Objective: To examine substance-related attitudes and behaviours among college students across an academic semester. Design: Pre-post quasi-experimental survey design. Setting: A large University in the Midwestern United States. Method: Surveys were completed by 299 undergraduates enrolled in three courses: drugs and behaviour, abnormal…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sjöstrand, Torbjörn; Ask, Stefan; Christiansen, Jesper R.
The Pythia program is a standard tool for the generation of events in high-energy collisions, comprising a coherent set of physics models for the evolution from a few-body hard process to a complex multiparticle final state. It contains a library of hard processes, models for initial- and final-state parton showers, matching and merging methods between hard processes and parton showers, multiparton interactions, beam remnants, string fragmentation and particle decays. It also has a set of utilities and several interfaces to external programs. Pythia 8.2 is the second main release after the complete rewrite from Fortran to C++, and now has reached such a maturity that it offers a complete replacement for most applications, notably for LHC physics studies. Lastly, the many new features should allow an improved description of data.
A theoretical study of bond selective photochemistry in CH2BrI
NASA Astrophysics Data System (ADS)
Liu, Kun; Zhao, Hongmei; Wang, Caixia; Zhang, Aihua; Ma, Siyu; Li, Zonghe
2005-01-01
Bromoiodomethane photodissociation in the low-lying excited states has been characterized using unrestricted Hartree-Fock, configuration-interaction-singles, and complete active space self-consistent field calculations with the SDB-aug-cc-pVTZ, aug-cc-pVTZ, and 3-21g** basis sets. According to the results of the vertical excited energies and oscillator strengths of these low-lying excited states, bond selectivity is predicted. Subsequently, the minimum energy paths of the first excited singlet state and the third excited state for the dissociation reactions were calculated using the complete active space self-consistent field method with 3-21g** basis set. Good agreement is found between the calculations and experimental data. The relationships of excitations, the electronic structures at Franck-Condon points, and bond selectivity are discussed.
Algorta, Guillermo Perez; Youngstrom, Eric A.; Phelps, James; Jenkins, Melissa M.; Kogos, Jennifer L.; Findling, Robert L.
2013-01-01
Family history of mental illness provides important information when evaluating pediatric bipolar disorder (PBD). However, such information is often challenging to gather within clinical settings. This study investigates the feasibility and utility of gathering family history information using an inexpensive method practical for outpatient settings. Families (N=273) completed family history, rating scales, MINI and KSADS interviews about youths 5–18 (median=11) years presenting to an outpatient clinic. Primary caregivers completed a half page Family Index of Risk for Mood issues (FIRM). All families completed the FIRM quickly and easily. Most (78%) reported 1+ relatives having history of mood or substance issues, M=3.7 (SD=3.3). A simple sum of familial mood issues discriminated cases with PBD from all other cases, AUROC=.63, p=.006. FIRM scores were specific to youth mood disorder and not ADHD or disruptive behavior disorder. FIRM scores significantly improved the detection of PBD even controlling for rating scales. No subset of family risk items performed better than the total. Family history information showed clinically meaningful discrimination of PBD. Two different approaches to clinical interpretation showed validity in these clinically realistic data. Inexpensive and clinically practical methods of gathering family history can help to improve the detection of PBD. PMID:22800090
An ab initio benchmark study of the H + CO --> HCO reaction
NASA Technical Reports Server (NTRS)
Woon, D. E.
1996-01-01
The H + CO --> HCO reaction has been characterized with correlation consistent basis sets at five levels of theory in order to benchmark the sensitivities of the barrier height and reaction ergicity to the one-electron and n-electron expansions of the electronic wave function. Single and multireference methods are compared and contrasted. The coupled cluster method RCCSD(T) was found to be in very good agreement with Davidson-corrected internally-contracted multireference configuration interaction (MRCI+Q). Second-order Moller-Plesset perturbation theory (MP2) was also employed. The estimated complete basis set (CBS) limits for the barrier height (in kcal/mol) for the five methods, including harmonic zero-point energy corrections, are MP2, 4.66; RCCSD, 4.78; RCCSD(T), 4.15; MRCI, 5.10; and MRCI+Q, 4.07. Similarly, the estimated CBS limits for the ergicity of the reaction are: MP2, -17.99; RCCSD, -13.34; RCCSD(T), -13.79; MRCI, -11.46; and MRCI+Q, -13.70. Additional basis set explorations for the RCCSD(T) method demonstrate that aug-cc-pVTZ sets, even with some functions removed, are sufficient to reproduce the CBS limits to within 0.1-0.3 kcal/mol.
Control of Supercavitation Flow and Stability of Supercavitating Motion of Bodies
2001-02-01
sign opposite to a sign of angle Vf - accidental deflection of the model Sgn M = -Sgn i. 4.3. EQUATIONS OF THE SCM DYNAMICS The most effective method of...the motion stability in interactive regime "researcher - computer" [ 16]. The complete mathematical model of the SCM motion includes a set of equations ...of solid body dynamics, equations to calculate the unsteady cavity shape and relations to calculate the acting forces. A set of dynamic equations of
Point-Process Models of Social Network Interactions: Parameter Estimation and Missing Data Recovery
2014-08-01
treating them as zero will have a de minimis impact on the results, but avoiding computing them (and computing with them) saves tremendous time. Set a... test the methods on simulated time series on artificial social networks, including some toy networks and some meant to resemble IkeNet. We conclude...the section by discussing the results in detail. In each of our tests we begin with a complete data set, whether it is real (IkeNet) or simulated. Then
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2016-05-01
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
Correlation consistent basis sets for actinides. II. The atoms Ac and Np-Lr
NASA Astrophysics Data System (ADS)
Feng, Rulin; Peterson, Kirk A.
2017-08-01
New correlation consistent basis sets optimized using the all-electron third-order Douglas-Kroll-Hess (DKH3) scalar relativistic Hamiltonian are reported for the actinide elements Ac and Np through Lr. These complete the series of sets reported previously for Th-U [K. A. Peterson, J. Chem. Phys. 142, 074105 (2015); M. Vasiliu et al., J. Phys. Chem. A 119, 11422 (2015)]. The new sets range in size from double- to quadruple-zeta and encompass both those optimized for valence (6s6p5f7s6d) and outer-core electron correlations (valence + 5s5p5d). The final sets have been contracted for both the DKH3 and eXact 2-component (X2C) Hamiltonians, yielding cc-pVnZ-DK3/cc-pVnZ-X2C sets for valence correlation and cc-pwCVnZ-DK3/cc-pwCVnZ-X2C sets for outer-core correlation (n = D, T, Q in each case). In order to test the effectiveness of the new basis sets, both atomic and molecular benchmark calculations have been carried out. In the first case, the first three atomic ionization potentials (IPs) of all the actinide elements Ac-Lr have been calculated using the Feller-Peterson-Dixon (FPD) composite approach, primarily with the multireference configuration interaction (MRCI) method. Excellent convergence towards the respective complete basis set (CBS) limits is achieved with the new sets, leading to good agreement with experiment, where these exist, after accurately accounting for spin-orbit effects using the 4-component Dirac-Hartree-Fock method. For a molecular test, the IP and atomization energy (AE) of PuO2 have been calculated also using the FPD method but using a coupled cluster approach with spin-orbit coupling accounted for using the 4-component MRCI. The present calculations yield an IP0 for PuO2 of 159.8 kcal/mol, which is in excellent agreement with the experimental electron transfer bracketing value of 162 ± 3 kcal/mol. Likewise, the calculated 0 K AE of 305.6 kcal/mol is in very good agreement with the currently accepted experimental value of 303.1 ± 5 kcal/mol. The ground state of PuO2 is predicted to be the 5Σ0g+ state.
Riggs, Karin R; Lozano, Paula; Mohelnitzky, Amy; Rudnick, Sarah; Richards, Julie
2014-01-01
Objective: To assess the feasibility and acceptability of family-based group pediatric obesity treatment in a primary care setting, to obtain an estimate of its effectiveness, and to describe participating parents’ experiences of social support for healthy lifestyle changes. Methods: We adapted an evidence-based intervention to a group format and completed six 12- to 16-week groups over 3 years. We assessed program attendance and completion, changes in child and parent body mass index (BMI; calculated as weight in kilograms divided by height in meters squared), and changes in child quality of life in a single-arm before-and-after trial. Qualitative interviews explored social support for implementing healthy lifestyle changes. Results: Thirty-eight parent-child pairs enrolled (28% of the 134 pairs invited). Of those, 24 (63%) completed the program and another 6 (16%) attended at least 4 sessions but did not complete the program. Children who completed the program achieved a mean change in BMI Z-scores (Z-BMI) of −0.1 (0.1) (p < 0.001) and significant improvement in parent-reported child quality of life (mean change = 8.5; p = 0.002). Mean BMI of parents changed by −0.9 (p = 0.003). Parents reported receiving a wide range of social support for healthy lifestyle changes and placed importance on the absence or presence of support. Conclusions: A pilot group program for family-based treatment of pediatric obesity is feasible and acceptable in a primary care setting. Change in child and parent BMI outcomes and child quality of life among completers were promising despite the pilot’s low intensity. Parent experiences with lack of social support suggest possible ways to improve retention and adherence. PMID:24937148
NASA Technical Reports Server (NTRS)
Vonderhaar, Thomas H.; Randel, David L.; Reinke, Donald L.; Stephens, Graeme L.; Ringerud, Mark A.; Combs, Cynthia L.; Greenwald, Thomas J.; Wittmeyer, Ian L.
1995-01-01
There is a well-documented requirement for a comprehensive and accurate global moisture data set to assist many important studies in atmospheric science. Currently, atmospheric water vapor measurements are made from a variety of sources including radiosondes, aircraft and surface observations, and in recent years, by various satellite instruments. Creating a global data set from a single measuring system produces results that are useful and accurate only in specific situations and/or areas. Therefore, an accurate global moisture data set has been derived from a combination of these measurement systems. Under a NASA peer-reviewed contract, STC-METSAT produced two 5-yr (1988-1992) global data sets. One is the total column (integrated) water vapor data set and the other, a global layered water vapor data set using a combination of radiosonde observations, Television and Infrared Observation Satellite (TIROS) Operational Vertical Sounder (TOVS), and Special Sensor Microwave/Imager (SSM/I) data sets. STC-METSAT also produced a companion, global, integrated liquid water data set. The complete data set (all three products) has been named NVAP, an acronym for NASA Water Vapor Project. STC-METSAT developed methods to process the data at a daily time scale and 1 x 1 deg spatial resolution.
Shaye, David A; Tollefson, Travis; Shah, Irfan; Krishnan, Gopal; Matic, Damir; Figari, Marcelo; Lim, Thiam Chye; Aniruth, Sunil; Schubert, Warren
2018-06-06
Trauma is a significant contributor to global disease, and low-income countries disproportionately shoulder this burden. Education and training are critical components in the effort to address the surgical workforce shortage. Educators can tailor training to a diverse background of health professionals in low-resource settings using competency-based curricula. We present a process for the development of a competency-based curriculum for low-resource settings in the context of craniomaxillofacial (CMF) trauma education. CMF trauma surgeons representing 7 low-, middle-, and high-income countries conducted a standardized educational curriculum development program. Patient problems related to facial injuries were identified and ranked from highest to lowest morbidity. Higher morbidity problems were categorized into 4 modules with agreed upon competencies. Methods of delivery (lectures, case discussions, and practical exercises) were selected to optimize learning of each competency. A facial injuries educational curriculum (a 1.5-day event) was tailored to health professionals with diverse training backgrounds who care for CMF trauma patients in low-resource settings. A backward planned, competency-based curriculum was organized into four modules titled: acute (emergent), eye (periorbital injuries and sight preserving measures), mouth (dental injuries and fracture care), and soft tissue injury treatments. Four courses have been completed, with pre- and post-course assessments administered. Surgeons and educators from a diverse geographic background found the backward planning curriculum development method effective in creating a competency-based facial injuries (trauma) course for health professionals in low-resource settings, where contextual aspects of shortages of surgical capacity, equipment, and emergency transportation must be considered.
NASA Astrophysics Data System (ADS)
Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho
2017-10-01
This study addresses a variant of job-shop scheduling in which jobs are grouped into job families, but they are processed individually. The problem can be found in various industrial systems, especially in reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has the component-matching requirements, it can be regarded as a job shop with job families since the components of a product constitute a job family. In particular, sequence-dependent set-ups in which set-up time depends on the job just completed and the next job to be processed are also considered. The objective is to minimize the total family flow time, where the flow time of a job family is the maximum among the completion times of the jobs within that family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
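As a minimal illustration of the objective function described above (and not the paper's algorithm), the sketch below evaluates a single-machine job sequence under sequence-dependent set-up times and returns the total family flow time; the instance data are invented, and the job-shop routing of the original problem is omitted.

```python
# Illustrative sketch (not the paper's algorithm): evaluate a single-machine job
# sequence under sequence-dependent set-up times and compute the total family
# flow time, i.e. the sum over families of the latest completion time among that
# family's jobs. The job-shop routing of the paper is omitted here.

def total_family_flow_time(sequence, proc_time, setup, family_of):
    """sequence: list of job ids in processing order
       proc_time[j]: processing time of job j
       setup[(i, j)]: set-up time when job j follows job i (i = None for the first job)
       family_of[j]: family id of job j"""
    t = 0
    prev = None
    family_completion = {}
    for j in sequence:
        t += setup[(prev, j)] + proc_time[j]
        fam = family_of[j]
        family_completion[fam] = max(family_completion.get(fam, 0), t)
        prev = j
    return sum(family_completion.values())

# Toy instance: 4 jobs, 2 families.
proc_time = {1: 3, 2: 2, 3: 4, 4: 1}
family_of = {1: "A", 2: "A", 3: "B", 4: "B"}
setup = {(None, 1): 1, (None, 2): 1, (None, 3): 2, (None, 4): 2,
         (1, 2): 0, (1, 3): 2, (1, 4): 2, (2, 1): 0, (2, 3): 2, (2, 4): 2,
         (3, 4): 0, (3, 1): 2, (3, 2): 2, (4, 3): 0, (4, 1): 2, (4, 2): 2}
print(total_family_flow_time([1, 2, 3, 4], proc_time, setup, family_of))  # 19
```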
Pasquier, C; Promponas, V J; Hamodrakas, S J
2001-08-15
A cascading system of hierarchical, artificial neural networks (named PRED-CLASS) is presented for the generalized classification of proteins into four distinct classes (transmembrane, fibrous, globular, and mixed) from information solely encoded in their amino acid sequences. The architecture of the individual component networks is kept very simple, reducing the number of free parameters (network synaptic weights) for faster training, improved generalization, and the avoidance of data overfitting. Capturing information from as few as 50 protein sequences spread among the four target classes (6 transmembrane, 10 fibrous, 13 globular, and 17 mixed), PRED-CLASS was able to obtain 371 correct predictions out of a set of 387 proteins (success rate approximately 96%) unambiguously assigned into one of the target classes. The application of PRED-CLASS to several test sets and complete proteomes of several organisms demonstrates that such a method could serve as a valuable tool in the annotation of genomic open reading frames with no functional assignment or as a preliminary step in fold recognition and ab initio structure prediction methods. Detailed results obtained for various data sets and completed genomes, along with a web server running the PRED-CLASS algorithm, can be accessed over the World Wide Web at http://o2.biol.uoa.gr/PRED-CLASS.
Double power series method for approximating cosmological perturbations
NASA Astrophysics Data System (ADS)
Wren, Andrew J.; Malik, Karim A.
2017-04-01
We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.
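For orientation only, the block below recalls the standard WKB-type ansatz for a single equation with a small parameter; the FSN double power series used in the paper generalizes this kind of expansion to coupled systems, and the notation here is generic rather than taken from the paper.

```latex
% Standard WKB-type ansatz for a single ODE with small parameter \epsilon, shown
% only for orientation; the FSN double power series generalizes this idea to
% coupled systems of equations. Notation here is generic, not the paper's.
y(t) \simeq \exp\!\left[\frac{1}{\epsilon}\sum_{n=0}^{\infty}\epsilon^{\,n} S_{n}(t)\right],
\qquad \epsilon \ll 1 .
```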
A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrie, Michael; Shadwick, B. A.
2016-01-04
Here, we present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two-dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions while keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for the relativistic Landau damping, for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently from the shape of the distribution function, the relativistic treatment features collective behaviors that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.
Bandyopadhyay, Sanghamitra; Mitra, Ramkrishna
2009-10-15
Prediction of microRNA (miRNA) target mRNAs using machine learning approaches is an important area of research. However, most of the methods suffer from either high false positive or false negative rates. One reason for this is the marked deficiency of negative examples or miRNA non-target pairs. Systematic identification of non-target mRNAs is still not addressed properly, and therefore, current machine learning approaches are compelled to rely on artificially generated negative examples for training. In this article, we have identified approximately 300 tissue-specific negative examples using a novel approach that involves expression profiling of both miRNAs and mRNAs, miRNA-mRNA structural interactions and seed-site conservation. The newly generated negative examples are validated with the pSILAC dataset, which confirms that the identified non-targets are indeed non-targets. These high-throughput tissue-specific negative examples and a set of experimentally verified positive examples are then used to build a system called TargetMiner, a support vector machine (SVM)-based classifier. In addition to assessing the prediction accuracy on cross-validation experiments, TargetMiner has been validated with a completely independent experimental test dataset. Our method outperforms 10 existing target prediction algorithms and provides a good balance between sensitivity and specificity that is not reflected in the existing methods. We achieve a significantly higher sensitivity and specificity of 69% and 67.8% based on a pool of 90 features, and 76.5% and 66.1% using a set of 30 selected features, on the completely independent test dataset. In order to establish the effectiveness of the systematically generated negative examples, the SVM is trained using a different set of negative data generated using the method in Yousef et al. A significantly higher false positive rate (70.6%) is observed when tested on the independent set, while all other factors are kept the same. Again, when an existing method (NBmiRTar) is executed with our proposed negative data, we observe an improvement in its performance. These clearly establish the effectiveness of the proposed approach of selecting the negative examples systematically. TargetMiner is now available as an online tool at www.isical.ac.in/~bioinfo_miu
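The sketch below is a generic SVM training/evaluation loop in the spirit of the classifier described above, not TargetMiner itself; the feature vectors are random placeholders standing in for the paper's sequence, structure and conservation features, and scikit-learn is assumed to be available.

```python
# Minimal sketch (not TargetMiner itself): train an SVM on positive and
# systematically derived negative miRNA-target pairs, then report sensitivity
# and specificity on a held-out set. Feature vectors below are random
# placeholders for the paper's sequence/structure/conservation features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_features = 30                                        # e.g., a reduced feature set
X_pos = rng.normal(loc=0.5, size=(300, n_features))    # placeholder positives
X_neg = rng.normal(loc=-0.5, size=(300, n_features))   # placeholder negatives
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 300 + [0] * 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

pred = clf.predict(X_te)
tp = np.sum((pred == 1) & (y_te == 1)); fn = np.sum((pred == 0) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0)); fp = np.sum((pred == 1) & (y_te == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```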
Efficient discovery of risk patterns in medical data.
Li, Jiuyong; Fu, Ada Wai-chee; Fahey, Paul
2009-01-01
This paper studies a problem of efficiently discovering risk patterns in medical data. Risk patterns are defined by a statistical metric, relative risk, which has been widely used in epidemiological research. To avoid fruitless search in the complete exploration of risk patterns, we define an optimal risk pattern set to exclude superfluous patterns, i.e. complicated patterns with lower relative risk than their corresponding simpler form patterns. We prove that mining optimal risk pattern sets conforms to an anti-monotone property that supports an efficient mining algorithm. We propose an efficient algorithm for mining optimal risk pattern sets based on this property. We also propose a hierarchical structure to present discovered patterns for easy perusal by domain experts. The proposed approach is compared with two well-known rule discovery methods, decision tree and association rule mining approaches, on benchmark data sets and applied to a real-world application. The proposed method discovers more and better quality risk patterns than a decision tree approach. The decision tree method is not designed for such applications and is inadequate for pattern exploring. The proposed method does not discover a large number of uninteresting superfluous patterns as an association mining approach does. The proposed method is more efficient than an association rule mining method. A real-world case study shows that the method reveals some interesting risk patterns to medical practitioners. The proposed method is an efficient approach to explore risk patterns. It quickly identifies cohorts of patients that are vulnerable to a risk outcome from a large data set. The proposed method is useful for exploratory study on large medical data to generate and refine hypotheses. The method is also useful for designing medical surveillance systems.
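As a reminder of the metric that defines a risk pattern, the sketch below computes relative risk from a 2x2 contingency table; the counts are hypothetical, and the mining algorithm itself is not reproduced.

```python
# Sketch (not the paper's mining algorithm): the relative-risk metric that
# defines a risk pattern, computed from a 2x2 contingency table. Counts below
# are hypothetical.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """RR = incidence among patients matching the pattern / incidence among the rest."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Pattern P covers 120 patients, of whom 30 have the adverse outcome;
# the remaining 880 patients include 44 with the outcome.
rr = relative_risk(30, 120, 44, 880)
print(f"relative risk of pattern P: {rr:.2f}")   # 0.25 / 0.05 = 5.00
```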
WE-EF-207-02: The Rotate-Plus-Shift C-Arm Trajectory: Theory and First Clinical Results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritschl, L; Kachelriess, M; Kuntz, J
Purpose: The proposed method enables the acquisition of a complete dataset for 3D reconstruction of C-arm data using less than 180° rotation. Methods: Typically a C-arm cone-beam CT scan is performed using a circle-like trajectory around a region of interest. Therefore an angular range of at least 180° plus fan-angle must be covered to ensure a completely sampled data set. This fact places some constraints on the geometry and technical specifications of a C-arm system, for example a larger C radius or a smaller C opening. This is even more important for mobile C-arm devices, which are typically used in surgical applications. To overcome these limitations we propose a new trajectory which requires only 180° minus fan-angle of rotation for a complete data set. The trajectory consists of three parts: a rotation of the C around a defined iso-center and two translational movements parallel to the detector plane at the beginning and at the end of the rotation (rotate-plus-shift trajectory). This enables the acquisition of a completely sampled dataset using only 180° minus fan-angle of rotation. Results: For the evaluation of the method we show simulated and measured data. The results show that the rotate-plus-shift scan yields image quality equivalent to the short scan, which is assumed to be the gold standard for C-arm CT today. Compared to the pure rotational scan over only 165°, the rotate-plus-shift scan shows strong improvements in image quality. Conclusion: The proposed method makes 3D imaging using C-arms with less than 180° rotation range possible. This enables integrating full 3D functionality into a C-arm device without any loss of handling and usability for 2D imaging.
A correlated ab initio study of the A2 pi <-- X2 sigma+ transition in MgCCH
NASA Technical Reports Server (NTRS)
Woon, D. E.
1997-01-01
The A2 pi <-- X2 sigma+ transition in MgCCH was studied with correlation consistent basis sets and single- and multireference correlation methods. The A2 pi excited state was characterized in detail; the X2 sigma+ ground state has been described elsewhere recently. The estimated complete basis set (CBS) limits for valence correlation, including zero-point energy corrections, are 22668, 23191, and 22795 cm-1 for the RCCSD(T), MRCI, and MRCI + Q methods, respectively. A core-valence correction of +162 cm-1 shifts the RCCSD(T) value to 22830 cm-1, in good agreement with the experimental result of 22807 cm-1.
2012-01-01
Background While research on the impact of global climate change (GCC) on ecosystems and species is flourishing, a fundamental component of biodiversity – molecular variation – has not yet received its due attention in such studies. Here we present a methodological framework for projecting the loss of intraspecific genetic diversity due to GCC. Methods The framework consists of multiple steps that combine 1) hierarchical genetic clustering methods to define comparable units of inference, 2) species accumulation curves (SAC) to infer sampling completeness, and 3) species distribution modelling (SDM) to project the genetic diversity loss under GCC. We suggest procedures for existing data sets as well as specifically designed studies. We illustrate the approach with two worked examples from a land snail (Trochulus villosus) and a caddisfly (Smicridea (S.) mucronata). Results Sampling completeness was diagnosed on the third coarsest haplotype clade level for T. villosus and the second coarsest for S. mucronata. For both species, a substantial species range loss was projected under the chosen climate scenario. However, despite substantial differences in data set quality concerning spatial sampling and sampling depth, no loss of haplotype clades due to GCC was predicted for either species. Conclusions The suggested approach presents a feasible method to tap the rich resources of existing phylogeographic data sets and guide the design and analysis of studies explicitly designed to estimate the impact of GCC on a currently still neglected level of biodiversity. PMID:23176586
7 CFR 1.670 - How must documents be filed and served under §§ 1.670 through 1.673?
Code of Federal Regulations, 2014 CFR
2014-01-01
... complete copy of the document must be served on each license party and FERC, using: (i) One of the methods..., documents must be filed using one of the methods set forth in § 1.612(b). (2) A document is considered filed on the date it is received. However, any document received after 5 p.m. at the place where the filing...
Improving Closing Task Completion in a Drugstore
ERIC Educational Resources Information Center
Fante, Rhiannon; Davis, Ora L.; Kempt, Vivian
2013-01-01
A within-subject ABAB reversal design was utilized to investigate the effects of graphic feedback and goal setting on employee closing task completion. Goal setting was contingent upon baseline performance and graphic feedback was posted weekly. It was found that goal setting and graphic feedback improved employee closing task completion.…
Node degree distribution in spanning trees
NASA Astrophysics Data System (ADS)
Pozrikidis, C.
2016-03-01
A method is presented for computing the number of spanning trees involving one link or a specified group of links, and excluding another link or a specified group of links, in a network described by a simple graph in terms of derivatives of the spanning-tree generating function defined with respect to the eigenvalues of the Kirchhoff (weighted Laplacian) matrix. The method is applied to deduce the node degree distribution in a complete or randomized set of spanning trees of an arbitrary network. An important feature of the proposed method is that the explicit construction of spanning trees is not required. It is shown that the node degree distribution in the spanning trees of the complete network is described by the binomial distribution. Numerical results are presented for the node degree distribution in square, triangular, and honeycomb lattices.
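The sketch below illustrates the underlying matrix-tree machinery only (Kirchhoff's theorem relating spanning-tree counts to Laplacian eigenvalues); it does not reproduce the paper's derivative-based inclusion and exclusion of links, and the example graph is the complete graph K4.

```python
# Sketch of the underlying matrix-tree machinery (not the paper's generating-
# function derivatives): the number of spanning trees of a connected simple graph
# equals the product of the nonzero Laplacian eigenvalues divided by the node count.
import numpy as np

def spanning_tree_count(adjacency):
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # Kirchhoff (weighted Laplacian) matrix
    eigvals = np.linalg.eigvalsh(L)         # ascending; smallest is ~0 for connected graphs
    return round(np.prod(eigvals[1:]) / A.shape[0])

# Complete graph K4: Cayley's formula gives 4^(4-2) = 16 spanning trees.
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(spanning_tree_count(K4))   # 16
```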
Houston, Lauren; Probst, Yasmine; Martin, Allison
2018-05-18
Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific in regards to recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods to monitor data quality in a clinical research setting. The scientific databases MEDLINE, Scopus and Science Direct were searched for English language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using a SDV auditing method. In total 15 publications were included. The nature and extent of SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main source of reported error. Repeated SDV audits using the same dataset demonstrated ∼40% improvement in data accuracy and completeness over time. No description was given in regards to what determines poor data quality in clinical trials. A wide range of SDV auditing methods are reported in the published literature though no uniform SDV auditing method could be determined for "best practice" in clinical trials. Published audit methodology articles are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings. Copyright © 2018. Published by Elsevier Inc.
Wang, Yan; Ma, Guangkai; An, Le; Shi, Feng; Zhang, Pei; Lalush, David S.; Wu, Xi; Pu, Yifei; Zhou, Jiliu; Shen, Dinggang
2017-01-01
Objective To obtain high-quality positron emission tomography (PET) image with low-dose tracer injection, this study attempts to predict the standard-dose PET (S-PET) image from both its low-dose PET (L-PET) counterpart and corresponding magnetic resonance imaging (MRI). Methods It was achieved by patch-based sparse representation (SR), using the training samples with a complete set of MRI, L-PET and S-PET modalities for dictionary construction. However, the number of training samples with complete modalities is often limited. In practice, many samples generally have incomplete modalities (i.e., with one or two missing modalities) that thus cannot be used in the prediction process. In light of this, we develop a semi-supervised tripled dictionary learning (SSTDL) method for S-PET image prediction, which can utilize not only the samples with complete modalities (called complete samples) but also the samples with incomplete modalities (called incomplete samples), to take advantage of the large number of available training samples and thus further improve the prediction performance. Results Validation was done on a real human brain dataset consisting of 18 subjects, and the results show that our method is superior to the SR and other baseline methods. Conclusion This work proposed a new S-PET prediction method, which can significantly improve the PET image quality with low-dose injection. Significance The proposed method is favorable in clinical application since it can decrease the potential radiation risk for patients. PMID:27187939
Full design of fuzzy controllers using genetic algorithms
NASA Technical Reports Server (NTRS)
Homaifar, Abdollah; Mccormick, ED
1992-01-01
This paper examines the applicability of genetic algorithms (GA) in the complete design of fuzzy logic controllers. While GA has been used before in the development of rule sets or high performance membership functions, the interdependence between these two components dictates that they should be designed together simultaneously. GA is fully capable of creating complete fuzzy controllers given the equations of motion of the system, eliminating the need for human input in the design loop. We show the application of this new method to the development of a cart controller.
Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots
Gilbert, Hunter B.; Webster, Robert J.
2016-01-01
Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C. PMID:27648473
The Affects of Not Reading: Hating Characters, Being Bored, Feeling Stupid
ERIC Educational Resources Information Center
Poletti, Anna; Seaboyer, Judith; Kennedy, Rosanne; Barnett, Tully; Douglas, Kate
2016-01-01
This article brings recent debates in literary studies regarding the practice of close reading into conversation with Derek Attridge's idea of "readerly hospitality" (2004) to diagnose the problem of students in undergraduate literary studies programme not completing set reading. We argue that the method of close reading depends on…
Assessing Collaborative Learning: Big Data, Analytics and University Futures
ERIC Educational Resources Information Center
Williams, Peter
2017-01-01
Assessment in higher education has focused on the performance of individual students. This focus has been a practical as well as an epistemic one: methods of assessment are constrained by the technology of the day, and in the past they required the completion by individuals under controlled conditions of set-piece academic exercises. Recent…
Accountability Groups to Enhance Language Learning in a University Intensive English Program
ERIC Educational Resources Information Center
Lippincott, Dianna
2017-01-01
This mixed methods classroom research examined if accountability groups in the lower proficiency levels of a university intensive English program would improve students' language acquisition. Students were assigned partners for the study period with whom they completed assignments inside and outside of class, as well as set goals for use of…
ERIC Educational Resources Information Center
Whitt, Ahmed; Howard, Matthew O.
2012-01-01
Objectives: The Brief Symptom Inventory (BSI) is widely used in juvenile justice settings; however, little is known regarding its factor structure in antisocial youth. The authors evaluated the BSI factor structure in a state residential treatment population. Methods: 707 adolescents completed the BSI. Exploratory and confirmatory factor analyses…
A History of Instructional Methods in Uncontracted and Contracted Braille
ERIC Educational Resources Information Center
D'Andrea, Frances Mary
2009-01-01
This literature review outlines the history of the braille code as used in the United States and Canada, illustrating how both the code itself and instructional strategies for teaching it changed over time. The review sets the stage for the research questions of the recently completed Alphabetic Braille and Contracted Braille Study.
ERIC Educational Resources Information Center
Warren, Jared S.; Nelson, Philip L.; Mondragon, Sasha A.; Baldwin, Scott A.; Burlingame, Gary M.
2010-01-01
Objective: The authors compared symptom change trajectories and treatment outcome categories in children and adolescents receiving routine outpatient mental health services in a public community mental health system and a private managed care organization. Method: Archival longitudinal outcome data from parents completing the Youth Outcome…
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
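The sketch below covers only the final step described above: projecting the eigenvalues of the candidate matrix onto the probability simplex in Euclidean distance and rebuilding a physical state. It uses a standard sort-based projection for clarity; the paper describes its own linear-time routine, so this is an illustrative stand-in rather than the authors' algorithm.

```python
# Sketch of the final step only: project the (possibly negative) eigenvalues of
# the candidate matrix mu onto the probability simplex in Euclidean distance,
# then rebuild a physical density matrix. This uses a standard sort-based
# projection; the paper's own routine is linear-time.
import numpy as np

def project_to_simplex(v):
    """Closest point (2-norm) to v among vectors with nonnegative entries summing to 1."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def nearest_physical_state(mu):
    """mu: Hermitian candidate 'density matrix' that may have negative eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(mu)
    lam = project_to_simplex(eigvals)
    return eigvecs @ np.diag(lam) @ eigvecs.conj().T

# Toy example: a 2x2 Hermitian matrix with trace 1 but one negative eigenvalue.
mu = np.array([[1.2, 0.3], [0.3, -0.2]])
rho = nearest_physical_state(mu)
print(np.linalg.eigvalsh(rho), np.trace(rho))   # nonnegative eigenvalues, trace 1
```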
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Weizhou, E-mail: wzw@lynu.edu.cn, E-mail: ybw@gzu.edu.cn; Zhang, Yu; Sun, Tao
High-level coupled cluster singles, doubles, and perturbative triples [CCSD(T)] computations with up to the aug-cc-pVQZ basis set (1924 basis functions) and various extrapolations toward the complete basis set (CBS) limit are presented for the sandwich, T-shaped, and parallel-displaced benzene⋯naphthalene complex. Using the CCSD(T)/CBS interaction energies as a benchmark, the performance of some newly developed wave function and density functional theory methods has been evaluated. The best performing methods were found to be the dispersion-corrected PBE0 functional (PBE0-D3) and spin-component scaled zeroth-order symmetry-adapted perturbation theory (SCS-SAPT0). The success of SCS-SAPT0 is very encouraging because it provides one method for energy component analysis of π-stacked complexes with 200 atoms or more. Most newly developed methods do, however, overestimate the interaction energies. The results of energy component analysis show that interaction energies are overestimated mainly due to the overestimation of dispersion energy.
Electronic and spectroscopic characterizations of SNP isomers
NASA Astrophysics Data System (ADS)
Trabelsi, Tarek; Al Mogren, Muneerah Mogren; Hochlaf, Majdi; Francisco, Joseph S.
2018-02-01
High-level ab initio electronic structure calculations were performed to characterize SNP isomers. In addition to the known linear SNP, cyc-PSN, and linear SPN isomers, we identified a fourth isomer, linear PSN, which is located ˜2.4 eV above the linear SNP isomer. The low-lying singlet and triplet electronic states of the linear SNP and SPN isomers were investigated using a multi-reference configuration interaction method and large basis set. Several bound electronic states were identified. However, their upper rovibrational levels were predicted to pre-dissociate, leading to S + PN, P + NS products, and multi-step pathways were discovered. For the ground states, a set of spectroscopic parameters were derived using standard and explicitly correlated coupled-cluster methods in conjunction with augmented correlation-consistent basis sets extrapolated to the complete basis set limit. We also considered scalar and core-valence effects. For linear isomers, the rovibrational spectra were deduced after generation of their 3D-potential energy surfaces along the stretching and bending coordinates and variational treatments of the nuclear motions.
Sjöstrand, Torbjörn; Ask, Stefan; Christiansen, Jesper R.; ...
2015-02-11
The Pythia program is a standard tool for the generation of events in high-energy collisions, comprising a coherent set of physics models for the evolution from a few-body hard process to a complex multiparticle final state. It contains a library of hard processes, models for initial- and final-state parton showers, matching and merging methods between hard processes and parton showers, multiparton interactions, beam remnants, string fragmentation and particle decays. It also has a set of utilities and several interfaces to external programs. Pythia 8.2 is the second main release after the complete rewrite from Fortran to C++, and has now reached such a maturity that it offers a complete replacement for most applications, notably for LHC physics studies. The many new features should allow an improved description of data.
Proof of a Dain inequality with charge
NASA Astrophysics Data System (ADS)
Lopes Costa, João
2010-07-01
We prove an upper bound for angular momentum and charge in terms of the mass for electro-vacuum asymptotically flat axisymmetric initial data sets with simply connected orbit space. This completes the work started in (Chruściel and Costa 2009 Class. Quantum Grav. 26 235013 (arXiv:gr-qc/0909.5625)) where this charged Dain inequality was first presented but where the proof of the main result, based on the methods of Chruściel et al (Ann. Phys. 2008 323 2591-613 (arXiv:gr-qc/0712.4064v2)), was only sketched. Here we present a complete proof while simplifying the methods suggested by Chruściel and Costa (2009 Class. Quantum Grav. 26 235013 (arXiv:gr-qc/0909.5625)).
Kriging for Spatial-Temporal Data on the Bridges Supercomputer
NASA Astrophysics Data System (ADS)
Hodgess, E. M.
2017-12-01
Currently, kriging of spatial-temporal data is slow and limited to relatively small vector sizes. We have developed a method on the Bridges supercomputer, at the Pittsburgh Supercomputing Center, which uses a combination of the tools R, Fortran, the Message Passing Interface (MPI), OpenACC, and special R packages for big data. This combination of tools now permits us to complete tasks which previously could not be completed, or took literally hours to complete. We ran simulation studies from a laptop against the supercomputer. We also look at "real world" data sets, such as the Irish wind data, and some weather data. We compare the timings. We note that the timings are surprisingly good.
Recurrent-neural-network-based Boolean factor analysis and its application to word clustering.
Frolov, Alexander A; Husek, Dusan; Polyakov, Pavel Yu
2009-07-01
The objective of this paper is to introduce a neural-network-based algorithm for word clustering as an extension of the neural-network-based Boolean factor analysis algorithm (Frolov, 2007). It is shown that this extended algorithm supports an even more complex model of signals that are supposed to be related to textual documents. It is hypothesized that every topic in textual data is characterized by a set of words which coherently appear in documents dedicated to a given topic. The appearance of each word in a document is coded by the activity of a particular neuron. In accordance with the Hebbian learning rule implemented in the network, sets of coherently appearing words (treated as factors) create tightly connected groups of neurons, hence revealing them as attractors of the network dynamics. The found factors are eliminated from the network memory by the Hebbian unlearning rule, facilitating the search for other factors. Topics related to the found sets of words can be identified based on the words' semantics. To make the method complete, a special technique based on a Bayesian procedure has been developed for the following purposes: first, to provide a complete description of factors in terms of component probability, and second, to enhance the accuracy of classification of signals to determine whether a signal contains the factor. Since it is assumed that every word may possibly contribute to several topics, the proposed method might be related to the method of fuzzy clustering. In this paper, we show that the results of Boolean factor analysis and fuzzy clustering are not contradictory, but complementary. To demonstrate the capabilities of this approach, the method is applied to two types of textual data on neural networks in two different languages. The obtained topics and corresponding words are at a good level of agreement despite the fact that identical topics in Russian and English conferences contain different sets of keywords.
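The fragment below sketches only the Hebbian learning and unlearning updates mentioned above, on a toy binary word-document matrix; it is not the full Boolean factor analysis algorithm, and the documents, factor, and unlearning strength are invented.

```python
# Tiny sketch of the Hebbian learning/unlearning idea only (not the full Boolean
# factor analysis algorithm): binary document vectors strengthen connections
# between co-occurring words; a found factor is then "unlearned" by subtracting
# its outer product (with an arbitrary strength here) so further factors emerge.
import numpy as np

n_words = 6
W = np.zeros((n_words, n_words))

documents = np.array([
    [1, 1, 1, 0, 0, 0],   # topic 1: words 0-2 co-occur
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],   # topic 2: words 3-5 co-occur
])

for x in documents:                       # Hebbian learning
    W += np.outer(x, x)
np.fill_diagonal(W, 0)

factor = np.array([1, 1, 1, 0, 0, 0])     # a recovered attractor (factor)
W -= 2 * np.outer(factor, factor)         # Hebbian unlearning of that factor
np.fill_diagonal(W, 0)
print(W)
```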
Diagnostic Peptide Discovery: Prioritization of Pathogen Diagnostic Markers Using Multiple Features
Carmona, Santiago J.; Sartor, Paula A.; Leguizamón, María S.; Campetella, Oscar E.; Agüero, Fernán
2012-01-01
The availability of complete pathogen genomes has renewed interest in the development of diagnostics for infectious diseases. Synthetic peptide microarrays provide a rapid, high-throughput platform for immunological testing of potential B-cell epitopes. However, their current capacity prevents the experimental screening of complete “peptidomes”. Therefore, computational approaches for prediction and/or prioritization of diagnostically relevant peptides are required. In this work we describe a computational method to assess a defined set of molecular properties for each potential diagnostic target in a reference genome. Properties such as sub-cellular localization or expression level were evaluated for the whole protein. At a higher resolution (short peptides), we assessed a set of local properties, such as repetitive motifs, disorder (structured vs natively unstructured regions), trans-membrane spans, genetic polymorphisms (conserved vs. divergent regions), predicted B-cell epitopes, and sequence similarity against human proteins and other potential cross-reacting species (e.g. other pathogens endemic in overlapping geographical locations). A scoring function based on these different features was developed, and used to rank all peptides from a large eukaryotic pathogen proteome. We applied this method to the identification of candidate diagnostic peptides in the protozoan Trypanosoma cruzi, the causative agent of Chagas disease. We measured the performance of the method by analyzing the enrichment of validated antigens in the high-scoring top of the ranking. Based on this measure, our integrative method outperformed alternative prioritizations based on individual properties (such as B-cell epitope predictors alone). Using this method we ranked 10 million 12-mer overlapping peptides derived from the complete T. cruzi proteome. Experimental screening of 190 high-scoring peptides allowed the identification of 37 novel epitopes with diagnostic potential, while none of the low-scoring peptides showed significant reactivity. Many of the metrics employed are dependent on standard bioinformatic tools and data, so the method can be easily extended to other pathogen genomes. PMID:23272069
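As an illustration of the kind of multi-feature scoring function described above (not the study's actual function), the sketch below combines normalized per-peptide feature values into a single prioritization score by a weighted sum; the feature names, weights, and peptide identifiers are placeholders.

```python
# Illustrative sketch only: combining per-peptide feature values into a single
# prioritization score by a weighted sum. Feature names, weights, and peptide
# identifiers are placeholders, not the study's actual scoring function.

WEIGHTS = {
    "predicted_epitope": 2.0,     # B-cell epitope predictor output
    "disorder":          1.0,     # natively unstructured region
    "repeat_motif":      1.0,     # part of a repetitive motif
    "conservation":      1.0,     # conserved across strains
    "expression":        1.5,     # protein expression evidence
    "human_similarity": -2.0,     # penalize similarity to human proteins
}

def score_peptide(features):
    """features: dict mapping feature name -> value normalized to [0, 1]."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

candidates = {
    "PEP_000123": {"predicted_epitope": 0.9, "disorder": 0.8, "repeat_motif": 1.0,
                   "conservation": 0.7, "expression": 0.6, "human_similarity": 0.0},
    "PEP_004567": {"predicted_epitope": 0.4, "disorder": 0.2, "repeat_motif": 0.0,
                   "conservation": 0.9, "expression": 0.3, "human_similarity": 0.8},
}
ranking = sorted(candidates, key=lambda p: score_peptide(candidates[p]), reverse=True)
print(ranking)
```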
A novel method for purification of the endogenously expressed fission yeast Set2 complex.
Suzuki, Shota; Nagao, Koji; Obuse, Chikashi; Murakami, Yota; Takahata, Shinya
2014-05-01
Chromatin-associated proteins are heterogeneously and dynamically composed. To gain a complete understanding of DNA packaging and basic nuclear functions, it is important to generate a comprehensive inventory of these proteins. However, biochemical purification of chromatin-associated proteins is difficult and is accompanied by concerns over complex stability, protein solubility and yield. Here, we describe a new method for optimized purification of the endogenously expressed fission yeast Set2 complex, histone H3K36 methyltransferase. Using the standard centrifugation procedure for purification, approximately half of the Set2 protein separated into the insoluble chromatin pellet fraction, making it impossible to recover the large amounts of soluble Set2. To overcome this poor recovery, we developed a novel protein purification technique termed the filtration/immunoaffinity purification/mass spectrometry (FIM) method, which eliminates the need for centrifugation. Using the FIM method, in which whole cell lysates were filtered consecutively through eight different pore sizes (53-0.8μm), a high yield of soluble FLAG-tagged Set2 was obtained from fission yeast. The technique was suitable for affinity purification and produced a low background. A mass spectrometry analysis of anti-FLAG immunoprecipitated proteins revealed that Rpb1, Rpb2 and Rpb3, which have all been reported previously as components of the budding yeast Set2 complex, were isolated from fission yeast using the FIM method. In addition, other subunits of RNA polymerase II and its phosphatase were also identified. In conclusion, the FIM method is valid for the efficient purification of protein complexes that separate into the insoluble chromatin pellet fraction during centrifugation. Copyright © 2014 Elsevier Inc. All rights reserved.
Core outcome sets in women's and newborn health: a systematic review.
Duffy, Jmn; Rolph, R; Gale, C; Hirsch, M; Khan, K S; Ziebland, S; McManus, R J
2017-09-01
Variation in outcome collection and reporting is a serious hindrance to progress in our specialty; therefore, over 80 journals have come together to support the development, dissemination, and implementation of core outcome sets. This study systematically reviewed and characterised registered, progressing, or completed core outcome sets relevant to women's and newborn health. Systematic search using the Core Outcome Measures in Effectiveness Trial initiative and the Core Outcomes in Women's and Newborn Health initiative databases. Registry entries, protocols, systematic reviews, and core outcome sets. Descriptive statistics to describe characteristics and results. There were 49 core outcome sets registered in maternal and newborn health, with the majority registered in 2015 (n = 22; 48%) or 2016 (n = 16; 32%). Benign gynaecology (n = 8; 16%) and newborn health (n = 3; 6%) are currently under-represented. Twenty-four (52%) core outcome sets were funded by international (n = 1; <1%), national (n = 18; 38%), and regional (n = 4; 8%) bodies. Seven protocols were published. Twenty systematic reviews have characterised the inconsistency in outcome reporting across a broad range of relevant healthcare conditions. Four core outcome sets were completed: reconstructive breast surgery (11 outcomes), preterm birth (13 outcomes), epilepsy in pregnancy (29 outcomes), and maternity care (48 outcomes). The quantitative, qualitative, and consensus methods used to develop core outcome sets varied considerably. Core outcome sets are currently being developed across women's and newborn health, although coverage of topics is variable. Development of further infrastructure to develop, disseminate, and implement core outcome sets is urgently required. Forty-nine women's and newborn core outcome sets registered. 50% funded. 7 protocols, 20 systematic reviews, and 4 core outcome sets published. @coreoutcomes @jamesmnduffy. © 2017 Royal College of Obstetricians and Gynaecologists.
Gulmans, J; Vollenbroek-Hutten, M M R; Van Gemert-Pijnen, J E W C; Van Harten, W H
2007-10-01
Owing to the involvement of multiple professionals from various institutions, integrated care settings are prone to suboptimal patient care communication. To assure continuity, communication gaps should be identified for targeted improvement initiatives. However, available assessment methods are often one-sided evaluations not appropriate for integrated care settings. We developed an evaluation approach that takes into account the multiple communication links and evaluation perspectives inherent to these settings. In this study, we describe this approach, using the integrated care setting of Cerebral Palsy as illustration. The approach follows a three-step mixed design in which the results of each step are used to mark out the subsequent step's focus. The first step patient questionnaire aims to identify quality gaps experienced by patients, comparing their expectancies and experiences with respect to patient-professional and inter-professional communication. Resulting gaps form the input of in-depth interviews with a subset of patients to evaluate underlying factors of ineffective communication. Resulting factors form the input of the final step's focus group meetings with professionals to corroborate and complete the findings. By combining methods, the presented approach aims to minimize limitations inherent to the application of single methods. The comprehensiveness of the approach enables its applicability in various integrated care settings. Its sequential design allows for in-depth evaluation of relevant quality gaps. Further research is needed to evaluate the approach's feasibility in practice. In our subsequent study, we present the results of the approach in the integrated care setting of children with Cerebral Palsy in three Dutch care regions.
NASA Astrophysics Data System (ADS)
Al-Farisi, B. L.; Tjandrakirana; Agustini, R.
2018-01-01
Students’ communication skills receive little attention in learning activities at school, even though communication skills are demanded of 21st-century students by the new Indonesian curriculum (K13). This study focuses on drilling students’ communication skills through science, environment, technology, and society (SETS)-based learning. The research is a pre-experimental design with a one-shot case study model involving 10 ninth-grade students of SMPN 2 Manyar, Gresik. The research data were collected through observation using a communication observation sheet. The data were analyzed using a descriptive qualitative method. The results showed that students’ communication skills reached the mastery criteria set in the curriculum, both individually and classically. The fundamental result of this research is that SETS-based learning can be used to drill students’ communication skills in the K13 context.
Coenen, Samuel; Ferech, Matus; Haaijer‐Ruskamp, Flora M; Butler, Chris C; Stichele, Robert H Vander; Verheij, Theo J M; Monnet, Dominique L; Little, Paul; Goossens, Herman
2007-01-01
Background and objective Indicators to measure the quality of healthcare are increasingly used by healthcare professionals and policy makers. In the context of increasing antimicrobial resistance, this study aimed to develop valid drug‐specific quality indicators for outpatient antibiotic use in Europe, derived from European Surveillance of Antimicrobial Consumption (ESAC) data. Methods 27 experts (15 countries), in a European Science Foundation workshop, built on the expertise within the European Drug Utilisation Research Group, the General Practice Respiratory Infections Network, the ESCMID Study Group on Primary Care Topics, the Belgian Antibiotic Policy Coordination Committee, the World Health Organization, ESAC, and other experts. A set of proposed indicators was developed using 1997–2003 ESAC data. Participants scored the relevance of each indicator to reducing antimicrobial resistance, patient health benefit, cost effectiveness and public health policy makers (scale: 1 (completely disagree) to 9 (completely agree)). The scores were processed according to the UCLA‐RAND appropriateness method. Indicators were judged relevant if the median score was not in the 1–6 interval and if there was consensus (number of scores within the 1–3 interval was fewer than one third of the panel). From the relevant indicators providing overlapping information, the one with the highest scores was selected for the final set of quality indicators—values were updated with 2004 ESAC data. Results 22 participants (12 countries) completed scoring of a set of 22 proposed indicators. Nine were rated as relevant antibiotic prescribing indicators on all four dimensions; five were rated as relevant if only relevance to reducing antimicrobial resistance and public health policy makers was taken into account. A final set of 12 indicators was selected. Conclusion 12 of the proposed ESAC‐based quality indicators for outpatient antibiotic use in Europe have face validity and are potentially applicable. These indicators could be used to better describe antibiotic use in ambulatory care and assess the quality of national antibiotic prescribing patterns in Europe. PMID:18055888
The Researches on Cycle-Changeable Generation Settlement Method
NASA Astrophysics Data System (ADS)
XU, Jun; LONG, Suyan; LV, Jianhu
2018-03-01
Based on an analysis of the business characteristics and problems of price adjustment, a cycle-changeable generation settlement method is proposed that supports settlement over any time cycle, together with a complete set of solutions covering the creation of settlement tasks, the splitting of metered energy over time periods, the generation of fixed-cycle energy totals, and net energy splitting. The overall design flow of cycle-changeable settlement is also given. The method supports multiple price adjustments within a month and is an effective solution for reducing the cost of month-after price adjustment.
Conditional data watchpoint management
Burdick, Dean Joseph; Vaidyanathan, Basu
2010-08-24
A method, system and computer program product for managing a conditional data watchpoint in a set of instructions being traced is shown in accordance with illustrative embodiments. In one particular embodiment, the method comprises initializing a conditional data watchpoint and determining the watchpoint has been encountered. Upon that determination, examining a current instruction context associated with the encountered watchpoint prior to completion of the current instruction execution, further determining a first action responsive to a positive context examination; otherwise, determining a second action.
NASA Astrophysics Data System (ADS)
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
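The sketch below shows one simple way such a fit could be set up (synthetic data, simplified model): an exponential mass-depletion function is fitted to early-time CMD rates with SciPy, and the initial mass is read off as the integral of the fitted curve. The study's exact functional forms, data handling, and fitting choices may differ.

```python
# Sketch (synthetic data, simplified model): fit an exponential mass-depletion
# function to early-time contaminant mass discharge (CMD) data and estimate the
# initial mass as the integral of the fitted curve. Not the study's exact setup.
import numpy as np
from scipy.optimize import curve_fit

def exp_cmd(t, m0, k):
    """CMD rate model: discharge decays exponentially; total recoverable mass = m0."""
    return m0 * k * np.exp(-k * t)

# Synthetic "measured" CMD rates (kg/day) over the first year of SVE operation.
t_days = np.linspace(0, 365, 25)
true_m0, true_k = 5000.0, 0.004                  # kg, 1/day (invented values)
rng = np.random.default_rng(1)
cmd_obs = exp_cmd(t_days, true_m0, true_k) * (1 + 0.05 * rng.standard_normal(t_days.size))

params, _ = curve_fit(exp_cmd, t_days, cmd_obs, p0=(1000.0, 0.01))
m0_est, k_est = params
print(f"estimated initial mass: {m0_est:.0f} kg (true value used here: {true_m0:.0f} kg)")
```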
Using rewards and penalties to obtain desired subject performance
NASA Technical Reports Server (NTRS)
Cook, M.; Jex, H. R.; Stein, A. C.; Allen, R. W.
1981-01-01
The use of operant conditioning procedures, specifically negative reinforcement, to achieve stable learning behavior is described. The critical tracking test (CTT), a method of detecting human operator impairment, was tested. A pass level is set for each subject based on that subject's asymptotic skill level while sober. It is critical that training be complete before the individualized pass level is set so that impairment can be detected. The results provide a more general basis for the application of reward/penalty structures in manual control research.
Hastings, Justine F.; Bryant, Jennifer E.
2015-01-01
Objective. To examine pharmacy students’ ownership of, use of, and preference for using a mobile device in a practice setting. Methods. Eighty-one pharmacy students were recruited and completed a pretest that collected information about their demographics and mobile devices and also had them rank the iPhone, iPad mini, and iPad for preferred use in a pharmacy practice setting. Students used the 3 devices to perform pharmacy practice-related tasks and then completed a posttest to again rank the devices for preferred use in a pharmacy practice setting. Results. The iPhone was the most commonly owned mobile device (59.3% of students), and the iPad mini was the least commonly owned (18.5%). About 70% of the students used their mobile devices at least once a week in a pharmacy practice setting. The iPhone was the most commonly used device in a practice setting (46.9% of students), and the iPod Touch was the least commonly used device (1.2%). The iPad mini was the most preferred device for use in a pharmacy practice setting prior to performing pharmacy practice-related tasks (49.4% of students), and was preferred by significantly more students after performing the tasks (70.4%). Conclusion. Pharmacy students commonly use their mobile devices in pharmacy practice settings and most selected the iPad mini as the preferred device for use in a practice setting even though it was the device owned by the fewest students. PMID:25861103
Three Dimensional Reconstruction of Large Cultural Heritage Objects Based on UAV Video and TLS Data
NASA Astrophysics Data System (ADS)
Xu, Z.; Wu, T. H.; Shen, Y.; Wu, L.
2016-06-01
This paper investigates the synergetic use of an unmanned aerial vehicle (UAV) and a terrestrial laser scanner (TLS) in the 3D reconstruction of cultural heritage objects. Rather than capturing still images, the UAV, equipped with a consumer digital camera, is used to collect dynamic video to overcome its limited endurance capacity. A set of 3D point clouds is then generated from the video image sequences using automated structure-from-motion (SfM) and patch-based multi-view stereo (PMVS) methods. The TLS is used to collect information that is beyond the reach of UAV imaging, e.g., parts of the building facades. A coarse-to-fine method is introduced to integrate the two sets of point clouds, from UAV image reconstruction and TLS scanning, into a complete 3D reconstruction. For increased reliability, a variant of the ICP algorithm using local terrain-invariant regions is introduced for the combined registration. The experimental study is conducted on the Tulou cultural heritage buildings in Fujian province, China, focusing on one of the Tulou clusters built several hundred years ago. Results show a digital 3D model of the Tulou cluster with complete coverage and textural information. This paper demonstrates the usability of the proposed method for efficient 3D reconstruction of heritage objects based on UAV video and TLS data.
DFT simulations and vibrational spectra of 2-amino-2-methyl-1,3-propanediol.
Renuga Devi, T S; Sharmi kumar, J; Ramkumaar, G R
2014-12-10
The FTIR and FT-Raman spectra of 2-amino-2-methyl-1,3-propanediol were recorded in the regions 4000-400cm(-1) and 4000-50cm(-1) respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the augmented-correlation consistent-polarized valence double zeta (aug-cc-pVDZ) basis set. The most stable conformer was optimized and the structural and vibrational parameters were determined based on this. The complete assignments were performed on the basis of the Potential Energy Distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and Mulliken charges were calculated using both the Hartree-Fock and density functional methods with the aug-cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. (1)H and (13)C NMR chemical shifts of the molecule were calculated using the Gauge-Independent Atomic Orbital (GIAO) method and were compared with experimental results. Copyright © 2014 Elsevier B.V. All rights reserved.
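For orientation, a ground-state calculation of the kind described above (B3LYP with the aug-cc-pVDZ basis) could be set up as in the sketch below; the use of PySCF and the small stand-in geometry (methanol rather than the title compound) are assumptions of this illustration.

```python
# Sketch of a B3LYP/aug-cc-pVDZ ground-state calculation of the type used
# above.  PySCF and the rough methanol geometry are assumptions; the study
# itself optimized conformers of 2-amino-2-methyl-1,3-propanediol.
from pyscf import gto, dft

mol = gto.M(
    atom="""C  0.000  0.000  0.000
            O  1.400  0.000  0.000
            H -0.540  0.930  0.000
            H -0.540 -0.930  0.000
            H -0.360  0.000  1.020
            H  1.760  0.000 -0.890""",
    basis="aug-cc-pvdz",
)
mf = dft.RKS(mol)
mf.xc = "b3lyp"
energy = mf.kernel()            # SCF ground-state energy in Hartree
print(f"B3LYP/aug-cc-pVDZ energy: {energy:.6f} Eh")
```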
A Hierarchical Building Segmentation in Digital Surface Models for 3D Reconstruction
Yan, Yiming; Gao, Fengjiao; Deng, Shupei; Su, Nan
2017-01-01
In this study, a hierarchical method for segmenting buildings in a digital surface model (DSM), which is used in a novel framework for 3D reconstruction, is proposed. Most 3D reconstructions of buildings are model-based. However, the limitations of these methods are overreliance on completeness of the offline-constructed models of buildings, and the completeness is not easily guaranteed since in modern cities buildings can be of a variety of types. Therefore, a model-free framework using high precision DSM and texture-images buildings was introduced. There are two key problems with this framework. The first one is how to accurately extract the buildings from the DSM. Most segmentation methods are limited by either the terrain factors or the difficult choice of parameter-settings. A level-set method are employed to roughly find the building regions in the DSM, and then a recently proposed ‘occlusions of random textures model’ are used to enhance the local segmentation of the buildings. The second problem is how to generate the facades of buildings. Synergizing with the corresponding texture-images, we propose a roof-contour guided interpolation of building facades. The 3D reconstruction results achieved by airborne-like images and satellites are compared. Experiments show that the segmentation method has good performance, and 3D reconstruction is easily performed by our framework, and better visualization results can be obtained by airborne-like images, which can be further replaced by UAV images. PMID:28125018
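As a rough illustration of the first step (level-set extraction of raised building regions from a DSM), the sketch below applies a morphological Chan-Vese level set to a synthetic DSM; the toy data and the use of scikit-image are assumptions of this sketch, not the authors' implementation.

```python
# Rough illustration of a level-set segmentation of building regions in a
# DSM.  The synthetic DSM and scikit-image's morphological Chan-Vese are
# assumptions of this sketch.
import numpy as np
from skimage.segmentation import morphological_chan_vese

# synthetic 200x200 DSM: gently sloping terrain with two raised "buildings"
y, x = np.mgrid[0:200, 0:200]
dsm = 0.01 * x + 0.005 * y
dsm[40:90, 50:110] += 12.0     # building 1, ~12 m tall
dsm[120:170, 130:180] += 8.0   # building 2, ~8 m tall

# evolve the level set; the raised regions separate into one of the two phases
mask = morphological_chan_vese(dsm, 100, init_level_set="checkerboard", smoothing=2)
print("pixels in the foreground phase (buildings, up to label inversion):",
      int(mask.sum()))
```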
Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos
2014-01-01
Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866
Completed Beltrami-Michell formulation for analyzing mixed boundary value problems in elasticity
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Kaljevic, Igor; Hopkins, Dale A.; Saigal, Sunil
1995-01-01
In elasticity, the method of forces, wherein stress parameters are considered as the primary unknowns, is known as the Beltrami-Michell formulation (BMF). The existing BMF can only solve stress boundary value problems; it cannot handle the more prevalent displacement or mixed boundary value problems of elasticity. Therefore, this formulation, which has restricted application, could not become a true alternative to Navier's displacement method, which can solve all three types of boundary value problems. The restrictions in the BMF have been alleviated by augmenting the classical formulation with a novel set of conditions identified as the boundary compatibility conditions. This new method, which completes the classical force formulation, has been termed the completed Beltrami-Michell formulation (CBMF). The CBMF can solve general elasticity problems with stress, displacement, and mixed boundary conditions in terms of stresses as the primary unknowns. The CBMF is derived from the stationary condition of the variational functional of the integrated force method. In the CBMF, stresses for kinematically stable structures can be obtained without any reference to the displacements either in the field or on the boundary. This paper presents the CBMF and its derivation from the variational functional of the integrated force method. Several examples are presented to demonstrate the applicability of the completed formulation for analyzing mixed boundary value problems under thermomechanical loads. Selected example problems include a cylindrical shell wherein membrane and bending responses are coupled, and a composite circular plate.
ERIC Educational Resources Information Center
Mulvaney, Caroline A.; Watson, Michael C.; Walsh, Patrick
2013-01-01
Objective: To examine the provision of practical safety education by Child Safety Education Coalition (CSEC) organizations in England. Design: A postal survey. Setting: Providers of child practical safety education who were also part of CSEC. Methods: In February 2010 all CSEC organizations were sent a self-completion postal questionnaire which…
ERIC Educational Resources Information Center
Dennis, Leslie K.; Lowe, John B.; Snetselaar, Linda G.
2009-01-01
Objective: To examine the importance of tanning among students in relation to attitudes and knowledge regarding skin cancer prevention. Design: A cross-sectional survey. Setting: College students at a major Midwestern university. Methods: Students were recruited to complete a self-administered questionnaire that included information on…
Addressing Size Stereotypes: A Weight Bias and Weight-Related Teasing Intervention among Adolescents
ERIC Educational Resources Information Center
Miyairi, Maya; Reel, Justine J.; Próspero, Moisés; Okang, Esther N.
2015-01-01
Purpose: The purpose of this study was to evaluate a weight-related teasing prevention program implemented for both female and male students in a school setting. Methods: Junior High School students (N = 143) in seventh grade were invited to participate in the program. One hundred eighteen participants completed pre- and posttest surveys to assess…
ERIC Educational Resources Information Center
Rada, Robert E.
2013-01-01
Individuals with autism can be quite challenging to treat in a routine dental-office setting, especially when extensive dental treatment and disruptive behavioral issues exist. Individuals with autism may also be at higher risk for oral disease. Frequently, general anesthesia is the only method to facilitate completion of the needed dental…
ERIC Educational Resources Information Center
Rodriguez, Eva L.
2009-01-01
The popularity of using online instruction (both in blended and complete distance learning) in higher education settings is increasing (Appana, 2008; Newton, 2006; Oh, 2006). Occupational therapy educators are using blended learning methods under the assumption that this learning platform will facilitate in their students the required level of…
Partitioning error components for accuracy-assessment of near-neighbor methods of imputation
Albert R. Stage; Nicholas L. Crookston
2007-01-01
Imputation is applied for two quite different purposes: to supply missing data to complete a data set for subsequent modeling analyses or to estimate subpopulation totals. Error properties of the imputed values have different effects in these two contexts. We partition errors of imputation derived from similar observation units as arising from three sources:...
ERIC Educational Resources Information Center
Schneider, E. W.
The Interface System is a comprehensive method for developing and managing computer-assisted instructional courses or computer-managed instructional courses composed of sets of instructional modules. Each module is defined by one or more behavioral objectives and by a list of prerequisite modules that must be completed successfully before the…
Nutrition Education and Body Mass Index in Grades K-12: A Systematic Review
ERIC Educational Resources Information Center
Price, Cayla; Cohen, Deborah; Pribis, Peter; Cerami, Jean
2017-01-01
Background: Overweight and obese body mass index (BMI) status affects an increasing number of children in the United States. The school setting has been identified as a focus area to implement obesity prevention programs. Methods: A database search of PubMed, Education Search Complete, and Cumulative Index to Nursing and Allied Health Literature…
Effect of hydration on the stability of fullerene-like silica molecules
NASA Astrophysics Data System (ADS)
Filonenko, O. V.; Lobanov, V. V.
2011-05-01
The hydration of fullerene-like silica molecules was studied by the density functional method (exchange-correlation functional B3LYP, basis set 6-31G**). It was demonstrated that completely coordinated structures transform to more stable hydroxylated ones during hydrolysis. These in turn react with H2O molecules with the formation of hydrogen bonds.
Young Children's Attitudes toward Peers with Intellectual Disabilities: Effect of the Type of School
ERIC Educational Resources Information Center
Georgiadi, Maria; Kalyva, Efrosini; Kourkoutas, Elias; Tsakiris, Vlastaris
2012-01-01
Background: This study explored typically developing children's attitudes towards peers with intellectual disabilities, with special reference to the type of school they attended. Materials and Methods: Two hundred and fifty-six Greek children aged 9-10 (135 in inclusive settings) completed a questionnaire and an adjective list by Gash ("European…
ERIC Educational Resources Information Center
Githembe, Purity Kanini
2009-01-01
The purpose of this study was to examine involvement of African refugee parents in the education of their elementary school children. The setting of the study was Northern and Southern Texas. African refugee parents and their children's teachers completed written surveys and also participated in interviews. In the study's mixed-method design,…
ERIC Educational Resources Information Center
Dinsmore, Daniel L.; Parkinson, Meghan M.
2013-01-01
Although calibration has been widely studied, questions remain about how best to capture confidence ratings, how to calculate continuous variable calibration indices, and on what exactly students base their reported confidence ratings. Undergraduates in a research methods class completed a prior knowledge assessment, two sets of readings and…
Missing value imputation in DNA microarrays based on conjugate gradient method.
Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh
2012-02-01
Analysis of gene expression profiles needs a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm based on the conjugate gradient (CG) method is proposed to estimate missing values. The k-nearest neighbors of the gene with the missing entry are first selected based on the absolute values of their Pearson correlation coefficients. Then a subset of genes among the k-nearest neighbors is labeled as the best similar ones. The CG algorithm, with this subset as its input, is then used to estimate the missing values. Our proposed CG-based algorithm (CGimpute) is evaluated on different data sets. The results are compared with the sequential local least squares (SLLSimpute), Bayesian principal component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average normalized root mean square error (NRMSE) and relative NRMSE in different data sets with various missing rates show that CGimpute outperforms the other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
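A minimal sketch of the neighbor-selection plus conjugate-gradient step described above is given below; the toy expression matrix, the value of k, and the use of SciPy's cg solver on the normal equations are assumptions of this illustration, not the exact CGimpute procedure.

```python
# Sketch of the CGimpute idea: pick the k most-correlated neighbor genes,
# solve a least-squares fit with conjugate gradient, and use the fitted
# weights to estimate the missing entry.  Toy data and k are assumptions.
import numpy as np
from scipy.sparse.linalg import cg

def impute_entry(expr, gene, sample, k=5):
    """Estimate expr[gene, sample], which is assumed to be NaN."""
    target = expr[gene]
    observed = ~np.isnan(target)
    observed[sample] = False                      # never use the missing column

    # candidate neighbors: genes fully observed on the needed columns
    candidates = [g for g in range(expr.shape[0])
                  if g != gene and not np.isnan(expr[g, observed]).any()
                  and not np.isnan(expr[g, sample])]
    corr = [abs(np.corrcoef(expr[g, observed], target[observed])[0, 1])
            for g in candidates]
    nearest = [candidates[i] for i in np.argsort(corr)[-k:]]

    # least squares  N w ~= y  solved via CG on the normal equations
    N = expr[nearest][:, observed].T              # (n_observed, k)
    y = target[observed]
    w, _ = cg(N.T @ N, N.T @ y)
    return float(expr[nearest, sample] @ w)

rng = np.random.default_rng(1)
expr = rng.normal(size=(50, 12))
expr[0, 3] = np.nan
print("imputed value:", impute_entry(expr, 0, 3))
```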
The use of the wavelet cluster analysis for asteroid family determination
NASA Technical Reports Server (NTRS)
Benjoya, Phillippe; Slezak, E.; Froeschle, Claude
1992-01-01
Asteroid family determination has long been dependent on the analysis method used. A new cluster analysis based on the wavelet transform has allowed an automatic definition of families with a degree of significance versus randomness. This method is rather general and can be applied to any kind of structural analysis; here we concentrate on its main features. The analysis has been performed on the set of 4100 asteroid proper elements computed by Milani and Knezevic (see Milani and Knezevic 1990). Twenty-one families have been found, and the influence of the chosen metric has been tested. The results have been compared to those of Zappala et al. (see Zappala et al. 1990), obtained with a completely different method applied to the same set of data. For the first time, good agreement has been found between the results of both methods, not only for the big, well-known families but also for the smallest ones.
Akram, Pakeeza; Liao, Li
2017-12-06
Identification of common genes associated with comorbid diseases can be critical in understanding their pathobiological mechanism. This work presents a novel method to predict missing common genes associated with a disease pair. Searching for missing common genes is formulated as an optimization problem that minimizes the network-based module separation between two subgraphs produced by mapping disease-associated genes onto the interactome. Using cross validation on more than 600 disease pairs, our method achieves a significantly higher average receiver operating characteristic (ROC) score of 0.95, compared to a baseline ROC score of 0.60 using randomized data. Prediction of missing common genes aims to complete the gene set associated with a comorbid disease pair for a better understanding of biological intervention. It will also be useful for gene-targeted therapeutics related to comorbid diseases. This method can be further considered for prediction of missing edges to complete the subgraph associated with a disease pair.
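The objective being minimized is a network-based module separation between the two disease subgraphs; the sketch below computes a separation of this kind (in the style of the s_AB measure of Menche et al.) on a toy graph with NetworkX. The stand-in interactome and gene sets are assumptions, and the authors' exact objective may differ.

```python
# Illustration of a network-based module separation of the kind minimized
# above (Menche-style s_AB).  The toy interactome and gene sets are
# assumptions of this sketch.
import networkx as nx

def mean_min_distance(G, src, dst, exclude_self=False):
    dists = []
    for a in src:
        lengths = nx.single_source_shortest_path_length(G, a)
        cand = [lengths[b] for b in dst
                if b in lengths and not (exclude_self and b == a)]
        if cand:
            dists.append(min(cand))
    return sum(dists) / len(dists)

def separation(G, A, B):
    d_ab = (mean_min_distance(G, A, B) + mean_min_distance(G, B, A)) / 2
    d_aa = mean_min_distance(G, A, A, exclude_self=True)
    d_bb = mean_min_distance(G, B, B, exclude_self=True)
    return d_ab - (d_aa + d_bb) / 2          # s_AB < 0 suggests overlapping modules

G = nx.karate_club_graph()                   # stand-in for the interactome
disease_a = {0, 1, 2, 3}
disease_b = {2, 3, 8, 30, 33}
print("s_AB =", round(separation(G, disease_a, disease_b), 3))
```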
PathogenFinder--distinguishing friend from foe using bacterial whole genome sequence data.
Cosentino, Salvatore; Voldby Larsen, Mette; Møller Aarestrup, Frank; Lund, Ole
2013-01-01
Although the majority of bacteria are harmless or even beneficial to their host, others are highly virulent and can cause serious diseases, and even death. Due to the constantly decreasing cost of high-throughput sequencing there are now many completely sequenced genomes available from both human pathogenic and innocuous strains. The data can be used to identify gene families that correlate with pathogenicity and to develop tools to predict the pathogenicity of newly sequenced strains, investigations that previously were mainly done by means of more expensive and time consuming experimental approaches. We describe PathogenFinder (http://cge.cbs.dtu.dk/services/PathogenFinder/), a web-server for the prediction of bacterial pathogenicity by analysing the input proteome, genome, or raw reads provided by the user. The method relies on groups of proteins, created without regard to their annotated function or known involvement in pathogenicity. The method has been built to work with all taxonomic groups of bacteria and using the entire training-set, achieved an accuracy of 88.6% on an independent test-set, by correctly classifying 398 out of 449 completely sequenced bacteria. The approach here proposed is not biased on sets of genes known to be associated with pathogenicity, thus the approach could aid the discovery of novel pathogenicity factors. Furthermore the pathogenicity prediction web-server could be used to isolate the potential pathogenic features of both known and unknown strains.
NASA Astrophysics Data System (ADS)
Renuga Devi, T. S.; Sharmi kumar, J.; Ramkumaar, G. R.
2015-02-01
The FTIR and FT-Raman spectra of 2-(cyclohexylamino)ethanesulfonic acid were recorded in the regions 4000-400 cm-1 and 4000-50 cm-1 respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the correlation consistent-polarized valence double zeta (cc-pVDZ) basis set and the 6-311++G(d,p) basis set. The most stable conformer was optimized and the structural and vibrational parameters were determined based on this. The complete assignments were performed based on the Potential Energy Distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and atomic charges were calculated using both the Hartree-Fock and density functional methods with the cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. 1H and 13C NMR chemical shifts of the molecule were calculated using the Gauge Including Atomic Orbital (GIAO) method and were compared with experimental results. The stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using Natural Bond Orbital (NBO) analysis. The first-order hyperpolarizability (β) and Molecular Electrostatic Potential (MEP) of the molecule were computed using DFT calculations. Electron-density-based local reactivity descriptors such as Fukui functions were calculated to explain the chemically reactive sites in the molecule.
Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.
Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao
2017-11-01
Hashing has proved to be an attractive technique for fast nearest neighbor search over big data. Compared with projection-based hashing methods, prototype-based ones have stronger power to generate discriminative binary codes for data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from ineffective coding that utilizes the complete set of binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and a code set of varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys fast training that is linear in the number of training data. We further devise a distributed framework for large-scale learning, which can significantly speed up the training of ABQ in the distributed environments that are now widely deployed in many areas. Extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with relative performance gains of up to 58.84%.
Implementation of a Flipped Classroom for Nuclear Medicine Physician CME.
Komarraju, Aparna; Bartel, Twyla B; Dickinson, Lisa A; Grant, Frederick D; Yarbrough, Tracy L
2018-06-21
Increasingly, emerging technologies are expanding instructional possibilities, with new methods being adopted to improve knowledge acquisition and retention. Within medical education, many new techniques have been employed in the undergraduate setting, with less utilization thus far in the continuing medical education (CME) sphere. This paper discusses the use of a new method for CME-the "flipped classroom," widely used in undergraduate medical education. This method engages learners by providing content before the live ("in class") session that aids in preparation and fosters in-class engagement. A flipped classroom method was employed using an online image-rich case-based module and quiz prior to a live CME session at a national nuclear medicine meeting. The preparatory material provided a springboard for in-depth discussion at the live session-a case-based activity utilizing audience response technology. Study participants completed a survey regarding their initial experience with this new instructional method. In addition, focus group interviews were conducted with session attendees who had or had not completed the presession material; transcripts were qualitatively analyzed. Quantitative survey data (completed by two-thirds of the session attendees) suggested that the flipped method was highly valuable and met attendee educational objectives. Analysis of focus group data yielded six themes broadly related to two categories-benefits of the flipped method for CME and programmatic considerations for successfully implementing the flipped method in CME. Data from this study have proven encouraging and support further investigations around the incorporation of this innovative teaching method into CME for nuclear imaging specialists.
Li Manni, Giovanni; Smart, Simon D; Alavi, Ali
2016-03-08
A novel stochastic Complete Active Space Self-Consistent Field (CASSCF) method has been developed and implemented in the Molcas software package. A two-step procedure is used, in which the CAS configuration interaction secular equations are solved stochastically with the Full Configuration Interaction Quantum Monte Carlo (FCIQMC) approach, while orbital rotations are performed using an approximated form of the Super-CI method. This new method does not suffer from the strong combinatorial limitations of standard MCSCF implementations using direct schemes and can handle active spaces well in excess of those accessible to traditional CASSCF approaches. The density matrix formulation of the Super-CI method makes this step independent of the size of the CI expansion, depending exclusively on one- and two-body density matrices with indices restricted to the relatively small number of active orbitals. No sigma vectors need to be stored in memory for the FCIQMC eigensolver--a substantial gain in comparison to implementations using the Davidson method, which require three or more vectors of the size of the CI expansion. Further, no orbital Hessian is computed, circumventing limitations on basis set expansions. Like the parent FCIQMC method, the present technique is scalable on massively parallel architectures. We present in this report the method and its application to the free-base porphyrin, Mg(II) porphyrin, and Fe(II) porphyrin. In the present study, active spaces up to 32 electrons and 29 orbitals in orbital expansions containing up to 916 contracted functions are treated with modest computational resources. Results are quite promising even without accounting for the correlation outside the active space. The systems here presented clearly demonstrate that large CASSCF calculations are possible via FCIQMC-CASSCF without limitations on basis set size.
Rahim, Ruzairi Abdul; Fazalul Rahiman, Mohd Hafiz; Leong, Lai Chen; Chan, Kok San; Pang, Jon Fea
2008-01-01
The main objective of this project is to implement the multiple fan beam projection technique using optical fibre sensors with the aim of achieving a high data acquisition rate. The multiple fan beam projection technique here is defined as allowing more than one emitter to transmit light at the same time using the switch-mode fan beam method. For the thirty-two pairs of sensors used, the 2-projection technique and the 4-projection technique are investigated. Sixteen sets of projections complete one frame of light emission for the 2-projection technique, while eight sets of projections complete one frame of light emission for the 4-projection technique. To facilitate the data acquisition process, a PIC microcontroller and a sample-and-hold circuit are used. This paper summarizes the hardware configuration and design for this project. PMID:27879885
Method and apparatus for checking the stability of a setup for making reflection type holograms
NASA Technical Reports Server (NTRS)
Lackner, H. G. (Inventor)
1974-01-01
A method and apparatus are described for checking the stability of a setup for recording reflection-type (white light) holograms. Two sets of interference fringes are simultaneously obtained, one giving information about coherence and stability of the setup alone and the other demonstrating coherence of the entire system, including the holographic recording plate. Special emphasis is given to the stability of the plate, due to the fact that any minute vibration might severely degrade or completely destroy the recording.
Teasdale, Luisa C; Köhler, Frank; Murray, Kevin D; O'Hara, Tim; Moussalli, Adnan
2016-09-01
The qualification of orthology is a significant challenge when developing large, multiloci phylogenetic data sets from assembled transcripts. Transcriptome assemblies have various attributes, such as fragmentation, frameshifts and mis-indexing, which pose problems to automated methods of orthology assessment. Here, we identify a set of orthologous single-copy genes from transcriptome assemblies for the land snails and slugs (Eupulmonata) using a thorough approach to orthology determination involving manual alignment curation, gene tree assessment and sequencing from genomic DNA. We qualified the orthology of 500 nuclear, protein-coding genes from the transcriptome assemblies of 21 eupulmonate species to produce the most complete phylogenetic data matrix for a major molluscan lineage to date, both in terms of taxon and character completeness. Exon capture targeting 490 of the 500 genes (those with at least one exon >120 bp) from 22 species of Australian Camaenidae successfully captured sequences of 2825 exons (representing all targeted genes), with only a 3.7% reduction in the data matrix due to the presence of putative paralogs or pseudogenes. The automated pipeline Agalma retrieved the majority of the manually qualified 500 single-copy gene set and identified a further 375 putative single-copy genes, although it failed to account for fragmented transcripts resulting in lower data matrix completeness when considering the original 500 genes. This could potentially explain the minor inconsistencies we observed in the supported topologies for the 21 eupulmonate species between the manually curated and 'Agalma-equivalent' data set (sharing 458 genes). Overall, our study confirms the utility of the 500 gene set to resolve phylogenetic relationships at a range of evolutionary depths and highlights the importance of addressing fragmentation at the homolog alignment stage for probe design. © 2016 John Wiley & Sons Ltd.
Eye-tracking-based assessment of cognitive function in low-resource settings.
Forssman, Linda; Ashorn, Per; Ashorn, Ulla; Maleta, Kenneth; Matchado, Andrew; Kortekangas, Emma; Leppänen, Jukka M
2017-04-01
Early development of neurocognitive functions in infants can be compromised by poverty, malnutrition and lack of adequate stimulation. Optimal management of neurodevelopmental problems in infants requires assessment tools that can be used early in life, and are objective and applicable across economic, cultural and educational settings. The present study examined the feasibility of infrared eye tracking as a novel and highly automated technique for assessing visual-orienting and sequence-learning abilities as well as attention to facial expressions in young (9-month-old) infants. Techniques piloted in a high-resource laboratory setting in Finland (N=39) were subsequently field-tested in a community health centre in rural Malawi (N=40). Parents' perception of the acceptability of the method (Finland 95%, Malawi 92%) and percentages of infants completing the whole eye-tracking test (Finland 95%, Malawi 90%) were high, and percentages of valid test trials (Finland 69-85%, Malawi 68-73%) were satisfactory at both sites. Test completion rates were slightly higher for eye tracking (90%) than for traditional observational tests (87%) in Malawi. The predicted response pattern indicative of specific cognitive function was replicated in Malawi, but Malawian infants exhibited lower response rates and slower processing speed across tasks. High test completion rates and the replication of the predicted test patterns in a novel environment in Malawi support the feasibility of eye tracking as a technique for assessing infant development in low-resource settings. Further research is needed on the test-retest stability and predictive validity of the eye-tracking scores in low-income settings. Published by the BMJ Publishing Group Limited.
Gijsbers, H J H; Lauret, G J; van Hofwegen, A; van Dockum, T A; Teijink, J A W; Hendriks, H J M
2016-06-01
The aim of the study was to develop quality indicators (QIs) for physiotherapy management of patients with intermittent claudication (IC) in the Netherlands. As part of an international six-step method to develop QIs, an online survey Delphi-procedure was completed. After two Delphi-rounds a validation round was performed. Twenty-six experts were recruited to participate in this study. Twenty-four experts completed two Delphi-rounds. A third round was conducted inviting 1200 qualified and registered physiotherapists of the Dutch integrated care network 'Claudicationet' to validate a draft set of quality indicators. Out of 83 potential QIs in the Dutch physiotherapy guideline on 'Intermittent claudication', consensus among the experts selected nine indicators. All nine quality indicators were validated by 300 physiotherapists. A final set of nine indicators was derived from (1) a Dutch evidence-based physiotherapy guideline, (2) an expert Delphi procedure and (3) a validation by 300 physiotherapists. This set of indicators should be validated in clinical practice. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Blocking for Sequential Political Experiments
Moore, Sally A.
2013-01-01
In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects “trickle in” to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion. PMID:24143061
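The sketch below illustrates sequential assignment of "trickle-in" subjects so that continuous covariates stay balanced across two arms, in the spirit of the biased-coin and minimization procedures the paper builds on; the imbalance measure (difference in covariate means) and the 0.8 biased-coin probability are assumptions of the sketch, not the authors' exact method.

```python
# Toy sequential blocking: assign each arriving subject to the arm that
# best balances covariate means, with a biased coin to preserve randomness.
# The imbalance measure and 0.8 probability are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)

def imbalance(arm0, arm1):
    if not arm0 or not arm1:
        return 0.0
    return float(np.abs(np.mean(arm0, axis=0) - np.mean(arm1, axis=0)).sum())

def assign(covariates, p_favour=0.8):
    arms = {0: [], 1: []}
    labels = []
    for x in covariates:                      # subjects arrive one at a time
        imb = {a: imbalance(arms[a] + [x], arms[1 - a]) for a in (0, 1)}
        better = min(imb, key=imb.get)        # arm that keeps covariates balanced
        choice = better if rng.random() < p_favour else 1 - better
        arms[choice].append(x)
        labels.append(choice)
    return labels

subjects = rng.normal(size=(40, 3))           # e.g., age, income, past turnout
labels = assign(subjects)
print("arm sizes:", labels.count(0), labels.count(1))
```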
Capturing Reality at Centre Block
NASA Astrophysics Data System (ADS)
Boulanger, C.; Ouimet, C.; Yeomans, N.
2017-08-01
The Centre Block of Canada's Parliament buildings, a National Historic Site of Canada, is set to undergo a major rehabilitation project that will take approximately 10 years to complete. In preparation for this work, Heritage Conservation Services (HCS) of Public Services and Procurement Canada has been completing heritage documentation of the entire site, which includes laser scanning of all interior rooms and accessible confined spaces such as attics and other similar areas. Other documentation completed includes detailed photogrammetric documentation of rooms and areas of high heritage value. Some of these high heritage value spaces present certain challenges such as accessibility due to the height and the size of the spaces. Another challenge is the poor lighting conditions, requiring the use of flash or strobe lighting to either complement or completely eliminate the available ambient lighting. All the spaces captured at this higher level of detail were also captured with laser scanning. This allowed the team to validate the information and conduct a quality review of the photogrammetric data. As a result of this exercise, the team realized that in most, if not all, cases the photogrammetric data was more detailed and of higher quality than the terrestrial laser scanning data. The purpose and motivation of this paper is to present these findings, as well as to provide the advantages and disadvantages of the two methods and data sets.
Cecil, L.D.; Knobel, L.L.; Wegner, S.J.; Moore, L.L.
1989-01-01
Water from four wells completed in the Snake River Plain aquifer was sampled as part of the U.S. Geological Survey's quality assurance program to evaluate the effect of filtration and preservation methods on strontium-90 concentrations in groundwater at the Idaho National Engineering Laboratory. Water from each well was filtered through either a 0.45-micrometer membrane or a 0.1-micrometer membrane filter; unfiltered samples also were collected. Two sets of filtered and two sets of unfiltered samples were collected; one set of each was preserved in the field with reagent-grade hydrochloric acid and the other set was not acidified. For water from wells with strontium-90 concentrations at or above the reporting level, 94% or more of the strontium-90 is in true solution or in colloidal particles smaller than 0.1 micrometer. These results suggest that within-laboratory reproducibility for strontium-90 in groundwater at the INEL is not significantly affected by changes in the filtration and preservation methods used for sample collection. (USGS)
Mutually orthogonal Latin squares from the inner products of vectors in mutually unbiased bases
NASA Astrophysics Data System (ADS)
Hall, Joanne L.; Rao, Asha
2010-04-01
Mutually unbiased bases (MUBs) are important in quantum information theory. While constructions of complete sets of d + 1 MUBs in {\bb C}^d are known when d is a prime power, it is unknown if such complete sets exist in non-prime power dimensions. It has been conjectured that complete sets of MUBs only exist in {\bb C}^d if a maximal set of mutually orthogonal Latin squares (MOLS) of side length d also exists. There are several constructions (Roy and Scott 2007 J. Math. Phys. 48 072110; Paterek, Dakić and Brukner 2009 Phys. Rev. A 79 012109) of complete sets of MUBs from specific types of MOLS, which use Galois fields to construct the vectors of the MUBs. In this paper, two known constructions of MUBs (Alltop 1980 IEEE Trans. Inf. Theory 26 350-354; Wootters and Fields 1989 Ann. Phys. 191 363-381), both of which use polynomials over a Galois field, are used to construct complete sets of MOLS in the odd prime case. The MOLS come from the inner products of pairs of vectors in the MUBs.
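For context, the classical finite-field construction gives a complete set of p - 1 MOLS of side p when p is prime; the sketch below builds and checks such a set using arithmetic in Z_p, a simplification of the general Galois-field setting used in the paper.

```python
# Classical construction of p-1 mutually orthogonal Latin squares of side p
# for a prime p, using arithmetic in Z_p (the paper works over general
# Galois fields; restricting to prime p is a simplification).
from itertools import product

def mols(p):
    return [[[(k * i + j) % p for j in range(p)] for i in range(p)]
            for k in range(1, p)]

def orthogonal(L, M):
    pairs = {(L[i][j], M[i][j]) for i, j in product(range(len(L)), repeat=2)}
    return len(pairs) == len(L) ** 2          # every ordered pair occurs once

squares = mols(7)
print("number of squares:", len(squares))
print("all pairs orthogonal:",
      all(orthogonal(a, b) for a, b in product(squares, repeat=2) if a is not b))
```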
Resource-constrained scheduling with hard due windows and rejection penalties
NASA Astrophysics Data System (ADS)
Garcia, Christopher
2016-09-01
This work studies a scheduling problem where each job must be either accepted and scheduled to complete within its specified due window, or rejected altogether. Each job has a certain processing time and contributes a certain profit if accepted or penalty cost if rejected. There is a set of renewable resources, and no resource limit can be exceeded at any time. Each job requires a certain amount of each resource when processed, and the objective is to maximize total profit. A mixed-integer programming formulation and three approximation algorithms are presented: a priority rule heuristic, an algorithm based on the metaheuristic for randomized priority search and an evolutionary algorithm. Computational experiments comparing these four solution methods were performed on a set of generated benchmark problems covering a wide range of problem characteristics. The evolutionary algorithm outperformed the other methods in most cases, often significantly, and never significantly underperformed any method.
NASA Astrophysics Data System (ADS)
Woolfrey, John R.; Avery, Mitchell A.; Doweyko, Arthur M.
1998-03-01
Two three-dimensional quantitative structure-activity relationship (3D-QSAR) methods, comparative molecular field analysis (CoMFA) and hypothetical active site lattice (HASL), were compared with respect to the analysis of a training set of 154 artemisinin analogues. Five models were created, including a complete HASL and two trimmed versions, as well as two CoMFA models (leave-one-out standard CoMFA and the guided-region selection protocol). Similar r2 and q2 values were obtained by each method, although some striking differences existed between CoMFA contour maps and the HASL output. Each of the four predictive models exhibited a similar ability to predict the activity of a test set of 23 artemisinin analogues, although some differences were noted as to which compounds were described well by either model.
Robust non-rigid registration algorithm based on local affine registration
NASA Astrophysics Data System (ADS)
Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng
2018-04-01
To address the low precision and slow convergence of traditional point-set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative method to complete the point-set non-rigid registration from coarse to fine. In each iteration, the data and model point sets are divided into sub point sets and the shape control points of each sub point set are updated. A control-point-guided affine ICP algorithm is then used to solve for the local affine transformation between the corresponding sub point sets. Next, the local affine transformation obtained in the previous step is used to update the sub data point sets and their shape control point sets. When the algorithm reaches the maximum iteration depth K, the loop ends and the updated sub data point sets are output. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point-set non-rigid registration algorithms.
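The affine ICP building block that the hierarchy relies on alternates between nearest-neighbour correspondence and a least-squares affine update; a minimal 2-D sketch is given below, with toy point sets and a fixed iteration count as assumptions (the paper's control-point guidance and hierarchical splitting are not reproduced).

```python
# Sketch of an affine-ICP step: alternate nearest-neighbour matching with a
# least-squares affine update.  Toy point sets and the iteration count are
# illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def affine_icp(data, model, n_iter=20):
    A = np.eye(2)                      # affine matrix
    t = np.zeros(2)                    # translation
    tree = cKDTree(model)
    for _ in range(n_iter):
        moved = data @ A.T + t
        _, idx = tree.query(moved)     # nearest model point for each data point
        target = model[idx]
        # solve [A | t] by least squares on homogeneous coordinates
        X = np.hstack([data, np.ones((len(data), 1))])
        sol, *_ = np.linalg.lstsq(X, target, rcond=None)
        A, t = sol[:2].T, sol[2]
    return A, t

rng = np.random.default_rng(3)
data = rng.uniform(size=(300, 2))
true_A = np.array([[1.05, 0.15], [-0.10, 0.95]])
true_t = np.array([0.20, -0.10])
model = data @ true_A.T + true_t + rng.normal(0, 0.005, size=(300, 2))

A, t = affine_icp(data, model)
print("recovered affine matrix:\n", np.round(A, 2))
```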
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-12
... consisting of twelve consecutive complete month data sets of the documents and related indexing information.... The MSRB proposes to charge $10,000 for any twelve consecutive complete month data set for the... data set for the Continuing Disclosure Historical Data Product. In general, no smaller data sets...
Probabilistic topic modeling for the analysis and classification of genomic sequences
2015-01-01
Background Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies focus on the so-called barcode genes, representing a well defined region of the whole genome. Recently, alignment-free techniques have been gaining importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mer representation and text mining techniques. Methods The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find in a document corpus the topics (recurrent themes) characterizing classes of documents. This technique, applied to DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method achieves results very similar to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra-short sequences and it exhibits a smooth decrease of performance, at every taxonomic level, when the sequence length is decreased. PMID:25916734
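The sketch below illustrates the k-mer plus topic-model idea on a few toy sequences, treating character 4-grams as "words" and using scikit-learn's LDA implementation; the sequences, k = 4, and the choice of library are assumptions, not the authors' pipeline.

```python
# Sketch of the k-mer + topic-model idea: represent each DNA sequence by its
# k-mer counts and fit LDA.  Toy sequences, k=4 and scikit-learn are
# assumptions of this illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

sequences = [
    "ACGTACGTGGCATGCATGGACGT",
    "ACGTACGTGGCTTGCATGGACGA",
    "TTTTAAACCCGGGTTTAAACCCG",
    "TTTAAAACCCGGGTTTAAACCGG",
]

# character 4-grams of each sequence play the role of "words"
vectorizer = CountVectorizer(analyzer="char", ngram_range=(4, 4), lowercase=False)
X = vectorizer.fit_transform(sequences)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)          # per-sequence topic mixtures
print(theta.round(2))                 # sequences 0-1 and 2-3 should share topics
```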
ERIC Educational Resources Information Center
Edwards, Susan L.; Rapee, Ronald M.; Kennedy, Susan
2010-01-01
Background: Little is known about risk factors for anxiety in young children. The current study investigated the value of a set of theoretically derived risk factors to predict symptoms of anxiety in a sample of preschool-aged children. Methods: Mothers (n = 632) and fathers (n = 249) completed questionnaires twice, 12 months apart. Measures were…
ERIC Educational Resources Information Center
Trepka, Mary Jo; Newman, Frederick L.; Huffman, Fatma G.; Dixon, Zisca
2010-01-01
Objective: To assess acceptability of food safety education delivered by interactive multimedia (IMM) in a Supplemental Nutrition Program for Women, Infants and Children Program (WIC) clinic. Methods: Female clients or caregivers (n = 176) completed the food-handling survey; then an IMM food safety education program on a computer kiosk.…
ERIC Educational Resources Information Center
Blank, Rolf K.; Smithson, John
2010-01-01
Beginning in summer 2009, the complete set of NAEP student assessment items for grades 4 and 8 Science and Reading 2009 assessments were analyzed for comparison to the National Assessment of Educational Progress (NAEP) Item Specifications which are based on the NAEP Assessment Frameworks for these subjects (National Assessment Governing Board,…
ERIC Educational Resources Information Center
Van Hoye, A.; Heuzé, J.-P.; Larsen, T.; Sarrazin, P.
2016-01-01
Despite the call to improve health promotion (HP) in sport clubs in the existing literature, little is known about sport clubs' organizational capacity. Grounded within the setting-based framework, this study compares HP activities and guidance among 10 football clubs. At least three grassroots coaches from each club (n = 68) completed the Health…
Reeleder, David; Martin, Douglas K; Keresztes, Christian; Singer, Peter A
2005-01-01
Background Priority setting, also known as rationing or resource allocation, occurs at all levels of every health care system. Daniels and Sabin have proposed a framework for priority setting in health care institutions called 'accountability for reasonableness', which links priority setting to theories of democratic deliberation. Fairness is a key goal of priority setting. According to 'accountability for reasonableness', health care institutions engaged in priority setting have a claim to fairness if they satisfy four conditions of relevance, publicity, appeals/revision, and enforcement. This is the first study that has surveyed the views of hospital decision makers throughout an entire health system about the fairness of priority setting in their institutions. The purpose of this study is to elicit hospital decision-makers' self-report of the fairness of priority setting in their hospitals using an explicit conceptual framework, 'accountability for reasonableness'. Methods 160 Ontario hospital Chief Executive Officers, or their designates, were asked to complete a survey questionnaire concerning priority setting in their publicly funded institutions. Eighty-six Ontario hospitals completed this survey, for a response rate of 54%. Six close-ended rating scale questions (e.g. Overall, how fair is priority setting at your hospital?), and 3 open-ended questions (e.g. What do you see as the goal(s) of priority setting in your hospital?) were used. Results Overall, 60.7% of respondents indicated their hospitals' priority setting was fair. With respect to the 'accountability for reasonableness' conditions, respondents indicated their hospitals performed best for the relevance (75.0%) condition, followed by appeals/revision (56.6%), publicity (56.0%), and enforcement (39.5%). Conclusions For the first time hospital Chief Executive Officers within an entire health system were surveyed about the fairness of priority setting practices in their institutions using the conceptual framework 'accountability for reasonableness'. Although many hospital CEOs felt that their priority setting was fair, ample room for improvement was noted, especially for the enforcement condition. PMID:15663792
Solvency supervision based on a total balance sheet approach
NASA Astrophysics Data System (ADS)
Pitselis, Georgios
2009-11-01
In this paper we investigate the adequacy of the own funds a company requires in order to remain healthy and avoid insolvency. Two methods are applied here: the quantile regression method and the method of mixed effects models. Quantile regression is capable of providing a more complete statistical analysis of the stochastic relationship among random variables than least squares estimation. The estimated mixed effects line can be considered an internal industry equation (norm), which explains a systematic relation between a dependent variable (such as own funds) and independent variables (e.g. financial characteristics, such as assets, provisions, etc.). The above two methods are implemented with two data sets.
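As a small illustration of the quantile-regression step, the sketch below regresses own funds on two balance-sheet characteristics at a high conditional quantile; the synthetic data, the 0.95 quantile, and the use of statsmodels are assumptions of this illustration.

```python
# Sketch of a quantile regression of own funds on balance-sheet
# characteristics.  Synthetic data, the 0.95 quantile and statsmodels are
# illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 200
df = pd.DataFrame({
    "assets": rng.gamma(5.0, 10.0, n),
    "provisions": rng.gamma(3.0, 5.0, n),
})
df["own_funds"] = 0.08 * df["assets"] + 0.15 * df["provisions"] \
                  + rng.normal(0, 2.0, n)

model = smf.quantreg("own_funds ~ assets + provisions", df)
res = model.fit(q=0.95)               # 95th conditional quantile (a solvency "norm")
print(res.params)
```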
Efficient solution of the simplified PN equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We present new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
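Two of the eigensolvers compared above are sketched generically below, power iteration followed by a Rayleigh-quotient refinement, on a small symmetric positive-definite stand-in matrix; this is only an illustration of the iterations themselves, not the SPN operator or the preconditioned Davidson solver.

```python
# Generic illustration of power iteration and Rayleigh quotient iteration on
# a small symmetric positive-definite stand-in matrix.
import numpy as np

rng = np.random.default_rng(11)
M = rng.normal(size=(50, 50))
A = M @ M.T / 50.0                       # symmetric positive-definite test matrix

def power_iteration(A, iters=50):
    """Plain power iteration: converges to the dominant eigenvector."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

def rayleigh_quotient_refine(A, x, iters=4):
    """Rayleigh quotient iteration: rapidly refines an approximate eigenpair."""
    n = A.shape[0]
    for _ in range(iters):
        mu = x @ A @ x
        # shifted solve; lstsq tolerates the nearly singular shifted matrix
        x = np.linalg.lstsq(A - mu * np.eye(n), x, rcond=None)[0]
        x /= np.linalg.norm(x)
    return x @ A @ x

x = power_iteration(A)                   # power sweeps give a good starting vector
lam = rayleigh_quotient_refine(A, x)
print("estimated dominant eigenvalue:", round(lam, 6))
print("numpy reference              :", round(np.linalg.eigvalsh(A).max(), 6))
```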
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krungkrai, Sudaratana R.; Department of Molecular Protozoology, Research Institute for Microbial Diseases, Osaka University, 3-1 Yamadaoka, Suita, Osaka 565-0871; Tokuoka, Keiji
Orotidine 5′-monophosphate (OMP) decarboxylase (OMPDC; EC 4.1.1.23) catalyzes the final step in the de novo synthesis of uridine 5′-monophosphate (UMP), and defects in the enzyme are lethal in the malaria parasite Plasmodium falciparum. Active recombinant P. falciparum OMPDC (PfOMPDC) was crystallized by the seeding method in a hanging drop using PEG 3000 as a precipitant. A complete set of diffraction data from a native crystal was collected to 2.7 Å resolution at 100 K using synchrotron radiation at the Swiss Light Source. The crystal exhibits trigonal symmetry (space group R3), with hexagonal unit-cell parameters a = b = 201.81, c = 44.03 Å. With a dimer in the asymmetric unit, the solvent content is 46% (V_M = 2.3 Å³ Da⁻¹).
Mutually unbiased bases and semi-definite programming
NASA Astrophysics Data System (ADS)
Brierley, Stephen; Weigert, Stefan
2010-11-01
A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Gröbner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.
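As a small numerical companion to the discussion above, the sketch below verifies the mutually-unbiased condition |<e_i|f_j>|² = 1/d for the standard and Fourier bases in dimension six; it is a simple check of the definition, not the semidefinite-programming search itself.

```python
# Quick numerical check of the mutually-unbiased condition: for two
# orthonormal bases {e_i}, {f_j} in dimension d, all |<e_i|f_j>|^2 = 1/d.
# The standard and Fourier bases of C^6 are used as a simple example.
import numpy as np

d = 6
standard = np.eye(d)
fourier = np.array([[np.exp(2j * np.pi * j * k / d) for k in range(d)]
                    for j in range(d)]) / np.sqrt(d)

overlaps = np.abs(standard.conj() @ fourier.T) ** 2
print("all overlaps equal 1/d:", np.allclose(overlaps, 1.0 / d))
```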
Full statistical mode reconstruction of a light field via a photon-number-resolved measurement
NASA Astrophysics Data System (ADS)
Burenkov, I. A.; Sharma, A. K.; Gerrits, T.; Harder, G.; Bartley, T. J.; Silberhorn, C.; Goldschmidt, E. A.; Polyakov, S. V.
2017-05-01
We present a method to reconstruct the complete statistical mode structure and optical losses of multimode conjugated optical fields using an experimentally measured joint photon-number probability distribution. We demonstrate that this method evaluates classical and nonclassical properties using a single measurement technique and is well suited for quantum mesoscopic state characterization. We obtain a nearly perfect reconstruction of a field comprised of up to ten modes based on a minimal set of assumptions. To show the utility of this method, we use it to reconstruct the mode structure of an unknown bright parametric down-conversion source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.
2014-12-14
A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.
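The paper's unified extrapolation scheme reassigns the hierarchical numbers before extrapolating; as a point of reference only, the standard two-point inverse-cube extrapolation it builds on can be written in a few lines (the energies below are placeholders, not values from the test set).

    def cbs_two_point(e_lo, x_lo, e_hi, x_hi):
        """Two-point extrapolation of correlation energies assuming
        E(X) = E_CBS + A / X**3 with hierarchical numbers x_lo < x_hi."""
        return (x_hi**3 * e_hi - x_lo**3 * e_lo) / (x_hi**3 - x_lo**3)

    # e.g. a (d, t) pair of raw correlation energies in hartree (made up):
    print(cbs_two_point(-0.210, 2, -0.250, 3))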
Draxten, Michelle; Flattum, Colleen; Fulkerson, Jayne
2016-10-01
The purpose of this study was to describe the components and use of motivational interviewing (MI) within a behavior change intervention to promote healthful eating and family meals and prevent childhood obesity. The Healthy Home Offerings via the Mealtime Environment (HOME) Plus intervention was part of a two-arm randomized-controlled trial and included 81 families (children 8-12 years old and their parents) in the intervention condition. The intervention included 10 monthly, 2-hour group sessions and 5 bimonthly motivational/goal-setting phone calls. Data were collected for intervention families only at each of the goal-setting calls and a behavior change assessment was administered at the 10th/final group session. Descriptive statistics were used to analyze the MI call data and behavior assessment. Overall group attendance was high (68% attending ≥7 sessions). Motivational/goal-setting phone calls were well accepted by parents, with an 87% average completion rate. More than 85% of the time, families reported meeting their chosen goal between calls. Families completing the behavioral assessment reported the most change in having family meals more often and improving home food healthfulness. Researchers should use a combination of delivery methods using MI when implementing behavior change programs for families to promote goal setting and healthful eating within pediatric obesity interventions.
Bauer, Sarah M.; McGuire, Alan B.; Kukla, Marina; McGuire, Shannon; Bair, Matthew J.; Matthias, Marianne S.
2017-01-01
Objective Goal setting is a common element of self-management support programs; however, little is known about the nature of patients' goals or how goals change during pain self-management. The purpose of the current study is to explore how patients' goals and views of goal setting change over the course of a peer-led pain self-management program. Methods Veterans (n = 16) completing a 4-month peer-led pain self-management program completed semi-structured interviews at baseline and follow-up regarding their goals for their pain. Interviews were analyzed using immersion/crystallization. Results Analyses revealed six themes: motivation to do something for their pain, more goal-oriented, actually setting goals, clarity of goal importance, more specific/measurable goal criteria, and more specific/measurable strategies. Conclusion The current analyses illustrate how participants' goals can evolve over the course of a peer-led pain self-management program. Specifically, increased motivation, more openness to using goals, greater clarity of goal importance, more specific and measurable goals and strategies, and the influence of the peer coach relationship were described by participants. Practice implications Pain self-management interventions should emphasize goal setting, and development of specific, measurable goals and plans. Trainings for providers should address the potential for the provider-patient relationship, particularly peer providers, to facilitate motivation and goal setting. PMID:27516437
Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.
Corcoran, Timothy C
2018-03-01
In the chemometric context in which spectral loadings of the analytes are already known, spectral filter functions may be constructed which allow the scores of mixtures of analytes to be determined in on-the-fly fashion directly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed using mathematics borrowed from genetic algorithm work, as a means of optimizing said functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, proving the linearity of the method.
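A minimal sketch of the compressive-detection idea with Walsh functions generated from a Hadamard matrix; the filter choice, toy spectrum and noise level are invented, and the genetic-algorithm optimization of fourfold filter combinations described above is not reproduced.

    import numpy as np
    from scipy.linalg import hadamard

    n_channels = 64
    H = hadamard(n_channels)          # rows are +/-1 Walsh functions (natural, not sequency, order)

    rng = np.random.default_rng(4)
    spectrum = np.exp(-0.5 * ((np.arange(n_channels) - 30) / 4.0) ** 2)  # toy Raman band
    spectrum += 0.02 * rng.standard_normal(n_channels)                   # measurement noise

    filters = H[[1, 5, 9, 13]]        # an arbitrary small set of filter functions
    scores = filters @ spectrum       # all a compressive detector would record
    print(scores)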
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
Barasz, Kate; John, Leslie K; Keenan, Elizabeth A; Norton, Michael I
2017-10-01
Pseudo-set framing, arbitrarily grouping items or tasks together as part of an apparent "set", motivates people to reach perceived completion points. Pseudo-set framing changes gambling choices (Study 1), effort (Studies 2 and 3), giving behavior (Field Data and Study 4), and purchase decisions (Study 5). These effects persist in the absence of any reward, when a cost must be incurred, and after participants are explicitly informed of the arbitrariness of the set. Drawing on Gestalt psychology, we develop a conceptual account that predicts what will, and will not, act as a pseudo-set, and defines the psychological process through which these pseudo-sets affect behavior: over and above typical reference points, pseudo-set framing alters perceptions of (in)completeness, making intermediate progress seem less complete. In turn, these feelings of incompleteness motivate people to persist until the pseudo-set has been fulfilled. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Priority setting: what constitutes success? A conceptual framework for successful priority setting
Sibbald, Shannon L; Singer, Peter A; Upshur, Ross; Martin, Douglas K
2009-01-01
Background The sustainability of healthcare systems worldwide is threatened by a growing demand for services and expensive innovative technologies. Decision makers struggle in this environment to set priorities appropriately, particularly because they lack consensus about which values should guide their decisions. One way to approach this problem is to determine what all relevant stakeholders understand successful priority setting to mean. The goal of this research was to develop a conceptual framework for successful priority setting. Methods Three separate empirical studies were completed using qualitative data collection methods (one-on-one interviews with healthcare decision makers from across Canada; focus groups with representation of patients, caregivers and policy makers; and Delphi study including scholars and decision makers from five countries). Results This paper synthesizes the findings from three studies into a framework of ten separate but interconnected elements germane to successful priority setting: stakeholder understanding, shifted priorities/reallocation of resources, decision making quality, stakeholder acceptance and satisfaction, positive externalities, stakeholder engagement, use of explicit process, information management, consideration of values and context, and revision or appeals mechanism. Conclusion The ten elements specify both quantitative and qualitative dimensions of priority setting and relate to both process and outcome components. To our knowledge, this is the first framework that describes successful priority setting. The ten elements identified in this research provide guidance for decision makers and a common language to discuss priority setting success and work toward improving priority setting efforts. PMID:19265518
Scaled MP3 non-covalent interaction energies agree closely with accurate CCSD(T) benchmark data.
Pitonák, Michal; Neogrády, Pavel; Cerný, Jirí; Grimme, Stefan; Hobza, Pavel
2009-01-12
Scaled MP3 interaction energies calculated as a sum of MP2/CBS (complete basis set limit) interaction energies and scaled third-order energy contributions obtained in small or medium size basis sets agree very closely with the estimated CCSD(T)/CBS interaction energies for the 22 H-bonded, dispersion-controlled and mixed non-covalent complexes from the S22 data set. Performance of this so-called MP2.5 (third-order scaling factor of 0.5) method has also been tested for 33 nucleic acid base pairs and two stacked conformers of porphine dimer. In all the test cases, performance of the MP2.5 method was shown to be superior to the scaled spin-component MP2 based methods, e.g. SCS-MP2, SCSN-MP2 and SCS(MI)-MP2. In particular, a very balanced treatment of hydrogen-bonded compared to stacked complexes is achieved with MP2.5. The main advantage of the approach is that it employs only a single empirical parameter and is thus biased by two rigorously defined, asymptotically correct ab-initio methods, MP2 and MP3. The method is proposed as an accurate but computationally feasible alternative to CCSD(T) for the computation of the properties of various kinds of non-covalently bound systems.
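The MP2.5 recipe in the abstract reduces to one line of arithmetic; the interaction energies below are placeholders, not S22 values.

    def mp2_5(e_mp2_cbs, e_mp2_small, e_mp3_small, scale=0.5):
        # MP2/CBS interaction energy plus half of the third-order correction
        # evaluated in a small or medium basis set.
        return e_mp2_cbs + scale * (e_mp3_small - e_mp2_small)

    print(mp2_5(e_mp2_cbs=-12.4, e_mp2_small=-11.9, e_mp3_small=-10.7))  # kcal/mol, made up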
Falcaro, Milena; Carpenter, James R
2017-06-01
Population-based net survival by tumour stage at diagnosis is a key measure in cancer surveillance. Unfortunately, data on tumour stage are often missing for a non-negligible proportion of patients and the mechanism giving rise to the missingness is usually anything but completely at random. In this setting, restricting analysis to the subset of complete records gives typically biased results. Multiple imputation is a promising practical approach to the issues raised by the missing data, but its use in conjunction with the Pohar-Perme method for estimating net survival has not been formally evaluated. We performed a resampling study using colorectal cancer population-based registry data to evaluate the ability of multiple imputation, used along with the Pohar-Perme method, to deliver unbiased estimates of stage-specific net survival and recover missing stage information. We created 1000 independent data sets, each containing 5000 patients. Stage data were then made missing at random under two scenarios (30% and 50% missingness). Complete records analysis showed substantial bias and poor confidence interval coverage. Across both scenarios our multiple imputation strategy virtually eliminated the bias and greatly improved confidence interval coverage. In the presence of missing stage data complete records analysis often gives severely biased results. We showed that combining multiple imputation with the Pohar-Perme estimator provides a valid practical approach for the estimation of stage-specific colorectal cancer net survival. As usual, when the percentage of missing data is high the results should be interpreted cautiously and sensitivity analyses are recommended. Copyright © 2017 Elsevier Ltd. All rights reserved.
Let them fall where they may: congruence analysis in massive phylogenetically messy data sets.
Leigh, Jessica W; Schliep, Klaus; Lopez, Philippe; Bapteste, Eric
2011-10-01
Interest in congruence in phylogenetic data has largely focused on issues affecting multicellular organisms, and animals in particular, in which the level of incongruence is expected to be relatively low. In addition, assessment methods developed in the past have been designed for reasonably small numbers of loci and scale poorly for larger data sets. However, there are currently over a thousand complete genome sequences available and of interest to evolutionary biologists, and these sequences are predominantly from microbial organisms, whose molecular evolution is much less frequently tree-like than that of multicellular life forms. As such, the level of incongruence in these data is expected to be high. We present a congruence method that accommodates both very large numbers of genes and high degrees of incongruence. Our method uses clustering algorithms to identify subsets of genes based on similarity of phylogenetic signal. It involves only a single phylogenetic analysis per gene, and therefore, computation time scales nearly linearly with the number of genes in the data set. We show that our method performs very well with sets of sequence alignments simulated under a wide variety of conditions. In addition, we present an analysis of core genes of prokaryotes, often assumed to have been largely vertically inherited, in which we identify two highly incongruent classes of genes. This result is consistent with the complexity hypothesis.
Zamani, Ahmad Reza; Motamedi, Narges; Farajzadegan, Ziba
2015-01-01
Background: To have high-quality primary health care services, adequate doctor–patient communication is necessary. Because of time restrictions and limited budgets in the health system, an effective, feasible, and continuous training approach is important. The aim of this study is to assess the appropriateness of a communication skills training program delivered simultaneously with routine programs of the health care system. Materials and Methods: It was a randomized field trial in two health network settings during 2013. Twenty-eight family physicians, selected through simple random sampling, and 140 patients, selected through convenience sampling, participated as the intervention and control groups. The physicians in the intervention group (n = 14) attended six educational sessions, held simultaneously with routine organization meetings, using case discussion and peer education methods. In both groups, physicians completed communication skills knowledge and attitude questionnaires, and patients completed a patient satisfaction with the medical interview questionnaire at baseline, immediately after the intervention, and four months postintervention. Physicians and health network administrators (stakeholders) completed a set of program evaluation forms. Descriptive statistics, the Chi-square test, the t-test, and repeated measures analysis of variance were used to analyze the data. Results: Use of the routine program as a training strategy was rated highly by stakeholders on "feasibility" (80.5%), "acceptability" (93.5%), "educational content and method appropriateness" (80.75%), and "ability to be integrated into the health system programs" (approximately 60%). Significant improvements were found in physicians' knowledge (P < 0.001) and attitude (P < 0.001), and in patients' satisfaction (P = 0.002), in the intervention group. Conclusions: The communication skills training program delivered simultaneously with routine organization meetings was successfully implemented and well received by stakeholders, without requiring extra time and manpower. Therefore it can be a valuable opportunity for communication skills training. PMID:27462613
Muthu, S; Ramachandran, G
2014-01-01
The Fourier transform infrared (FT-IR) and FT-Raman spectra of (1R)-N-(Prop-2-yn-1-yl)-2,3-dihydro-1H-inden-1-amine (1RNPDA) were recorded in the regions 4000-400 cm(-1) and 4000-100 cm(-1), respectively. A complete assignment and analysis of the fundamental vibrational modes of the molecule were carried out. The observed fundamental modes have been compared with the harmonic vibrational frequencies computed using the HF method and the DFT (B3LYP) method, both employing the 6-31G(d,p) basis set. The vibrational studies were interpreted in terms of the Potential Energy Distribution (PED). The complete vibrational frequency assignments were made by Normal Co-ordinate Analysis (NCA) following the scaled quantum mechanical force field methodology (SQMFF). The first order hyperpolarizability (β0) of this molecular system and related properties (α, μ, and Δα) were calculated using the B3LYP/6-31G(d,p) method based on the finite-field approach. The thermodynamic functions of the title compound were also computed with the above method and basis set. A detailed interpretation of the infrared and Raman spectra of 1RNPDA is reported. The (1)H and (13)C nuclear magnetic resonance (NMR) chemical shifts of the molecule were calculated using the GIAO method and agree with the experimental values. Stability of the molecule arising from hyper-conjugative interactions and charge delocalization has been analyzed using Natural Bond Orbital (NBO) analysis. The UV-vis spectrum of the compound was recorded, and electronic properties such as excitation energies, oscillator strengths and wavelengths were computed by TD-DFT/B3LYP using the 6-31G(d,p) basis set. The HOMO-LUMO energy gap reflects the chemical activity of the molecule. The observed and calculated wave numbers are found to be in good agreement. The experimental spectra also coincide satisfactorily with the theoretically constructed spectra. Copyright © 2013 Elsevier B.V. All rights reserved.
Sriram, Ganesh; Shanks, Jacqueline V
2004-04-01
The biosynthetically directed fractional (13)C labeling method for metabolic flux evaluation relies on performing a 2-D [(13)C, (1)H] NMR experiment on extracts from organisms cultured on a uniformly labeled carbon substrate. This article focuses on improvements in the interpretation of data obtained from such an experiment by employing the concept of bondomers. Bondomers take into account the natural abundance of (13)C; therefore many bondomers in a real network are zero, and can be precluded a priori--thus resulting in fewer balances. Using this method, we obtained a set of linear equations which can be solved to obtain analytical formulas for NMR-measurable quantities in terms of fluxes in glycolysis and the pentose phosphate pathways. For a specific case of this network with four degrees of freedom, a priori identifiability of the fluxes was shown possible for any set of fluxes. For a more general case with five degrees of freedom, the fluxes were shown identifiable for a representative set of fluxes. Minimal sets of measurements which best identify the fluxes are listed. Furthermore, we have delineated Boolean function mapping, a new method to iteratively simulate bondomer abundances or efficiently convert carbon skeleton rearrangement information to mapping matrices. The efficiency of this method is expected to be valuable while analyzing metabolic networks which are not completely known (such as in plant metabolism) or while implementing iterative bondomer balancing methods.
Decomposition of Fuzzy Soft Sets with Finite Value Spaces
Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad
2014-01-01
The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter. PMID:24558342
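A minimal sketch of the t-level soft set on which the decomposition rests: each parameter is mapped to the crisp set of objects whose membership degree is at least t. The cell-phone data here are invented for illustration.

    def t_level_soft_set(fuzzy_soft_set, t):
        # fuzzy_soft_set: {parameter: {object: membership degree}}
        return {param: {x for x, mu in memberships.items() if mu >= t}
                for param, memberships in fuzzy_soft_set.items()}

    phones = {
        "camera":  {"p1": 0.9, "p2": 0.4, "p3": 0.7},
        "battery": {"p1": 0.3, "p2": 0.8, "p3": 0.6},
    }
    print(t_level_soft_set(phones, 0.6))
    # {'camera': {'p1', 'p3'}, 'battery': {'p2', 'p3'}}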
Generalized index for spatial data sets as a measure of complete spatial randomness
NASA Astrophysics Data System (ADS)
Hackett-Jones, Emily J.; Davies, Kale J.; Binder, Benjamin J.; Landman, Kerry A.
2012-06-01
Spatial data sets, generated from a wide range of physical systems, can be analyzed by counting the number of objects in a set of bins. Previous work has been limited to equal-sized bins, which are inappropriate for some domains (e.g., circular). We consider a nonequal size bin configuration whereby overlapping or nonoverlapping bins cover the domain. A generalized index, defined in terms of a variance between bin counts, is developed to indicate whether or not a spatial data set, generated from exclusion or nonexclusion processes, is at the complete spatial randomness (CSR) state. Limiting values of the index are determined. Using examples, we investigate trends in the generalized index as a function of density and compare the results with those using equal size bins. The smallest bin size must be much larger than the mean size of the objects. We can determine whether a spatial data set is at the CSR state or not by comparing the values of a generalized index for different bin configurations: the values will be approximately the same if the data is at the CSR state, while the values will differ if the data set is not at the CSR state. In general, the generalized index is lower than the limiting value of the index, since objects do not have access to the entire region due to blocking by other objects. These methods are applied to two applications: (i) spatial data sets generated from a cellular automata model of cell aggregation in the enteric nervous system and (ii) a known plant data distribution.
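A simple equal-bin variance diagnostic conveys the flavour of the index; the paper's generalized normalization for unequal and overlapping bins is not reproduced here, and the point data are synthetic.

    import numpy as np

    def dispersion_index(points, n_bins_per_side, domain=(0.0, 1.0)):
        counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                      bins=n_bins_per_side,
                                      range=[domain, domain])
        counts = counts.ravel()
        return counts.var(ddof=1) / counts.mean()   # roughly 1 at the CSR state

    rng = np.random.default_rng(5)
    csr_points = rng.uniform(0.0, 1.0, size=(500, 2))
    print(dispersion_index(csr_points, 8))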
Transverse Laplacians for Substitution Tilings
NASA Astrophysics Data System (ADS)
Julien, Antoine; Savinien, Jean
2011-01-01
Pearson and Bellissard recently built a spectral triple - the data of Riemannian noncommutative geometry - for ultrametric Cantor sets. They derived a family of Laplace-Beltrami like operators on those sets. Motivated by the applications to specific examples, we revisit their work for the transversals of tiling spaces, which are particular self-similar Cantor sets. We use Bratteli diagrams to encode the self-similarity, and Cuntz-Krieger algebras to implement it. We show that the abscissa of convergence of the ζ-function of the spectral triple gives indications on the exponent of complexity of the tiling. We determine completely the spectrum of the Laplace-Beltrami operators, give an explicit method of calculation for their eigenvalues, compute their Weyl asymptotics, and a Seeley equivalent for their heat kernels.
2011-01-01
Background Educators in allied health and medical education programs utilize instructional multimedia to facilitate psychomotor skill acquisition in students. This study examines the effects of instructional multimedia on student and instructor attitudes and student study behavior. Methods Subjects consisted of 45 student physical therapists from two universities. Two skill sets were taught during the course of the study. Skill set one consisted of knee examination techniques and skill set two consisted of ankle/foot examination techniques. For each skill set, subjects were randomly assigned to either a control group or an experimental group. The control group was taught with live demonstration of the examination skills, while the experimental group was taught using multimedia. A cross-over design was utilized so that subjects in the control group for skill set one served as the experimental group for skill set two, and vice versa. During the last week of the study, students and instructors completed written questionnaires to assess attitude toward teaching methods, and students answered questions regarding study behavior. Results There were no differences between the two instructional groups in attitudes, but students in the experimental group for skill set two reported greater study time alone compared to other groups. Conclusions Multimedia provides an efficient method to teach psychomotor skills to students entering the health professions. Both students and instructors identified advantages and disadvantages for both instructional techniques. Responses relative to instructional multimedia emphasized efficiency, processing level, autonomy, and detail of instruction compared to live presentation. Students and instructors identified conflicting views of instructional detail and control of the content. PMID:21693058
Numerical assessment of low-frequency dosimetry from sampled magnetic fields
NASA Astrophysics Data System (ADS)
Freschi, Fabio; Giaccone, Luca; Cirimele, Vincenzo; Canova, Aldo
2018-01-01
Low-frequency dosimetry is commonly assessed by evaluating the electric field in the human body using the scalar potential finite difference method. This method is effective only when the sources of the magnetic field are completely known and the magnetic vector potential can be analytically computed. The aim of the paper is to present a rigorous method to characterize the source term when only the magnetic flux density is available at discrete points, e.g. in case of field measurements. The method is based on the solution of the discrete magnetic curl equation. The system is restricted to the independent set of magnetic fluxes and circulations of magnetic vector potential using the topological information of the computational mesh. The solenoidality of the magnetic flux density is preserved using a divergence-free interpolator based on vector radial basis functions. The analysis of a benchmark problem shows that the complexity of the proposed algorithm is linearly dependent on the number of elements with a controllable accuracy. The method proposed in this paper also proves to be useful and effective when applied to a real world scenario, where the magnetic flux density is measured in proximity of a power transformer. An 8 million voxel body model is then used for the numerical dosimetric analysis. The complete assessment is completed in less than 5 min, which is more than acceptable for these problems.
Jo, Ayami; Kanazawa, Manabu; Sato, Yusuke; Iwaki, Maiko; Akiba, Norihisa; Minakuchi, Shunsuke
2015-08-01
To compare the effect of conventional complete dentures (CDs) fabricated using two different impression methods on patient-reported outcomes in a randomized controlled trial (RCT). A cross-over RCT was performed with edentulous patients who required maxillomandibular CDs. Mandibular CDs were fabricated using two different methods. The conventional method used a custom tray border moulded with impression compound and a silicone; the simplified method used a stock tray and an alginate. Participants were randomly divided into two groups. The C-S group had the conventional method used first, followed by the simplified method; the S-C group followed the reverse order. Adjustment was performed four times. A wash-out period of 1 month was set. The primary outcome was general patient satisfaction, measured using visual analogue scales, and the secondary outcome was oral health-related quality of life, measured using the Japanese version of the Oral Health Impact Profile for edentulous patients (OHIP-EDENT-J) questionnaire scores. Twenty-four participants completed the trial. With regard to general patient satisfaction, the conventional method was significantly more acceptable than the simplified method. No significant differences were observed between the two methods in the OHIP-EDENT-J scores. This study showed that CDs fabricated with the conventional method were rated significantly higher for general patient satisfaction than those fabricated with the simplified method. CDs fabricated with the conventional method, which included a preliminary impression made using alginate in a stock tray and subsequently a final impression made using silicone in a border-moulded custom tray, resulted in higher general patient satisfaction. UMIN000009875. Copyright © 2015 Elsevier Ltd. All rights reserved.
A substitution method to improve completeness of events documentation in anesthesia records.
Lamer, Antoine; De Jonckheere, Julien; Marcilly, Romaric; Tavernier, Benoît; Vallet, Benoît; Jeanne, Mathieu; Logier, Régis
2015-12-01
AIMS are optimized to find and display data and curves about one specific intervention, but not for retrospective analysis of a large volume of interventions. Such systems present two main limitations: (1) the transactional database architecture and (2) the completeness of documentation. In order to solve the architectural problem, data warehouses were developed to provide an architecture suitable for analysis. However, completeness of documentation remains unsolved. In this paper, we describe a method for determining substitution rules in order to detect missing anesthesia events in an anesthesia record. Our method is based on the principle that a missing event can be detected using a substitute event, defined as the nearest documented event. As an example, we focused on the automatic detection of the start and the end of the anesthesia procedure when these events were not documented by the clinicians. We applied our method to a set of records in order to evaluate (1) the event detection accuracy and (2) the improvement in valid records. For the years 2010-2012, we obtained event detection with a precision of 0.00 (-2.22; 2.00) min for the start of anesthesia and 0.10 (0.00; 0.35) min for the end of anesthesia. On the other hand, we increased data completeness by 21.1% (from 80.3 to 97.2% of the total database) for the start and the end of anesthesia events. This method seems to be efficient for replacing missing "start and end of anesthesia" events. This method could also be used to replace other missing time events in this particular data warehouse as well as in other kinds of data warehouses.
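A sketch of the substitution idea: when a key event is missing from a record, take the time of the nearest documented surrogate event. The event names and the rule below are hypothetical, not the authors' actual rule set.

    from datetime import datetime

    def substitute_event(record, target, surrogates):
        """Return the documented time of `target`, otherwise the earliest time
        among the documented surrogate events (None if nothing is found)."""
        if target in record:
            return record[target]
        candidates = [record[name] for name in surrogates if name in record]
        return min(candidates) if candidates else None

    record = {
        "induction_drug_given": datetime(2012, 3, 1, 8, 4),
        "intubation":           datetime(2012, 3, 1, 8, 9),
    }
    print(substitute_event(record, "start_of_anesthesia",
                           ["induction_drug_given", "intubation"]))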
Application of Risk within Net Present Value Calculations for Government Projects
NASA Technical Reports Server (NTRS)
Grandl, Paul R.; Youngblood, Alisha D.; Componation, Paul; Gholston, Sampson
2007-01-01
In January 2004, President Bush announced a new vision for space exploration. This included retirement of the current Space Shuttle fleet by 2010 and the development of a new set of launch vehicles. The President's vision did not include significant increases in the NASA budget, so these development programs need to be cost-conscious. Current trade study procedures address factors such as performance, reliability, safety, manufacturing, maintainability, operations, and costs. It would be desirable, however, to have increased insight into the cost factors behind each of the proposed system architectures. This paper reports on a set of component trade studies completed on the upper stage engine for the new launch vehicles. Increased insight into architecture costs was developed by including a Net Present Value (NPV) method and applying a set of associated risks to the base parametric cost data. The use of the NPV method along with the risks was found to add fidelity to the trade study and provide additional information to support the selection of a more robust design architecture.
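A minimal net-present-value sketch with a risk premium folded into the discount rate; the cash flows, rates and premium are placeholders rather than figures from the upper-stage engine trade study.

    def npv(cash_flows, discount_rate, risk_premium=0.0):
        r = discount_rate + risk_premium
        return sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows))

    flows = [-100.0, 30.0, 40.0, 45.0, 50.0]                  # year-0 cost, then annual returns
    print(npv(flows, discount_rate=0.07))                     # baseline
    print(npv(flows, discount_rate=0.07, risk_premium=0.03))  # risk-adjusted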
Diagnosis: Reasoning from first principles and experiential knowledge
NASA Technical Reports Server (NTRS)
Williams, Linda J. F.; Lawler, Dennis G.
1987-01-01
Completeness, efficiency and autonomy are requirements for future diagnostic reasoning systems. Methods for automating diagnostic reasoning systems include diagnosis from first principles (i.e., reasoning from a thorough description of structure and behavior) and diagnosis from experiential knowledge (i.e., reasoning from a set of examples obtained from experts). However, implementation of either as a single reasoning method fails to meet these requirements. The approach of combining reasoning from first principles and reasoning from experiential knowledge does address the requirements discussed above and can possibly ease some of the difficulties associated with knowledge acquisition by allowing developers to systematically enumerate a portion of the knowledge necessary to build the diagnosis program. The ability to enumerate knowledge systematically facilitates defining the program's scope, completeness, and competence and assists in bounding, controlling, and guiding the knowledge acquisition process.
Renuga Devi, T S; Sharmi kumar, J; Ramkumaar, G R
2015-02-25
The FTIR and FT-Raman spectra of 2-(cyclohexylamino)ethanesulfonic acid were recorded in the regions 4000-400 cm(-1) and 4000-50 cm(-1), respectively. The structural and spectroscopic data of the molecule in the ground state were calculated using the Hartree-Fock and density functional (B3LYP) methods with the correlation-consistent polarized valence double zeta (cc-pVDZ) basis set and the 6-311++G(d,p) basis set. The most stable conformer was optimized and the structural and vibrational parameters were determined based on this. The complete assignments were performed based on the Potential Energy Distribution (PED) of the vibrational modes, calculated using the Vibrational Energy Distribution Analysis (VEDA) 4 program. With the observed FTIR and FT-Raman data, a complete vibrational assignment and analysis of the fundamental modes of the compound were carried out. Thermodynamic properties and atomic charges were calculated using both the Hartree-Fock and density functional methods with the cc-pVDZ basis set and compared. The calculated HOMO-LUMO energy gap revealed that charge transfer occurs within the molecule. The (1)H and (13)C NMR chemical shifts of the molecule were calculated using the Gauge Including Atomic Orbital (GIAO) method and compared with experimental results. Stability of the molecule arising from hyperconjugative interactions and charge delocalization has been analyzed using Natural Bond Orbital (NBO) analysis. The first order hyperpolarizability (β) and Molecular Electrostatic Potential (MEP) of the molecule were computed using DFT calculations. The electron density based local reactivity descriptors, such as Fukui functions, were calculated to explain the chemical reactivity sites in the molecule. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Margaria, Tiziana (Inventor); Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor); Steffen, Bernard (Inventor)
2010-01-01
Systems, methods and apparatus are provided through which, in some embodiments, automata learning algorithms and techniques are implemented to generate a more complete set of scenarios for requirements-based programming. More specifically, a CSP-based, syntax-oriented model construction, which requires the support of a theorem prover, is complemented by model extrapolation via automata learning. This may support the systematic completion of the requirements, which are by nature partial, providing focus on the most prominent scenarios. This may generalize requirement skeletons by extrapolation and may indicate, by way of automatically generated traces, where the requirement specification is too loose and additional information is required.
Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J
2017-05-01
Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
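One common eigenvalue-based estimator of the effective number of independent tests, applied to a correlation matrix of gene-level statistics; this is a generic stand-in rather than the estimators derived in the paper, and the correlation matrix below is synthetic.

    import numpy as np

    def effective_number_of_tests(corr):
        lam = np.clip(np.linalg.eigvalsh(corr), 0.0, None)
        return (np.sqrt(lam).sum() ** 2) / lam.sum()

    rng = np.random.default_rng(6)
    z = rng.standard_normal((200, 10))          # 10 gene-level statistics over 200 gene sets
    z[:, 1] = 0.8 * z[:, 0] + 0.6 * z[:, 1]     # induce overlap between two "genes"
    corr = np.corrcoef(z, rowvar=False)
    m_eff = effective_number_of_tests(corr)
    print(m_eff, "effective tests; Sidak alpha =", 1 - (1 - 0.05) ** (1 / m_eff))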
Understanding density functional theory (DFT) and completing it in practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagayoko, Diola
2014-12-15
We review some salient points in the derivation of density functional theory (DFT) and of the local density approximation (LDA) of it. We then articulate an understanding of DFT and LDA that seems to be ignored in the literature. We note the well-established failures of many DFT and LDA calculations to reproduce the measured energy gaps of finite systems and band gaps of semiconductors and insulators. We then illustrate significant differences between the results from self consistent calculations using single trial basis sets and those from computations following the Bagayoko, Zhao, and Williams (BZW) method, as enhanced by Ekuma and Franklin (BZW-EF). Unlike the former, the latter calculations verifiably attain the absolute minima of the occupied energies, as required by DFT. These minima are one of the reasons for the agreement between their results and corresponding, experimental ones for the band gap and a host of other properties. Further, we note predictions of DFT BZW-EF calculations that have been confirmed by experiment. Our subsequent description of the BZW-EF method ends with the application of the Rayleigh theorem in the selection, among the several calculations the method requires, of the one whose results have a full, physics content ascribed to DFT. This application of the Rayleigh theorem adds to or completes DFT, in practice, to preserve the physical content of unoccupied, low energy levels. Discussions, including implications of the method, and a short conclusion follow the description of the method. The successive augmentation of the basis set in the BZW-EF method, needed for the application of the Rayleigh theorem, is also necessary in the search for the absolute minima of the occupied energies, in practice.
Strudwick, Gillian; Clark, Carrie; McBride, Brittany; Sakal, Moshe; Kalia, Kamini
2017-09-01
Barcode medication administration systems have been implemented in a number of healthcare settings in an effort to decrease medication errors. To use the technology, nurses are required to login to an electronic health record, scan a medication and a form of patient identification to ensure that these correspond correctly with the ordered medications prior to medication administration. In acute care settings, patient wristbands have been traditionally used as a form of identification; however, past research has suggested that this method of identification may not be preferred in inpatient mental health settings. If barcode medication administration technology is to be effectively used in this context, healthcare organizations need to understand patient preferences with regards to identification methods. The purpose of this study was to elicit patient perceptions of barcode medication administration identification practices in inpatient mental health settings. Insights gathered can be used to determine patient-centered preferences of identifying patients using barcode medication administration technology. Using a qualitative descriptive approach, fifty-two (n=52) inpatient interviews were completed by a Peer Support Worker using a semi-structured interview guide over a period of two months. Interviews were conducted in a number of inpatient mental health areas including forensic, youth, geriatric, acute, and rehabilitation services. An interprofessional team, inclusive of a Peer Support Worker, completed a thematic analysis of the interview data. Six themes emerged as a result of the inductive data analysis. These included: management of information, privacy and security, stigma, relationships, safety and comfort, and negative associations with the technology. Patients also indicated that they would like a choice in the type of identification method used during barcode medication administration. As well, suggestions were made for how barcode medication administration practices could be modified to become more patient-centered. The results of this study have a number of implications for healthcare organizations. As patients indicated that they would like a choice in the type of identification method used during barcode medication administration, healthcare organizations will need to determine how they can facilitate this process. Furthermore, many of the concerns that patients had with barcode medication administration technology could be addressed through patient education. Copyright © 2017 Elsevier B.V. All rights reserved.
Weigold, Arne; Weigold, Ingrid K; Russell, Elizabeth J
2013-03-01
Self-report survey-based data collection is increasingly carried out using the Internet, as opposed to the traditional paper-and-pencil method. However, previous research on the equivalence of these methods has yielded inconsistent findings. This may be due to methodological and statistical issues present in much of the literature, such as nonequivalent samples in different conditions due to recruitment, participant self-selection to conditions, and data collection procedures, as well as incomplete or inappropriate statistical procedures for examining equivalence. We conducted 2 studies examining the equivalence of paper-and-pencil and Internet data collection that accounted for these issues. In both studies, we used measures of personality, social desirability, and computer self-efficacy, and, in Study 2, we used personal growth initiative to assess quantitative equivalence (i.e., mean equivalence), qualitative equivalence (i.e., internal consistency and intercorrelations), and auxiliary equivalence (i.e., response rates, missing data, completion time, and comfort completing questionnaires using paper-and-pencil and the Internet). Study 1 investigated the effects of completing surveys via paper-and-pencil or the Internet in both traditional (i.e., lab) and natural (i.e., take-home) settings. Results indicated equivalence across conditions, except for auxiliary equivalence aspects of missing data and completion time. Study 2 examined mailed paper-and-pencil and Internet surveys without contact between experimenter and participants. Results indicated equivalence between conditions, except for auxiliary equivalence aspects of response rate for providing an address and completion time. Overall, the findings show that paper-and-pencil and Internet data collection methods are generally equivalent, particularly for quantitative and qualitative equivalence, with nonequivalence only for some aspects of auxiliary equivalence. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Dawel, Amy; Wright, Luke; Irons, Jessica; Dumbleton, Rachael; Palermo, Romina; O'Kearney, Richard; McKone, Elinor
2017-08-01
In everyday social interactions, people's facial expressions sometimes reflect genuine emotion (e.g., anger in response to a misbehaving child) and sometimes do not (e.g., smiling for a school photo). There is increasing theoretical interest in this distinction, but little is known about perceived emotion genuineness for existing facial expression databases. We present a new method for rating perceived genuineness using a neutral-midpoint scale (-7 = completely fake; 0 = don't know; +7 = completely genuine) that, unlike previous methods, provides data on both relative and absolute perceptions. Normative ratings from typically developing adults for five emotions (anger, disgust, fear, sadness, and happiness) provide three key contributions. First, the widely used Pictures of Facial Affect (PoFA; i.e., "the Ekman faces") and the Radboud Faces Database (RaFD) are typically perceived as not showing genuine emotion. Also, in the only published set for which the actual emotional states of the displayers are known (via self-report; the McLellan faces), percepts of emotion genuineness often do not match actual emotion genuineness. Second, we provide genuine/fake norms for 558 faces from several sources (PoFA, RaFD, KDEF, Gur, FacePlace, McLellan, News media), including a list of 143 stimuli that are event-elicited (rather than posed) and, congruently, perceived as reflecting genuine emotion. Third, using the norms we develop sets of perceived-as-genuine (from event-elicited sources) and perceived-as-fake (from posed sources) stimuli, matched on sex, viewpoint, eye-gaze direction, and rated intensity. We also outline the many types of research questions that these norms and stimulus sets could be used to answer.
ERIC Educational Resources Information Center
Eaton, Danice K.; Brener, Nancy D.; Kann, Laura; Denniston, Maxine M.; McManus, Tim; Kyle, Tonja M.; Roberts, Alice M.; Flint, Katherine H.; Ross, James G.
2010-01-01
The authors examined whether paper-and-pencil and Web surveys administered in the school setting yield equivalent risk behavior prevalence estimates. Data were from a methods study conducted by the Centers for Disease Control and Prevention (CDC) in spring 2008. Intact classes of 9th- or 10th-grade students were assigned randomly to complete a…
Nonparametric Conditional Estimation
1987-02-01
the data because the statistician has complete control over the method. It is especially reasonable when there is a bona fide loss function to which... For example, the sample mean is m(Fn). Most calculations that statisticians perform on a set of data can be expressed as statistical functionals on...
SWCC Prediction: Seep/W Add-In Functions
2017-11-01
acquire this information is to investigate from which soil data set the predictive method was derived.
Constraints for the Trifocal Tensor
NASA Astrophysics Data System (ADS)
Alzati, Alberto; Tortora, Alfonso
In this chapter we give an account of two different methods to find constraints for the trifocal tensor T, used in geometric computer vision. We also show how to single out a set of only eight equations that are generically complete, i.e. for a generic choice of T, they suffice to decide whether T is indeed trifocal. Note that eight is the minimum possible number of constraints.
Numerical Solution for Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Warsi, Z. U. A.; Weed, R. A.; Thompson, J. F.
1982-01-01
Carefully selected blend of computational techniques solves complete set of equations for viscous, unsteady, hypersonic flow in general curvilinear coordinates. New algorithm has been tested on computation of axially directed flow about blunt body having shape similar to that of such practical bodies as wide-body aircraft or artillery shells. Method offers significant computational advantages because of conservation-law form of equations and because it reduces amount of metric data required.
Clarke, B.; O’Brien, A.; Hammond, A.; Ryan, S.; Kay, L.; Richards, P.; Almeida, C.
2008-01-01
Objectives. Rheumatological conditions are common, thus nurses (Ns) occupational therapists (OTs) and physiotherapists (PTs) require at least basic rheumatology knowledge upon qualifying. The aim of this study was to develop a core set of teaching topics and potential ways of delivering them. Methods. A modified Delphi technique was used for clinicians to develop preliminary core sets of teaching topics for each profession. Telephone interviews with educationalists explored their views on these, and challenges and solutions for delivering them. Inter-professional workshops enabled clinicians and educationalists to finalize the core set together, and generate methods for delivery. Results. Thirty-nine rheumatology clinicians (12N, 14OT, 13PT) completed the Delphi consensus, proposing three preliminary core sets (N71 items, OT29, PT26). Nineteen educationalists (6N, 7OT, 6PT) participated in telephone interviews, raising concerns about disease-specific vs generic teaching and proposing many methods for delivery. Three inter-professional workshops involved 34 participants (clinicians: N12, OT9, PT5; educationalists: N2, OT3, PT2; Patient 1) who reached consensus on a single core set comprising six teaching units: Anatomy and Physiology; Assessment; Management and Intervention; Psychosocial Issues; Patient Education; and the Multi-disciplinary Team, recommending some topics within the units receive greater depth for some professions. An innovative range of delivery options was generated plus two brief interventions: a Rheumatology Chat Show and a Rheumatology Road Show. Conclusions. Working together, clinicians and educationalists proposed a realistic core set of rheumatology topics for undergraduate health professionals. They proposed innovative delivery methods, with collaboration between educationalists, clinicians and patients strongly recommended. These potential interventions need testing. PMID:18443005
NASA Astrophysics Data System (ADS)
Brinkkemper, S.; Rossi, M.
1994-12-01
As customizable computer aided software engineering (CASE) tools, or CASE shells, have been introduced in academia and industry, there has been a growing interest in the systematic construction of methods and their support environments, i.e. method engineering. To aid method developers and method selectors in their tasks, we propose two sets of metrics, which measure the complexity of diagrammatic specification techniques on the one hand, and of complete systems development methods on the other hand. The proposed metrics provide a relatively fast and simple way to analyze the properties of a technique (or method), and, when accompanied by other selection criteria, can be used for estimating the cost of learning the technique and the relative complexity of a technique compared to others. To demonstrate the applicability of the proposed metrics, we have applied them to 34 techniques and 15 methods.
The Study of Imperfection in Rough Set on the Field of Engineering and Education
NASA Astrophysics Data System (ADS)
Sheu, Tian-Wei; Liang, Jung-Chin; You, Mei-Li; Wen, Kun-Li
Rough set theory overlaps with many other theories, especially fuzzy set theory, evidence theory and Boolean reasoning methods, and the rough set methodology has found many real-life applications, such as medical data analysis, finance, banking, engineering, voice recognition and image processing. To date, however, there has been little research on the imperfections of rough sets. Hence, the main purpose of this paper is to study the imperfection of rough sets in the fields of engineering and education. First, we review the mathematical model of rough sets and give two examples to illustrate our approach: one is the weighting of influence factors in a muzzle noise suppressor, and the other is the weighting of evaluation factors in English learning. Second, we apply Matlab to develop a complete human-machine interface toolbox to support the complex calculations and to verify the large data sets. Finally, some suggestions for future research are given.
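For readers unfamiliar with the underlying machinery, the short Python sketch below (the attribute table and target concept are invented, and Python stands in for the authors' Matlab toolbox) computes the lower and upper approximations and the boundary region on which rough-set analyses are built.

```python
from collections import defaultdict

# Hypothetical information table: object id -> condition-attribute values
table = {
    1: ("high", "yes"), 2: ("high", "yes"), 3: ("low", "no"),
    4: ("low", "no"),   5: ("low", "yes"),  6: ("high", "no"),
}
target = {1, 3, 5}   # objects belonging to the concept of interest

# Equivalence classes of the indiscernibility relation on the condition attributes
classes = defaultdict(set)
for obj, values in table.items():
    classes[values].add(obj)

lower = {o for c in classes.values() if c <= target for o in c}   # certainly in the concept
upper = {o for c in classes.values() if c & target for o in c}    # possibly in the concept
boundary = upper - lower                                          # the "imperfect" region

print("lower:", lower, "upper:", upper, "boundary:", boundary)
```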
Generalized Gaussian wave packet dynamics: Integrable and chaotic systems.
Pal, Harinder; Vyas, Manan; Tomsovic, Steven
2016-01-01
The ultimate semiclassical wave packet propagation technique is a complex, time-dependent Wentzel-Kramers-Brillouin method known as generalized Gaussian wave packet dynamics (GGWPD). It requires overcoming many technical difficulties in order to be carried out fully in practice. In its place roughly twenty years ago, linearized wave packet dynamics was generalized to methods that include sets of off-center, real trajectories for both classically integrable and chaotic dynamical systems that completely capture the dynamical transport. The connections between those methods and GGWPD are developed in a way that enables a far more practical implementation of GGWPD. The generally complex saddle-point trajectories at its foundation are found using a multidimensional Newton-Raphson root search method that begins with the set of off-center, real trajectories. This is possible because there is a one-to-one correspondence. The neighboring trajectories associated with each off-center, real trajectory form a path that crosses a unique saddle; there are exceptions that are straightforward to identify. The method is applied to the kicked rotor to demonstrate the accuracy improvement as a function of ℏ that comes with using the saddle-point trajectories.
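The saddle-point search at the heart of this procedure is a standard multidimensional Newton-Raphson iteration seeded, as described above, by an off-center real trajectory. The sketch below is a generic illustration of such an iteration with a finite-difference Jacobian and a toy root-finding problem; it is not the authors' GGWPD implementation, and a real application would work with complex-valued trajectories.

```python
import numpy as np

def newton_raphson(f, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Generic multidimensional Newton-Raphson root search; x0 plays the
    role of the off-center real-trajectory seed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = np.asarray(f(x))
        if np.linalg.norm(fx) < tol:
            break
        # Forward-difference Jacobian
        J = np.empty((fx.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (np.asarray(f(xp)) - fx) / h
        x = x - np.linalg.solve(J, fx)
    return x

# Toy example: intersection of a circle and a line
root = newton_raphson(lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]]),
                      x0=[0.8, 0.5])
print(root)  # approximately [0.7071, 0.7071]
```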
An ab initio study of the C3(+) cation using multireference methods
NASA Technical Reports Server (NTRS)
Taylor, Peter R.; Martin, J. M. L.; Francois, J. P.; Gijbels, R.
1991-01-01
The energy difference between the linear ²Σ⁺ᵤ and cyclic ²B₂ structures of C3(+) has been investigated using large (5s3p2d1f) basis sets and multireference electron correlation treatments, including complete active space self-consistent field (CASSCF), multireference configuration interaction (MRCI), and averaged coupled-pair functional (ACPF) methods, as well as the single-reference quadratic configuration interaction (QCISD(T)) method. Our best estimate, including a correction for basis set incompleteness, is that the linear form lies above the cyclic form by 5.2(+1.5 to -1.0) kcal/mol. The ²Σ⁺ᵤ state is probably not a transition state, but a local minimum. Reliable computation of the cyclic/linear energy difference in C3(+) is extremely demanding of the electron correlation treatment used: of the single-reference methods previously considered, CCSD(T) and QCISD(T) perform best. The MRCI + Q(0.01)/(4s2p1d) energy separation of 1.68 kcal/mol should provide a comparison standard for other electron correlation methods applied to this system.
Evaluation of three indices for biofilm accumulation on complete dentures.
Paranhos, Helena de Freitas Oliveira; Lovato da Silva, Claudia Helena; de Souza, Raphael Freitas; Pontes, Karina Matthes de Freitas
2010-03-01
The objective of this study was to evaluate the accuracy and reproducibility of three complete denture biofilm indices (Prosthesis Hygiene Index; Jeganathan et al. Index; Budtz-Jørgensen Index) by means of a computerised comparison method. Clinical studies into denture hygiene have employed a large number of biofilm indices among their outcome variables. However, knowledge about the validity of these indices is still scarce. Sixty-two complete denture wearers were selected. The internal surfaces of the upper complete dentures were stained (5% erythrosine) and photographed. The slides were projected on paper, and the biofilm indices were applied over the photos by means of a scoring method. For the computerised method, the areas (total and biofilm-covered) were measured by dedicated software (Image Tool). In addition, to compare the results of the computerised method and the Prosthesis Hygiene Index, a new scoring scale (with four and five grades) was introduced. For the Jeganathan et al. and Budtz-Jørgensen indices, the original scales were used. Values for each index were compared with the computerised method by the Friedman test. Their reproducibility was measured by means of weighted kappa. Significance for both tests was set at 0.05. The indices tested provided similar mean measures but they tended to overestimate biofilm coverage when compared with the computerised method (p < 0.001). Agreement between the Prosthesis Hygiene Index and the computerised method was not significant, regardless of the scale used. The Jeganathan et al. Index showed weak agreement, and consistent results were found for the Budtz-Jørgensen Index (kappa = 0.19 and 0.39, respectively). Assessment of accuracy for the biofilm indices showed instrument bias that was similar among the tested methods. Weak inter-instrument reproducibility was found for the indices, except for the Budtz-Jørgensen Index. This should be the method of choice for clinical studies when more sophisticated approaches are not possible.
Inductive matrix completion for predicting gene-disease associations.
Natarajan, Nagarajan; Dhillon, Inderjit S
2014-06-15
Most existing methods for predicting causal disease genes rely on specific type of evidence, and are therefore limited in terms of applicability. More often than not, the type of evidence available for diseases varies-for example, we may know linked genes, keywords associated with the disease obtained by mining text, or co-occurrence of disease symptoms in patients. Similarly, the type of evidence available for genes varies-for example, specific microarray probes convey information only for certain sets of genes. In this article, we apply a novel matrix-completion method called Inductive Matrix Completion to the problem of predicting gene-disease associations; it combines multiple types of evidence (features) for diseases and genes to learn latent factors that explain the observed gene-disease associations. We construct features from different biological sources such as microarray expression data and disease-related textual data. A crucial advantage of the method is that it is inductive; it can be applied to diseases not seen at training time, unlike traditional matrix-completion approaches and network-based inference methods that are transductive. Comparison with state-of-the-art methods on diseases from the Online Mendelian Inheritance in Man (OMIM) database shows that the proposed approach is substantially better-it has close to one-in-four chance of recovering a true association in the top 100 predictions, compared to the recently proposed Catapult method (second best) that has <15% chance. We demonstrate that the inductive method is particularly effective for a query disease with no previously known gene associations, and for predicting novel genes, i.e. genes that are previously not linked to diseases. Thus the method is capable of predicting novel genes even for well-characterized diseases. We also validate the novelty of predictions by evaluating the method on recently reported OMIM associations and on associations recently reported in the literature. Source code and datasets can be downloaded from http://bigdata.ices.utexas.edu/project/gene-disease. © The Author 2014. Published by Oxford University Press.
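Inductive matrix completion models the observed association matrix M as M ≈ X W Hᵀ Yᵀ, where the rows of X and Y are gene and disease feature vectors; because a prediction needs only the feature vectors, diseases unseen at training time can still be scored. The following sketch uses random toy data and plain gradient descent, not the OMIM-derived features or the solver used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_dis, fg, fd, k = 100, 30, 20, 10, 5
X = rng.normal(size=(n_genes, fg))        # gene features (e.g. expression-derived)
Y = rng.normal(size=(n_dis, fd))          # disease features (e.g. text-derived)
M = (rng.random((n_genes, n_dis)) < 0.05).astype(float)   # toy association matrix
Omega = rng.random(M.shape) < 0.8         # entries treated as observed (toy setup)

W = 0.01 * rng.normal(size=(fg, k))
H = 0.01 * rng.normal(size=(fd, k))
lam, lr = 0.1, 0.01
for _ in range(500):
    P = X @ W @ H.T @ Y.T                 # predicted association scores
    R = np.where(Omega, P - M, 0.0)       # residual on observed entries only
    W -= lr * (X.T @ R @ Y @ H + lam * W)
    H -= lr * (Y.T @ R.T @ X @ W + lam * H)

# Score a disease unseen at training time purely from its feature vector
y_new = rng.normal(size=fd)
scores = X @ W @ (H.T @ y_new)            # one score per gene
print(scores.shape)
```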
Semiautomated tremor detection using a combined cross-correlation and neural network approach
Horstmann, Tobias; Harrington, Rebecca M.; Cochran, Elizabeth S.
2013-01-01
Despite observations of tectonic tremor in many locations around the globe, the emergent phase arrivals, low‒amplitude waveforms, and variable event durations make automatic detection a nontrivial task. In this study, we employ a new method to identify tremor in large data sets using a semiautomated technique. The method first reduces the data volume with an envelope cross‒correlation technique, followed by a Self‒Organizing Map (SOM) algorithm to identify and classify event types. The method detects tremor in an automated fashion after calibrating for a specific data set, hence we refer to it as being “semiautomated”. We apply the semiautomated detection algorithm to a newly acquired data set of waveforms from a temporary deployment of 13 seismometers near Cholame, California, from May 2010 to July 2011. We manually identify tremor events in a 3 week long test data set and compare to the SOM output and find a detection accuracy of 79.5%. Detection accuracy improves with increasing signal‒to‒noise ratios and number of available stations. We find detection completeness of 96% for tremor events with signal‒to‒noise ratios above 3 and optimal results when data from at least 10 stations are available. We compare the SOM algorithm to the envelope correlation method of Wech and Creager and find the SOM performs significantly better, at least for the data set examined here. Using the SOM algorithm, we detect 2606 tremor events with a cumulative signal duration of nearly 55 h during the 13 month deployment. Overall, the SOM algorithm is shown to be a flexible new method that utilizes characteristics of the waveforms to identify tremor from noise or other seismic signals.
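A rough sketch of the first, data-reduction stage is given below: traces are band-passed, converted to envelopes via the analytic signal, and envelopes from station pairs are cross-correlated; windows with strong, sustained envelope correlation would then be passed to the SOM stage. The synthetic traces, frequency band and sampling rate are placeholders, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, correlate

fs = 100.0                                  # sampling rate (Hz), placeholder
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
trace_a = rng.normal(size=t.size)           # stand-ins for two station records
trace_b = np.roll(trace_a, 50) + 0.5 * rng.normal(size=t.size)

def envelope(x, fs, f1=2.0, f2=8.0):
    """Band-pass in a tremor-like band, then take the analytic-signal envelope."""
    b, a = butter(4, [f1 / (fs / 2), f2 / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

ea, eb = envelope(trace_a, fs), envelope(trace_b, fs)
ea = (ea - ea.mean()) / ea.std()
eb = (eb - eb.mean()) / eb.std()
cc = correlate(ea, eb, mode="full") / ea.size
lag = (np.argmax(cc) - (ea.size - 1)) / fs
print(f"peak envelope correlation {cc.max():.2f} at lag {lag:.2f} s")
```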
The rotate-plus-shift C-arm trajectory. Part I. Complete data with less than 180° rotation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritschl, Ludwig; Fleischmann, Christof; Kuntz, Jan, E-mail: j.kuntz@dkfz.de
Purpose: In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically a C-arm CT scan is performed using a circular or elliptical trajectory around a region of interest. Therefore, an angular range of at least 180° plus fan angle must be covered to ensure a completely sampled data set. However, mobile C-arms designed with a focus on classical 2D applications like fluoroscopy may be limited to a mechanical rotation range of less than 180° to improve handling and usability. The method proposed in this paper allows for the acquisition of a fully sampled data set with a system limited to a mechanical rotation range of at least 180° minus fan angle using a new trajectory design. This enables CT-like 3D imaging with a wide range of C-arm devices which are mainly designed for 2D imaging. Methods: The proposed trajectory extends the mechanical rotation range of the C-arm system with two additional linear shifts. Due to the divergent character of the fan-beam geometry, these two shifts lead to an additional angular range of half of the fan angle. Combining one shift at the beginning of the scan followed by a rotation and a second shift, the resulting rotate-plus-shift trajectory enables the acquisition of a completely sampled data set using only 180° minus fan angle of rotation. The shifts can be performed using, e.g., the two orthogonal positioning axes of a fully motorized C-arm system. The trajectory was evaluated in phantom and cadaver examinations using two prototype C-arm systems. Results: The proposed trajectory leads to reconstructions without limited angle artifacts. Compared to the limited angle reconstructions of 180° minus fan angle, image quality increased dramatically. Details in the rotate-plus-shift reconstructions were clearly depicted, whereas they are dominated by artifacts in the limited angle scan. Conclusions: The method proposed here enables 3D imaging using C-arms with less than 180° rotation range, adding full 3D functionality to a C-arm device while retaining both the handling comfort and the usability of 2D imaging. This method has a clear potential for clinical use, especially to meet the increasing demand for intraoperative 3D imaging.
Audit activity and quality of completed audit projects in primary care in Staffordshire.
Chambers, R; Bowyer, S; Campbell, I
1995-01-01
OBJECTIVES--To survey audit activity in primary care and determine which practice factors are associated with completed audit; to survey the quality of completed audit projects. DESIGN--From April 1992 to June 1993 a team from the medical audit advisory group visited all general practices; a research assistant visited each practice to study the best audit project. Data were collected in structured interviews. SETTING--Staffordshire, United Kingdom. SUBJECTS--All 189 general practices. MAIN MEASURES--Audit activity using Oxford classification system. Quality of best audit project by assessing choice of topic; participation of practice staff; setting of standards; methods of data collection and presentation of results; whether a plan to make changes resulted from the audit; and whether changes led to the set standards being achieved. RESULTS--Audit information was available from 169 practices (89%). 44(26%) practices had carried out at least one full audit; 40(24%) had not started audit. Mean scores with the Oxford classification system were significantly higher with the presence of a practice manager (2.7(95% confidence interval 2.4 to 2.9) v 1.2(0.7 to 1.8), p < 0.0001) and with computerisation (2.8(2.5 to 3.1) v 1.4 (0.9 to 2.0), p < 0.0001), organised notes (2.6(2.1 to 3.0) v 1.7(7.2 to 2.2), p = 0.03), being a training practice (3.5(3.2 to 3.8) v 2.1(1.8 to 2.4), p < 0.0001), and being a partnership (2.8(2.6 to 3.0) v 1.5(1.1 to 2.0), p < 0.0001). Standards had been set in 62 of the 71 projects reviewed. Data were collected prospectively in 36 projects and retrospectively in 35. 16 projects entailed taking samples from a study population and 55 from the whole population. 50 projects had a written summary. Performance was less than the standards set or expected in 56 projects. 62 practices made changes as a result of the audit. 35 of the 53 that had reviewed the changes found that the original standards had been reached. CONCLUSIONS--Evaluation of audit in primary care should include evaluation of the methods used, whether deficiencies were identified, and whether changes were implemented to resolve any problems found. PMID:10153426
NASA Technical Reports Server (NTRS)
Chen, J. C.; Garba, J. A.; Wada, B. K.
1978-01-01
In the design/analysis process of a payload structural system, the accelerations at the payload/launch vehicle interface obtained from a system analysis using a rigid payload are often used as the input forcing function to the elastic payload to obtain structural design loads. Such an analysis is at best an approximation since the elastic coupling effects are neglected. This paper develops a method wherein the launch vehicle/rigid payload interface accelerations are modified to account for the payload elasticity. The advantage of the proposed method, which is exact to the extent that the physical system can be described by a truncated set of generalized coordinates, is that the complete design/analysis process can be performed within the organization responsible for the payload design. The method requires the updating of the system normal modes to account for payload changes, but does not require a complete transient solution using the composite system model. An application to a real complex structure, the Viking Spacecraft System, is given.
Ge, Chongtao; Rymut, Susan; Lee, Cheonghoon; Lee, Jiyoung
2014-05-01
Mung bean sprouts, typically consumed raw or minimally cooked, are often contaminated with pathogens. Internalized pathogens pose a high risk because conventional sanitization methods are ineffective for their inactivation. The studies were performed (i) to understand the potential of internalization of Salmonella in mung bean sprouts under conditions where the irrigation water was contaminated and (ii) to determine if pre- and postharvest intervention methods are effective in inactivating the internalized pathogen. Mung bean sprouts were grown hydroponically and exposed to green fluorescence protein-tagged Salmonella Typhimurium through maturity. One experimental set received contaminated water daily, while other sets received contaminated water on a single day at different times. For preharvest intervention, irrigation water was exposed to UV, and for postharvest intervention, contaminated sprouts were subjected to a chlorine wash and UV light. Harvested samples were disinfected with ethanol and AgNO3 to differentiate surface-associated pathogens from the internalized ones. The internalized Salmonella Typhimurium in each set was quantified using the plate count method. Internalized Salmonella Typhimurium was detected at levels of 2.0 to 5.1 log CFU/g under all conditions. Continuous exposure to contaminated water during the entire period generated significantly higher levels of Salmonella Typhimurium internalization than sets receiving contaminated water for only a single day (P < 0.05). Preharvest intervention methods lowered the level of internalized Salmonella by 1.84 log CFU/g (P < 0.05), whereas postharvest intervention methods were ineffective in eliminating internalized pathogens. Preharvest intervention did not completely inactivate bacteria in sprouts and demonstrated that the remaining Salmonella Typhimurium in water became more resistant to UV. Because postharvest intervention methods are ineffective, proper procedures for maintaining clean irrigation water must be followed throughout production in a hydroponic system.
A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.
Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun
2016-07-19
Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.
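The gradient idea can be illustrated on a toy height-derived intensity image: a planar roof produces a near-constant gradient along its slope, whereas vegetation produces an erratic gradient, so thresholding the local variability of the gradient magnitude separates the two. The synthetic image, window size and thresholds below are illustrative only and are not the GBE parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((60, 60))                       # height-derived intensity image (placeholder)
yy, xx = np.mgrid[0:20, 0:30]
img[10:30, 10:40] = 5.0 + 0.1 * xx             # planar roof: constant slope
img[40:55, 40:55] = 5.0 + rng.normal(scale=1.0, size=(15, 15))  # tree canopy: random heights

gy, gx = np.gradient(img)
gmag = np.hypot(gx, gy)

def local_std(a, w=3):
    """Standard deviation of the gradient magnitude in a sliding w x w window."""
    pad = np.pad(a, w // 2, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(pad, (w, w))
    return windows.std(axis=(-1, -2))

var_map = local_std(gmag)
elevated = img > 2.0                           # above-ground mask (placeholder threshold)
building_mask = elevated & (var_map < 0.2)     # smooth gradient -> roof plane
tree_mask = elevated & (var_map >= 0.2)        # erratic gradient -> vegetation
print(building_mask.sum(), "roof pixels,", tree_mask.sum(), "tree pixels")
```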
NASA Astrophysics Data System (ADS)
Zirari, M.; Abdellah El-Hadj, A.; Bacha, N.
2010-03-01
A finite element method is used to simulate the deposition of the thermal spray coating process. A set of governing equations is solved by a volume of fluid method. For the solidification phenomenon, we use the specific heat method (SHM). We begin by comparing the present model with experimental and numerical models available in the literature. In this study, the impact of a completely molten or semi-molten aluminum particle on an H13 tool steel substrate is considered. Next we investigate the effect of the inclination of impact of a partially molten particle on a flat substrate. It was found that the melting state of the particle has great effects on the morphologies of the splat.
An auxiliary-field quantum Monte Carlo study of the chromium dimer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purwanto, Wirawan, E-mail: wirawan0@gmail.com; Zhang, Shiwei; Krakauer, Henry
2015-02-14
The chromium dimer (Cr₂) presents an outstanding challenge for many-body electronic structure methods. Its complicated nature of binding, with a formal sextuple bond and an unusual potential energy curve (PEC), is emblematic of the competing tendencies and delicate balance found in many strongly correlated materials. We present an accurate calculation of the PEC and ground state properties of Cr₂, using the auxiliary-field quantum Monte Carlo (AFQMC) method. Unconstrained, exact AFQMC calculations are first carried out for a medium-sized but realistic basis set. Elimination of the remaining finite-basis errors and extrapolation to the complete basis set limit are then achieved with a combination of phaseless and exact AFQMC calculations. Final results for the PEC and spectroscopic constants are in excellent agreement with experiment.
Efficient solution of ordinary differential equations modeling electrical activity in cardiac cells.
Sundnes, J; Lines, G T; Tveito, A
2001-08-01
The contraction of the heart is preceded and caused by a cellular electro-chemical reaction, causing an electrical field to be generated. Performing realistic computer simulations of this process involves solving a set of partial differential equations, as well as a large number of ordinary differential equations (ODEs) characterizing the reactive behavior of the cardiac tissue. Experiments have shown that the solution of the ODEs contribute significantly to the total work of a simulation, and there is thus a strong need to utilize efficient solution methods for this part of the problem. This paper presents how an efficient implicit Runge-Kutta method may be adapted to solve a complicated cardiac cell model consisting of 31 ODEs, and how this solver may be coupled to a set of PDE solvers to provide complete simulations of the electrical activity.
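The role of an implicit Runge-Kutta solver can be illustrated with a much smaller stand-in for a cardiac cell model. The sketch below integrates the two-variable FitzHugh-Nagumo model (not the 31-ODE model of the paper) with SciPy's implicit Radau method, which remains stable for stiff kinetics that would force explicit solvers to take very small steps.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, a=0.7, b=0.8, eps=0.08, I=0.5):
    """Two-variable excitable-cell surrogate (stand-in for a full ionic model)."""
    v, w = y
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    return [dv, dw]

sol = solve_ivp(fitzhugh_nagumo, (0.0, 200.0), [-1.0, 1.0],
                method="Radau",        # implicit Runge-Kutta, suited to stiff kinetics
                rtol=1e-6, atol=1e-8,
                dense_output=True)
print(sol.status, sol.y.shape)
```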
A Geometrical-Statistical Approach to Outlier Removal for TDOA Measurements
NASA Astrophysics Data System (ADS)
Compagnoni, Marco; Pini, Alessia; Canclini, Antonio; Bestagini, Paolo; Antonacci, Fabio; Tubaro, Stefano; Sarti, Augusto
2017-08-01
The curse of outlier measurements in estimation problems is a well-known issue in a variety of fields. Therefore, outlier removal procedures, which enable the identification of spurious measurements within a set, have been developed for many different scenarios and applications. In this paper, we propose a statistically motivated outlier removal algorithm for time differences of arrival (TDOAs), or equivalently range differences (RD), acquired at sensor arrays. The method exploits the TDOA-space formalism and works by knowing only the relative sensor positions. As the proposed method is completely independent of the application for which the measurements are used, it can be reliably used to identify outliers within a set of TDOA/RD measurements in different fields (e.g. acoustic source localization, sensor synchronization, radar, remote sensing). The proposed outlier removal algorithm is validated by means of synthetic simulations and real experiments.
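The algorithm itself operates in the TDOA space defined by the array geometry; as a much cruder, generic stand-in, the sketch below screens a redundant set of pairwise TDOAs using the closure relation τ_ij + τ_jk ≈ τ_ik together with a median-absolute-deviation rule. The sensor positions, the injected outlier and the threshold are all invented for illustration and do not reproduce the paper's method.

```python
import numpy as np

def closure_outliers(tdoa, thresh=3.5):
    """Flag pairwise TDOAs whose triple-closure residuals
    |tau_ij + tau_jk - tau_ik| are abnormally large (MAD rule).
    `tdoa` is a full antisymmetric matrix of pairwise TDOAs in seconds."""
    n = tdoa.shape[0]
    score = np.zeros((n, n))
    count = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if len({i, j, k}) < 3:
                    continue
                r = abs(tdoa[i, j] + tdoa[j, k] - tdoa[i, k])
                for a, b in ((i, j), (j, k), (i, k)):
                    score[a, b] += r
                    count[a, b] += 1
    score = score / np.maximum(count, 1)
    med = np.median(score[count > 0])
    mad = np.median(np.abs(score[count > 0] - med)) + 1e-12
    return (score - med) / (1.4826 * mad) > thresh   # boolean outlier mask

# Toy example: 5 sensors, one corrupted measurement
rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, size=(5, 2))
src = np.array([3.0, 4.0])
c = 343.0                                            # speed of sound (m/s)
toa = np.linalg.norm(pos - src, axis=1) / c
tdoa = toa[:, None] - toa[None, :]
tdoa[1, 3] += 5e-3
tdoa[3, 1] -= 5e-3                                   # inject an outlier on pair (1, 3)
print(np.argwhere(closure_outliers(tdoa)))
```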
Li, Wentao; Yuan, Jiuchuang; Yuan, Meiling; Zhang, Yong; Yao, Minghai; Sun, Zhigang
2018-01-03
A new global potential energy surface (PES) of the O+ + H2 system was constructed with the permutation invariant polynomial neural network method, using about 63 000 ab initio points, which were calculated by employing the multi-reference configuration interaction method with aug-cc-pVTZ and aug-cc-pVQZ basis sets. To improve the accuracy of the PES, the calculations were extrapolated to the complete basis set limit by the two-point extrapolation method. The root mean square error of the fit was only 5.28 × 10⁻³ eV. The spectroscopic constants of the diatomic molecules were calculated and compared with previous theoretical and experimental results, which suggests that the present results agree well with experiment. On the newly constructed PES, reaction dynamics studies were performed using the time-dependent wave packet method. The calculated integral cross sections (ICSs) were compared with the available theoretical and experimental results, where a good agreement with the experimental data was seen. Significant forward and backward scattering was observed in the whole collision energy region studied. At the same time, the differential cross sections were biased toward forward scattering, especially at higher collision energies.
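A common two-point complete-basis-set extrapolation assumes E(X) = E_CBS + A·X⁻³ for basis sets with cardinal numbers X (3 for aug-cc-pVTZ, 4 for aug-cc-pVQZ); whether this is the exact form used here is not stated, so the snippet below is only a generic sketch with placeholder energies.

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point CBS extrapolation assuming E(X) = E_CBS + A / X**3,
    with x, y the cardinal numbers of the two basis sets."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Placeholder total energies (hartree), for illustration only
e_tz, e_qz = -75.912345, -75.934567
print(cbs_two_point(e_qz, 4, e_tz, 3))
```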
Smirr, Jean-Loup; Guilbaud, Sylvain; Ghalbouni, Joe; Frey, Robert; Diamanti, Eleni; Alléaume, Romain; Zaquine, Isabelle
2011-01-17
Fast characterization of pulsed spontaneous parametric down conversion (SPDC) sources is important for applications in quantum information processing and communications. We propose a simple method to perform this task, which only requires measuring the counts on the two output channels and the coincidences between them, as well as modeling the filter used to reduce the source bandwidth. The proposed method is experimentally tested and used for a complete evaluation of SPDC sources (pair emission probability, total losses, and fidelity) of various bandwidths. This method can find applications in the setting up of SPDC sources and in the continuous verification of the quality of quantum communication links.
Quantum phase space with a basis of Wannier functions
NASA Astrophysics Data System (ADS)
Fang, Yuan; Wu, Fan; Wu, Biao
2018-02-01
A quantum phase space with a Wannier basis is constructed: (i) classical phase space is divided into Planck cells; (ii) a complete set of Wannier functions is constructed by combining Kohn's method and the Löwdin method such that each Wannier function is localized at a Planck cell. With these Wannier functions one can map a wave function unitarily onto phase space. Various examples are used to illustrate our method and compare it to the Wigner function. The advantage of our method is that it can smooth out the oscillations in wave functions without losing any information and is potentially a better tool in studying quantum-classical correspondence. In addition, we point out that our method can be used for time-frequency analysis of signals.
Current Climate Data Set Documentation Standards: Somewhere between Anagrams and Full Disclosure
NASA Astrophysics Data System (ADS)
Fleig, A. J.
2008-12-01
In the 17th century scientists, concerned with establishing primacy for their discoveries while maintaining control of their intellectual property, often published their results as anagrams. Robert Hooke's initial publication in 1676 of his law of elasticity in the form ceiiinosssttuv, which he revealed two years later as "Ut tensio sic vis" or "as the extension, so the force", is one of the better-known examples, although Galileo, Newton, and many others used the same approach. Fortunately the idea of open publication in scientific journals subject to peer review as a cornerstone of the scientific method gradually became established and is now the norm. Unfortunately, though, even peer-reviewed publication does not necessarily lead to full disclosure. One example of this occurs in the production, review and distribution of large-scale data sets of climate variables. Validation papers describe how the data were made in concept but do not provide adequate documentation of the process. Complete provenance of the resulting data sets, including description of the exact input files, processing environment, and actual processing code, is not required as part of the production and archival effort. A user of the data may be assured by the publication and peer review that the data are considered to be good and usable for scientific investigation but will not know exactly how the data set was made. The problem with this lack of knowledge may be most apparent when considering questions of climate change. Future measurements of the same geophysical parameter will surely be derived from a different observational system than the one used in creating today's data sets. An obvious task in assessing change between the present and the future data set will be to determine how much of the change is because the parameter changed and how much is because the measurement system changed. This will be hard to do without complete knowledge of how the predecessor data set was made. Automated techniques are being developed that will simplify the creation of much of the provenance information, but there are both cultural and infrastructure problems that discourage provision of complete documentation. It is time to reconsider what the standards for production and documentation of data sets should be. There is only a short window before the loss of knowledge about current data sets associated with human mortality becomes irreversible.
Calculations for energies, transition rates, and lifetimes in Al-like Kr XXIV
NASA Astrophysics Data System (ADS)
Zhang, C. Y.; Si, R.; Liu, Y. W.; Yao, K.; Wang, K.; Guo, X. L.; Li, S.; Chen, C. Y.
2018-05-01
Using the second-order many-body perturbation theory (MBPT) method, a complete and accurate data set of excitation energies, lifetimes, wavelengths, and electric dipole (E1), magnetic dipole (M1), electric quadrupole (E2), and magnetic quadrupole (M2) line strengths, transition rates, and oscillator strengths for the lowest 880 levels arising from the 3l³ (0 ≤ l ≤ 2), 3l²4l′ (0 ≤ l ≤ 2, 0 ≤ l′ ≤ 3), 3s²5l (0 ≤ l ≤ 4), 3p²5l (0 ≤ l ≤ 1), and 3s3p5l (0 ≤ l ≤ 4) configurations in Al-like Kr XXIV is provided. Comparisons are made with available experimental and theoretical results. Our calculated energies are expected to be accurate enough to facilitate identifications of observed lines involving the n = 4, 5 levels. The complete data set is also useful for modeling and diagnosing fusion plasma.
Hopke, P K; Liu, C; Rubin, D B
2001-03-01
Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
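The three models in the paper are bespoke, but the general multiple-imputation workflow can be sketched with off-the-shelf tools: coarsened values (fully missing or below the detection limit) are set to missing, several stochastic completions are drawn, the complete-data analysis is run on each, and the results are pooled. The sketch below uses scikit-learn's IterativeImputer on synthetic concentrations and, as a simplification, ignores the constraint that censored values lie below the detection limit.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 5))       # synthetic concentrations
det_limit = 0.4
X_obs = X.copy()
X_obs[X_obs < det_limit] = np.nan            # censored values treated here as plain missing
X_obs[rng.random(X_obs.shape) < 0.05] = np.nan              # some fully missing values

m = 5                                        # number of imputations
means = []
for seed in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=seed)
    X_complete = imp.fit_transform(X_obs)
    means.append(X_complete.mean(axis=0))    # the "complete-data" analysis (here: column means)

means = np.array(means)
pooled = means.mean(axis=0)                  # pooled point estimate (Rubin's rules)
between = means.var(axis=0, ddof=1)          # between-imputation variance component
print(pooled, between)
```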
Three-dimensional baroclinic instability of a Hadley cell for small Richardson number
NASA Technical Reports Server (NTRS)
Antar, B. N.; Fowlis, W. W.
1985-01-01
A three-dimensional, linear stability analysis of a baroclinic flow for a Richardson number, Ri, of order unity is presented. The model considered is a thin horizontal, rotating fluid layer which is subjected to horizontal and vertical temperature gradients. The basic state is a Hadley cell which is a solution of the complete set of governing, nonlinear equations and contains both Ekman and thermal boundary layers adjacent to the rigid boundaries; it is given in a closed form. The stability analysis is also based on the complete set of equations, and perturbations possessing zonal, meridional, and vertical structures were considered. Numerical methods were developed for the stability problem, which results in a stiff, eighth-order, ordinary differential eigenvalue problem. The previous work on three-dimensional baroclinic instability for small Ri was extended to a more realistic model involving the Prandtl number, sigma, and the Ekman number, E, and to finite growth rates and a wider range of the zonal wavenumber.
Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.
Yamamoto, Loren; Kanemori, Joan
2010-06-01
Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for the error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, noting that reading/interpreting certain drug labels were more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Symplectic analysis of three-dimensional Abelian topological gravity
NASA Astrophysics Data System (ADS)
Cartas-Fuentevilla, R.; Escalante, Alberto; Herrera-Aguilar, Alfredo
2017-02-01
A detailed Faddeev-Jackiw quantization of an Abelian topological gravity is performed; we show that this formalism is equivalent to and more economical than Dirac's method. In particular, we identify the complete set of constraints of the theory, from which the number of physical degrees of freedom is explicitly computed. We prove that the generalized Faddeev-Jackiw brackets and the Dirac ones coincide with each other. Moreover, we perform the Faddeev-Jackiw analysis of the theory at the chiral point, and the full set of constraints and the generalized Faddeev-Jackiw brackets are constructed. Finally we compare our results with those found in the literature and we discuss some remarks and prospects.
Impact of magnitude uncertainties on seismic catalogue properties
NASA Astrophysics Data System (ADS)
Leptokaropoulos, K. M.; Adamaki, A. K.; Roberts, R. G.; Gkarlaouni, C. G.; Paradisopoulou, P. M.
2018-05-01
Catalogue-based studies are of central importance in seismological research, to investigate the temporal, spatial and size distribution of earthquakes in specified study areas. Methods for estimating the fundamental catalogue parameters like the Gutenberg-Richter (G-R) b-value and the completeness magnitude (Mc) are well established and routinely applied. However, the magnitudes reported in seismicity catalogues contain measurement uncertainties which may significantly distort the estimation of the derived parameters. In this study, we use numerical simulations of synthetic data sets to assess the reliability of different methods for determining the b-value and Mc, assuming the validity of the G-R law. After contaminating the synthetic catalogues with Gaussian noise (with selected standard deviations), the analysis is performed for numerous data sets of different sample size (N). The noise introduced to the data generally leads to a systematic overestimation of magnitudes close to and above Mc. This causes an increase of the average number of events above Mc, which in turn leads to an apparent decrease of the b-value. This may result in a significant overestimation of the seismicity rate even well above the actual completeness level. The b-value can in general be reliably estimated even for relatively small data sets (N < 1000) when only magnitudes higher than the actual completeness level are used. Nevertheless, a correction of the total number of events belonging in each magnitude class (i.e. 0.1 unit) should be considered to deal with the magnitude uncertainty effect. Because magnitude uncertainties (here in the form of Gaussian noise) are inevitable in all instrumental catalogues, this finding is fundamental for seismicity rate and seismic hazard assessment analyses. Also important is that for some data analyses significant bias cannot necessarily be avoided by choosing a high Mc value for analysis. In such cases, there may be a risk of severe miscalculation of the seismicity rate regardless of the selected magnitude threshold, unless possible bias is properly assessed.
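A toy Monte Carlo of the kind described can be written in a few lines: draw Gutenberg-Richter magnitudes, add Gaussian noise, and estimate the b-value with the Aki-Utsu maximum-likelihood formula above a chosen threshold. The noise level, binning width and thresholds below are placeholders, and the synthetic catalogue is assumed complete above its minimum magnitude.

```python
import numpy as np

rng = np.random.default_rng(42)
b_true, m_min, n = 1.0, 0.0, 50_000
beta = b_true * np.log(10.0)
mags = m_min + rng.exponential(1.0 / beta, size=n)      # Gutenberg-Richter magnitudes
noisy = mags + rng.normal(scale=0.2, size=n)            # magnitude uncertainty, sigma = 0.2
dm = 0.1                                                # magnitude binning width

def b_value(m, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value estimate with binning correction."""
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

mc = 0.3   # threshold near the catalogue minimum, where the noise effect is strongest
print("b from true magnitudes :", round(b_value(np.round(mags / dm) * dm, mc), 3))
print("b from noisy magnitudes:", round(b_value(np.round(noisy / dm) * dm, mc), 3))
```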
A reanalysis of the indirect evidence for recombination in human mitochondrial DNA.
Piganeau, G; Eyre-Walker, A
2004-04-01
In an attempt to resolve the controversy about whether recombination occurs in human mtDNA, we have analysed three recently published data sets of complete mtDNA sequences along with 10 RFLP data sets. We have analysed the relationship between linkage disequilibrium (LD) and distance between sites under a variety of conditions using two measures of LD, r² and |D′|. We find that there is a negative correlation between r² and distance in the majority of data sets, but no overall trend for |D′|. Five out of six mtDNA sequence data sets show an excess of homoplasy, but this could be due to either recombination or hypervariable sites. Two additional recombination detection methods used, Geneconv and Maximum Chi-Square, showed nonsignificant results. The overall significance of these findings is hard to quantify because of nonindependence, but our results suggest a lack of evidence for recombination in human mtDNA.
QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.
Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V
2015-07-27
Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases to QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level.
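The curation rule that worked best, restricting the modeling set to compounds tested with a single assay protocol and discarding inconsistent replicates, can be sketched on a generic table of activity records. The column names, assay labels and 0.5 log-unit tolerance below are hypothetical.

```python
import pandas as pd

# Hypothetical activity records aggregated from several source databases
records = pd.DataFrame({
    "compound_id": ["C1", "C1", "C2", "C2", "C3", "C4"],
    "target":      ["HIV-1 RT"] * 6,
    "assay_type":  ["enzymatic", "cell-based", "enzymatic",
                    "enzymatic", "cell-based", "enzymatic"],
    "pIC50":       [6.1, 4.8, 7.2, 7.0, 5.5, 6.9],
})

# 1) Restrict the modeling set to a single assay protocol
one_assay = records[records["assay_type"] == "enzymatic"]

# 2) Drop compounds whose replicate results diverge too widely (inconsistent data)
spread = one_assay.groupby("compound_id")["pIC50"].agg(["mean", "min", "max"])
consistent = spread[(spread["max"] - spread["min"]) <= 0.5]   # 0.5 log-unit tolerance

modeling_set = consistent["mean"].rename("pIC50_mean").reset_index()
print(modeling_set)
```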
A method for monitoring intensity during aquatic resistance exercises.
Colado, Juan C; Tella, Victor; Triplett, N Travis
2008-11-01
The aims of this study were (i) to check whether monitoring of both the rhythm of execution and the perceived effort is a valid tool for reproducing the same intensity of effort in different sets of the same aquatic resistance exercise (ARE) and (ii) to assess whether this method allows the ARE to be put at the same intensity level as its equivalent carried out on dry land. Four healthy trained young men performed horizontal shoulder abduction and adduction (HSAb/Ad) movements in water and on dry land. Muscle activation was recorded using surface electromyography of 1 stabilizer and several agonist muscles. Before the final tests, the ARE movement cadence was established individually following a rhythmic digitalized sequence of beats to define the alternate HSAb/Ad movements. This cadence allowed the subject to perform 15 repetitions at a perceived exertion of 9-10 using Hydro-Tone Bells. After that, each subject performed 2 nonconsecutive ARE sets. The dry land exercises (1 set of HSAb and 1 set of HSAd) were performed using a dual adjustable pulley cable motion machine, with prior selection of weights that allowed the same movement cadence to be maintained and the completion of the same repetitions in each of the sets as with the ARE. The average normalized data were compared for the exercises in order to determine possible differences in muscle activity. The results show the validity of this method for reproducing the intensity of effort in different sets of the same ARE, but not for matching the same intensity level as kinematically similar land-based exercises.
Code of Federal Regulations, 2010 CFR
2010-04-01
... TREASURY LIQUORS LABELING AND ADVERTISING OF MALT BEVERAGES Labeling Requirements for Malt Beverages § 7.20... permission to relabel shall be accompanied by two complete sets of the old labels and two complete sets of...
Information filtering in evolving online networks
NASA Astrophysics Data System (ADS)
Chen, Bo-Lun; Li, Fen-Fen; Zhang, Yong-Jun; Ma, Jia-Lin
2018-02-01
Recommender systems use the records of users' activities and the profiles of both users and products to predict users' preferences in the future. A considerable body of work on recommendation algorithms has been published to address problems such as accuracy, diversity, congestion, cold-start, novelty and coverage. However, most of this research did not consider the temporal effects of the information included in the users' historical data. For example, the segmentation of the training set and test set was completely random, which is entirely different from the real scenario in recommender systems. More seriously, all objects are treated the same, regardless of whether products are new, popular or obsolete, and so are the users. These data processing methods lose useful information and mislead the understanding of the system's state. In this paper, we analyzed in detail the differences in network structure between the traditional random division method and the temporal division method on two benchmark data sets, Netflix and MovieLens. Then three classical recommendation algorithms, the Global Ranking method, Collaborative Filtering and the Mass Diffusion method, were employed. The results show that all these algorithms perform worse on all four key indicators, ranking score, precision, popularity and diversity, in the temporal scenario. Finally, we design a new recommendation algorithm based on both users' and objects' first appearance time in the system. Experimental results show that the new algorithm can greatly improve the accuracy and other metrics.
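The difference between the two evaluation protocols comes down to how the interaction log is split; a minimal sketch with hypothetical column names is given below.

```python
import numpy as np
import pandas as pd

# Hypothetical interaction log: one row per (user, item, timestamp) record
log = pd.DataFrame({
    "user": np.random.default_rng(0).integers(0, 100, size=1000),
    "item": np.random.default_rng(1).integers(0, 50, size=1000),
    "ts":   np.random.default_rng(2).integers(0, 10_000, size=1000),
})

# Conventional protocol: random 90/10 split, ignoring time
shuffled = log.sample(frac=1.0, random_state=42)
train_rand, test_rand = shuffled.iloc[:900], shuffled.iloc[900:]

# Temporal protocol: train on the earliest 90% of events, test on the newest 10%
ordered = log.sort_values("ts")
cut = int(0.9 * len(ordered))
train_time, test_time = ordered.iloc[:cut], ordered.iloc[cut:]

# In the temporal split, items appearing only in the test period are genuinely
# new to the recommender, which is part of what degrades the accuracy metrics.
new_items = set(test_time["item"]) - set(train_time["item"])
print(len(new_items), "items unseen during training")
```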
NASA Astrophysics Data System (ADS)
Ganesan, A.; Alakhras, M.; Brennan, P. C.; Lee, W.; Tapia, K.; Mello-Thoms, C.
2018-03-01
Purpose: To determine the impact of Breast Screen Reader Assessment Strategy (BREAST) over time in improving radiologists' breast cancer detection performance, and to identify the group of radiologists that benefit the most by using BREAST as a training tool. Materials and Methods: Thirty-six radiologists who completed three case-sets offered by BREAST were included in this study. The case-sets were arranged in radiologists' chronological order of completion and five performance measures (sensitivity, specificity, location sensitivity, receiver operating characteristics area under the curve (ROC AUC) and jackknife alternative free-response receiver operating characteristic (JAFROC) figure-of-merit (FOM)), available from BREAST, were compared between case-sets to determine the level of improvement achieved. The radiologists were then grouped based on their characteristics and the above performance measures between the case-sets were compared. Paired t-tests or Wilcoxon signed-rank tests with statistical significance set at p < 0.05 were used to compare the performance measures. Results: Significant improvement was demonstrated in radiologists' case-set performance in terms of location sensitivity and JAFROC FOM over the years, and radiologists' location sensitivity and JAFROC FOM showed significant improvement irrespective of their characteristics. In terms of ROC AUC, significant improvement was shown for radiologists who were reading screen mammograms for more than 7 years and spent more than 9 hours per week reading mammograms. Conclusion: Engaging with case-sets appears to enhance radiologists' performance suggesting the important value of initiatives such as BREAST. However, such performance enhancement was not shown for everyone, highlighting the need to tailor the BREAST platform to benefit all radiologists.
In vitro osteogenic/dentinogenic potential of an experimental calcium aluminosilicate cement
Eid, Ashraf A.; Niu, Li-na; Primus, Carolyn M.; Opperman, Lynne A.; Watanabe, Ikuya; Pashley, David H.; Tay, Franklin R.
2013-01-01
Introduction Calcium aluminosilicate cements are fast-setting, acid-resistant, bioactive cements that may be used as root-repair materials. This study examined the osteogenic/dentinogenic potential of an experimental calcium aluminosilicate cement (Quick-Set) using a murine odontoblast-like cell model. Methods Quick-Set and white ProRoot MTA (WMTA) were mixed with the proprietary gel or deionized water, allowed to set completely in 100% relative humidity and aged in complete growth medium for 2 weeks until rendered non-cytotoxic. Similarly-aged Teflon discs were used as a negative control. The MDPC-23 cell-line was used for evaluating changes in mRNA expression of genes associated with osteogenic/dentinogenic differentiation and mineralization (qRT-PCR), alkaline phosphatase enzyme production, and extracellular matrix mineralization (Alizarin red-S staining). Results After MDPC-23 cells were incubated with the materials in osteogenic differentiation medium for 1 week, both cements showed upregulation in ALP and DSPP expression. Fold increases in these two genes were not significantly different between Quick-Set and WMTA. Both cements showed no statistically significant upregulation/downregulation in RUNX2, OCN, BSP and DMP1 gene expression compared with Teflon. Alkaline phosphatase activity of cells cultured on Quick-Set and WMTA was not significantly different at 1 week or 2 weeks, but was significantly higher (p<0.05) than Teflon in both weeks. Both cements showed significantly higher calcium deposition compared with Teflon after 3 weeks of incubation in mineralizing medium (p<0.001). Differences between Quick-Set and WMTA were not statistically significant. Conclusions The experimental calcium aluminosilicate cement exhibits similar osteogenic/dentinogenic properties to WMTA and may be a potential substitute for commercially-available tricalcium silicate cements. PMID:23953291
MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
2016-08-03
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
Computation of the Genetic Code
NASA Astrophysics Data System (ADS)
Kozlov, Nicolay N.; Kozlova, Olga N.
2018-03-01
One of the problems in the development of a mathematical theory of the genetic code (a summary is presented in [1], the details in [2]) is the problem of the calculation of the genetic code. No similar problems are known elsewhere, and they could be posed only in the 21st century. This work is devoted to one approach to solving this problem. For the first time we provide a detailed description of the method of calculation of the genetic code, the idea of which was first published earlier [3]; the choice of one of the most important sets for the calculation was based on the article [4]. Such a set of amino acids corresponds to a complete set of representations of the plurality of overlapping gene triples belonging to the same DNA strand. A separate issue was the initial point that triggers the iterative search over all codes consistent with the initial data. Mathematical analysis has shown that the said set contains some ambiguities, which were found thanks to our proposed compressed representation of the set. As a result, the developed method of calculation was limited to the two main stages of research, where in the first stage only part of the area was used in the calculations. The proposed approach significantly reduces the amount of computation at each step in this complex discrete structure.
Molecular system identification for enzyme directed evolution and design
NASA Astrophysics Data System (ADS)
Guan, Xiangying; Chakrabarti, Raj
2017-09-01
The rational design of chemical catalysts requires methods for the measurement of free energy differences in the catalytic mechanism for any given catalyst Hamiltonian. The scope of experimental learning algorithms that can be applied to catalyst design would also be expanded by the availability of such methods. Methods for catalyst characterization typically either estimate apparent kinetic parameters that do not necessarily correspond to free energy differences in the catalytic mechanism or measure individual free energy differences that are not sufficient for establishing the relationship between the potential energy surface and catalytic activity. Moreover, in order to enhance the duty cycle of catalyst design, statistically efficient methods for the estimation of the complete set of free energy differences relevant to the catalytic activity based on high-throughput measurements are preferred. In this paper, we present a theoretical and algorithmic system identification framework for the optimal estimation of free energy differences in solution phase catalysts, with a focus on one- and two-substrate enzymes. This framework, which can be automated using programmable logic, prescribes a choice of feasible experimental measurements and manipulated input variables that identify the complete set of free energy differences relevant to the catalytic activity and minimize the uncertainty in these free energy estimates for each successive Hamiltonian design. The framework also employs decision-theoretic logic to determine when model reduction can be applied to improve the duty cycle of high-throughput catalyst design. Automation of the algorithm using fluidic control systems is proposed, and applications of the framework to the problem of enzyme design are discussed.
Analysis of the nutritional status of algae by Fourier transform infrared chemical imaging
NASA Astrophysics Data System (ADS)
Hirschmugl, Carol J.; Bayarri, Zuheir-El; Bunta, Maria; Holt, Justin B.; Giordano, Mario
2006-09-01
A new non-destructive method to study the nutritional status of algal cells and their environments is demonstrated. This approach allows rapid examination of whole cells with little or no pre-treatment, providing a large amount of information on the biochemical composition of cells and growth medium. The method is based on the analysis of a collection of infrared (IR) spectra for individual cells; each spectrum describes the biochemical composition of a portion of a cell, and a complete set of spectra is used to reconstruct an image of the entire cell. To obtain spatially resolved information, synchrotron radiation was used as a bright IR source. We tested this method on the green flagellate Euglena gracilis; a comparison was conducted between cells grown in nutrient-replete conditions (Type 1) and cells allowed to deplete their medium (Type 2). Complete sets of spectra for individual cells of both types were analyzed with agglomerative hierarchical clustering, leading to distinct clusters representative of the two types of cells. The average spectra for the clusters confirmed the similarities between the clusters and the types of cells. The clustering analysis, therefore, allows the distinction of cells of the same species but with different nutritional histories. In order to facilitate the application of the method and reduce manipulation (washing), we analyzed the cells in the presence of residual medium. The results obtained showed that even with residual medium the outcome of the clustering analysis is reliable. Our results demonstrate the applicability of FTIR microspectroscopy for ecological and ecophysiological studies.
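The clustering step described above is standard agglomerative hierarchical clustering applied to per-pixel spectra. A minimal sketch of that step, with synthetic Gaussian-band spectra standing in for measured FTIR data (band positions and noise level are invented), might look as follows.

```python
# Agglomerative hierarchical clustering of per-pixel IR spectra (sketch).
# Synthetic Gaussian-band "spectra" stand in for measured FTIR data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
wavenumber = np.linspace(1000, 3000, 200)

def band(center, width):
    return np.exp(-0.5 * ((wavenumber - center) / width) ** 2)

# Two simulated cell "types": lipid-rich vs. protein-rich composition.
type1 = band(2925, 40) + 0.3 * band(1650, 30)
type2 = 0.3 * band(2925, 40) + band(1650, 30)
spectra = np.vstack([type1 + 0.05 * rng.standard_normal(200) for _ in range(20)] +
                    [type2 + 0.05 * rng.standard_normal(200) for _ in range(20)])

Z = linkage(spectra, method="ward")         # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)                               # the two simulated types separate cleanly
```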
Manifold regularized matrix completion for multi-label learning with ADMM.
Liu, Bin; Li, Yingming; Xu, Zenglin
2018-05-01
Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g., natural language processing, bioinformatics, information retrieval and so on. Among various multi-label learning methods, the matrix completion approach has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. With the low-rank assumption on the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite its success, most matrix completion based approaches ignore the smoothness assumption on unlabeled data, i.e., neighboring instances should also share a similar set of labels. Thus they may underexploit the intrinsic structures of the data. In addition, solving the matrix completion problem can be computationally inefficient. To this end, we propose to efficiently solve the multi-label learning problem as an enhanced matrix completion model with manifold regularization, where a graph Laplacian term is used to ensure label smoothness over the data graph. To speed up the convergence of our model, we develop an efficient iterative algorithm, which solves the resulting nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
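For readers unfamiliar with the ADMM machinery, the simplified sketch below solves plain nuclear-norm matrix completion by ADMM with singular value thresholding; it omits the manifold (graph Laplacian) term and the multi-label structure of the paper, and all data are synthetic.

```python
# Simplified ADMM sketch for low-rank matrix completion (nuclear-norm
# minimization without the manifold/graph-Laplacian term of the paper).
import numpy as np

def svt(A, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M, observed, rho=1.0, n_iter=200):
    X = np.zeros_like(M)        # completed matrix
    Z = np.zeros_like(M)        # auxiliary low-rank variable
    U = np.zeros_like(M)        # scaled dual variable
    for _ in range(n_iter):
        # X-update: agree with observed entries, otherwise follow Z - U
        X = Z - U
        X[observed] = M[observed]
        Z = svt(X + U, 1.0 / rho)   # Z-update: singular value thresholding
        U = U + X - Z               # dual update
    return Z

rng = np.random.default_rng(1)
truth = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))  # rank-3 matrix
mask = rng.random((30, 30)) < 0.5                                     # 50% observed
estimate = complete(truth, mask)
print("relative error:", np.linalg.norm(estimate - truth) / np.linalg.norm(truth))
```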
NASA Astrophysics Data System (ADS)
Thompson, John D.; Chakraborty, Dev P.; Szczepura, Katy; Vamvakas, Ioannis; Tootell, Andrew; Manning, David J.; Hogg, Peter
2015-03-01
Purpose: To investigate the dose saving potential of iterative reconstruction (IR) in a computed tomography (CT) examination of the thorax. Materials and Methods: An anthropomorphic chest phantom containing various configurations of simulated lesions (5, 8, 10 and 12mm; +100, -630 and -800 Hounsfield Units, HU) was imaged on a modern CT system over a tube current range (20, 40, 60 and 80mA). Images were reconstructed with (IR) and filtered back projection (FBP). An ATOM 701D (CIRS, Norfolk, VA) dosimetry phantom was used to measure organ dose. Effective dose was calculated. Eleven observers (15.11+/-8.75 years of experience) completed a free response study, localizing lesions in 544 single CT image slices. A modified jackknife alternative free-response receiver operating characteristic (JAFROC) analysis was completed to look for a significant effect of two factors: reconstruction method and tube current. Alpha was set at 0.05 to control the Type I error in this study. Results: For modified JAFROC analysis of reconstruction method there was no statistically significant difference in lesion detection performance between FBP and IR when figures-of-merit were averaged over tube current (F(1,10)=0.08, p = 0.789). For tube current analysis, significant differences were revealed between multiple pairs of tube current settings (F(3,10) = 16.96, p<0.001) when averaged over image reconstruction method. Conclusion: The free-response study suggests that lesion detection can be optimized at 40mA in this phantom model, a measured effective dose of 0.97mSv. In high-contrast regions the diagnostic value of IR, compared to FBP, is less clear.
Dynamics of a gravity-gradient stabilized flexible spacecraft
NASA Technical Reports Server (NTRS)
Meirovitch, L.; Juang, J. N.
1974-01-01
The dynamics of a gravity-gradient stabilized flexible satellite in the neighborhood of a deformed equilibrium configuration are discussed. First, the equilibrium configuration was determined by solving a set of nonlinear differential equations. Then the stability of motion about the deformed equilibrium was tested by means of the Liapunov direct method. The natural frequencies of oscillation of the complete structure were calculated. The analysis is applicable to the RAE/B satellite.
Exponentially Stabilizing Robot Control Laws
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1990-01-01
New class of exponentially stabilizing laws for joint-level control of robotic manipulators introduced. In case of set-point control, approach offers simplicity of proportional/derivative control architecture. In case of tracking control, approach provides several important alternatives to computed-torque method, in terms of computational requirements and convergence. New control laws modified in simple fashion to obtain asymptotically stable adaptive control, when robot model and/or payload mass properties unknown.
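A minimal illustration of the proportional/derivative set-point structure mentioned above, applied to a hypothetical single-link arm with gravity compensation; this is not the paper's exponentially stabilizing law, only the familiar PD baseline it relates to, and all parameters are invented.

```python
# PD set-point control of a single-link arm with gravity compensation (sketch).
# Inertia, damping, gains, and set point are illustrative assumptions.
import numpy as np

I, b, g_term = 1.0, 0.1, 2.0           # inertia, damping, gravity torque coefficient
Kp, Kd = 25.0, 10.0                    # PD gains
q_des = np.pi / 4                      # desired joint angle (set point)

q, dq, dt = 0.0, 0.0, 1e-3
for _ in range(5000):                  # 5 s of simulated time
    tau = Kp * (q_des - q) - Kd * dq + g_term * np.sin(q)   # PD + gravity compensation
    ddq = (tau - b * dq - g_term * np.sin(q)) / I            # link dynamics
    dq += ddq * dt
    q += dq * dt

print(f"final angle {q:.4f} rad vs. set point {q_des:.4f} rad")
```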
Effect of Split-File Digital Workflow on Crown Margin Adaptation
2017-03-30
METHODS: Multiple pilot studies were completed to define a working model with appropriate restoration settings (cement gap 20 µm, extra cement gap 40 ... depressions for standardization. Figure caption (right): Zirconia and e.max restorations had a cement gap (CG) = 20 µm, extra cement gap (ECG) = 40 µm, and distance to ...
Finite-mode analysis by means of intensity information in fractional optical systems.
Alieva, Tatiana; Bastiaans, Martin J
2002-03-01
It is shown how a coherent optical signal that contains only a finite number of Hermite-Gauss modes can be reconstructed from knowledge of its Radon-Wigner transform, which is associated with the intensity distribution in a fractional-Fourier-transform optical system, at only two transversal points. The proposed method can be generalized to any fractional system whose generator transform has a complete orthogonal set of eigenfunctions.
ERIC Educational Resources Information Center
Morgan, Hani
2015-01-01
Online education in K-12 settings has increased considerably in recent years, but there is little research supporting its use at this level. Online courses help students learn at their own pace, select different locations to do their work, and choose flexible times to complete assignments. However, some students learn best in a face-to-face…
Sparsity based target detection for compressive spectral imagery
NASA Astrophysics Data System (ADS)
Boada, David Alberto; Arguello Fuentes, Henry
2016-09-01
Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, in contrast to traditional architectures that need a complete set of measurements of the data cube for image acquisition, thereby alleviating the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image must first be reconstructed by an inverse algorithm, which is also an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in a compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than state-of-the-art methods.
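The core of such a detector is a sparse coding step against an over-complete dictionary. A hedged sketch of that step using orthogonal matching pursuit on synthetic data follows; the dictionary, sparsity level, and signal are assumptions, not the paper's detector.

```python
# Orthogonal matching pursuit (OMP) sketch: sparse coding of a measurement
# against an over-complete dictionary, the core step in sparsity-based detection.
import numpy as np

def omp(D, y, k):
    """Greedy selection of k dictionary atoms that best explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))    # most correlated atom
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 200))
D /= np.linalg.norm(D, axis=0)                         # unit-norm atoms
x_true = np.zeros(200); x_true[[5, 80]] = [1.0, -0.7]  # 2-sparse signal
y = D @ x_true
print(np.nonzero(omp(D, y, 2))[0])                     # typically recovers atoms 5 and 80
```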
de Gier, Camilla; Kirkham, Lea-Ann S.
2015-01-01
Nonhemolytic variants of Haemophilus haemolyticus are difficult to differentiate from Haemophilus influenzae despite a wide difference in pathogenic potential. A previous investigation characterized a challenging set of 60 clinical strains using multiple PCRs for marker genes and described strains that could not be unequivocally identified as either species. We have analyzed the same set of strains by multilocus sequence analysis (MLSA) and near-full-length 16S rRNA gene sequencing. MLSA unambiguously allocated all study strains to either of the two species, while identification by 16S rRNA sequence was inconclusive for three strains. Notably, the two methods yielded conflicting identifications for two strains. Most of the “fuzzy species” strains were identified as H. influenzae that had undergone complete deletion of the fucose operon. Such strains, which are untypeable by the H. influenzae multilocus sequence type (MLST) scheme, have sporadically been reported and predominantly belong to a single branch of H. influenzae MLSA phylogenetic group II. We also found evidence of interspecies recombination between H. influenzae and H. haemolyticus within the 16S rRNA genes. Establishing an accurate method for rapid and inexpensive identification of H. influenzae is important for disease surveillance and treatment. PMID:26378279
The use of complete sets of orthogonal operators in spectroscopic studies
NASA Astrophysics Data System (ADS)
Raassen, A. J. J.; Uylings, P. H. M.
1996-01-01
Complete sets of orthogonal operators are used to calculate eigenvalues and eigenvector compositions in complex spectra. The latter are used to transform the LS-transition matrix into realistic intermediate coupling transition probabilities. Calculated transition probabilities for some close lying levels in Ni V and Fe III illustrate the power of the complete orthogonal operator approach.
Stockbridge, Erica L; Miller, Thaddeus L; Carlson, Erin K; Ho, Christine
Targeted identification and treatment of people with latent tuberculosis infection (LTBI) are key components of the US tuberculosis elimination strategy. Because of recent policy changes, some LTBI treatment may shift from public health departments to the private sector. To (1) develop methodology to estimate initiation and completion of treatment with isoniazid for LTBI using claims data, and (2) estimate treatment completion rates for isoniazid regimens from commercial insurance claims. Medical and pharmacy claims data representing insurance-paid services rendered and prescriptions filled between January 2011 and March 2015 were analyzed. Four million commercially insured individuals 0 to 64 years of age. Six-month and 9-month treatment completion rates for isoniazid LTBI regimens. There was an annual isoniazid LTBI treatment initiation rate of 12.5/100 000 insured persons. Of 1074 unique courses of treatment with isoniazid for which treatment completion could be assessed, almost half (46.3%; confidence interval, 43.3-49.3) completed 6 or more months of therapy. Of those, approximately half (48.9%; confidence interval, 44.5-53.3) completed 9 months or more. Claims data can be used to identify and evaluate LTBI treatment with isoniazid occurring in the commercial sector. Completion rates were in the range of those found in public health settings. These findings suggest that the commercial sector may be a valuable adjunct to more traditional venues for tuberculosis prevention. In addition, these newly developed claims-based methods offer a means to gain important insights and open new avenues to monitor, evaluate, and coordinate tuberculosis prevention.
Sánchez, Ariel G.; Grieb, Jan Niklas; Salazar-Albornoz, Salvador; ...
2016-09-30
The cosmological information contained in anisotropic galaxy clustering measurements can often be compressed into a small number of parameters whose posterior distribution is well described by a Gaussian. Here, we present a general methodology to combine these estimates into a single set of consensus constraints that encode the total information of the individual measurements, taking into account the full covariance between the different methods. We also illustrate this technique by applying it to combine the results obtained from different clustering analyses, including measurements of the signature of baryon acoustic oscillations and redshift-space distortions, based on a set of mock catalogues of the final SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). Our results show that the region of the parameter space allowed by the consensus constraints is smaller than that of the individual methods, highlighting the importance of performing multiple analyses on galaxy surveys even when the measurements are highly correlated. Our paper is part of a set that analyses the final galaxy clustering data set from BOSS. The methodology presented here is used in Alam et al. to produce the final cosmological constraints from BOSS.
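When each method's posterior is well approximated by a Gaussian, consensus constraints of this kind reduce to generalized least squares on the stacked estimates with their full joint covariance. A toy sketch with invented numbers, not BOSS values, is given below.

```python
# Combining correlated Gaussian parameter estimates into a consensus estimate
# via generalized least squares. All numbers are illustrative placeholders.
import numpy as np

# Two methods each measure the same 2 parameters; stack into one data vector.
d = np.concatenate([[0.31, 0.68],      # method 1 estimate
                    [0.29, 0.70]])     # method 2 estimate

# Full 4x4 covariance, including cross-covariance between the two methods.
C = np.array([[0.0004, 0.0000, 0.0002, 0.0000],
              [0.0000, 0.0009, 0.0000, 0.0005],
              [0.0002, 0.0000, 0.0004, 0.0000],
              [0.0000, 0.0005, 0.0000, 0.0009]])

A = np.vstack([np.eye(2), np.eye(2)])              # both blocks measure the same parameters
Cinv = np.linalg.inv(C)
cov_cons = np.linalg.inv(A.T @ Cinv @ A)           # consensus covariance
x_cons = cov_cons @ A.T @ Cinv @ d                 # consensus estimate
print(x_cons, np.sqrt(np.diag(cov_cons)))          # tighter than either method alone
```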
Fragon: rapid high-resolution structure determination from ideal protein fragments.
Jenkins, Huw T
2018-03-01
Correctly positioning ideal protein fragments by molecular replacement presents an attractive method for obtaining preliminary phases when no template structure for molecular replacement is available. This has been exploited in several existing pipelines. This paper presents a new pipeline, named Fragon, in which fragments (ideal α-helices or β-strands) are placed using Phaser and the phases calculated from these coordinates are then improved by the density-modification methods provided by ACORN. The reliable scoring algorithm provided by ACORN identifies success. In these cases, the resulting phases are usually of sufficient quality to enable automated model building of the entire structure. Fragon was evaluated against two test sets comprising mixed α/β folds and all-β folds at resolutions between 1.0 and 1.7 Å. Success rates of 61% for the mixed α/β test set and 30% for the all-β test set were achieved. In almost 70% of successful runs, fragment placement and density modification took less than 30 min on relatively modest four-core desktop computers. In all successful runs the best set of phases enabled automated model building with ARP/wARP to complete the structure.
Mukaka, Mavuto; White, Sarah A; Terlouw, Dianne J; Mwapasa, Victor; Kalilani-Phiri, Linda; Faragher, E Brian
2016-07-22
Missing outcomes can seriously impair the ability to make correct inferences from randomized controlled trials (RCTs). Complete case (CC) analysis is commonly used, but it reduces sample size and is perceived to lead to reduced statistical efficiency of estimates while increasing the potential for bias. As multiple imputation (MI) methods preserve sample size, they are generally viewed as the preferred analytical approach. We examined this assumption, comparing the performance of CC and MI methods in determining risk difference (RD) estimates in the presence of missing binary outcomes. We conducted simulation studies of RCTs with one primary follow-up endpoint, using 5000 simulated data sets and 50 imputations, at different underlying levels of RD (3-25%) and missing outcomes (5-30%). For missing at random (MAR) or missing completely at random (MCAR) outcomes, CC method estimates generally remained unbiased and achieved precision similar to or better than MI methods, and high statistical coverage. Missing not at random (MNAR) scenarios yielded invalid inferences with both methods. Effect size estimate bias was reduced in MI methods by always including group membership even if this was unrelated to missingness. Surprisingly, under MAR and MCAR conditions in the assessed scenarios, MI offered no statistical advantage over CC methods. While MI must inherently accompany CC methods for intention-to-treat analyses, these findings endorse CC methods for per protocol risk difference analyses in these conditions. These findings provide an argument for the use of the CC approach to always complement MI analyses, with the usual caveat that the validity of the mechanism for missingness be thoroughly discussed. More importantly, researchers should strive to collect as much data as possible.
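A toy simulation of the MCAR case conveys the flavor of the comparison; this is not the paper's simulation code, and sample sizes, event rates, and missingness probability are assumptions. Under MCAR, the complete-case risk difference estimate stays essentially unbiased.

```python
# Toy MCAR simulation: complete-case risk difference remains essentially unbiased.
# All design parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_per_arm, true_rd, n_sims = 500, 0.10, 2000
p_control, p_missing = 0.20, 0.25
estimates = []
for _ in range(n_sims):
    y_ctl = rng.random(n_per_arm) < p_control             # control-arm outcomes
    y_trt = rng.random(n_per_arm) < p_control + true_rd   # treatment-arm outcomes
    keep_ctl = rng.random(n_per_arm) > p_missing           # MCAR missingness
    keep_trt = rng.random(n_per_arm) > p_missing
    estimates.append(y_trt[keep_trt].mean() - y_ctl[keep_ctl].mean())

print("true RD:", true_rd, "mean CC estimate:", round(float(np.mean(estimates)), 4))
```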
The Effect of Treating Institution on Outcomes in Head and Neck Cancer
Lassig, Amy Anne D.; Joseph, Anne M.; Lindgren, Bruce R.; Fernandes, Patricia; Cooper, Sarah; Schotzko, Chelsea; Khariwala, Samir; Reynolds, Margaret; Yueh, Bevan
2017-01-01
Objective Factors leading patients with head and neck cancer (HNCA) to seek radiation or chemoradiation in an academic center versus the community are incompletely understood, as are the effects of site of treatment on treatment completion and survival. Study Design Historical cohort study. Setting Tertiary academic center, community practices. Methods A historical cohort study was completed of patients with mucosal HNCA identified by International Classification of Disease, Ninth Revision (ICD-9) codes receiving consultation at the authors’ institution from 2003 to 2008. Patients who received primary and adjuvant radiation at an academic center or in the community were included. The authors compared treatment completion rates and performed univariate and multivariate analyses of treatment outcomes. Results Of 388 patients, 210 completed treatment at an academic center and 145 at a community center (33 excluded, location unknown). Patients with HNCA undergoing radiation at an academic site had more advanced disease (P = .024) and were more likely to receive concurrent chemotherapy. Academic hospitals had a higher percentage of noncurrent smokers, higher median income, and higher percentage of oropharyngeal tumors. There was no significant difference in the rate of planned treatment completion between community and academic centers (93.7% vs 94.7%, P > .81) or rate of treatment breaks (22.4% vs 28.4%, P > .28). On Kaplan-Meier analysis, the 5-year survival rate was 53.2% (95% confidence interval [CI], 45.3%–61.1%) for academic centers and 32.8% (95% CI, 22.0%–43.6%) for community hospitals (P <.001). Conclusion In this cohort, although treatment completion and treatment breaks were similar between academic and community centers, survival rates were higher in patients treated in an academic setting. PMID:22875780
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sato, Takeo; Ozawa, Heita; Hatate, Kazuhiko
Purpose: We aimed to validate our hypothesis that a preoperative chemoradiotherapy regimen with S-1 plus irinotecan is feasible, safe, and active for the management of locally advanced rectal cancer in a single-arm Phase II setting. Methods and Materials: Eligible patients had previously untreated, locally advanced rectal adenocarcinoma. Radiotherapy was administered in fractions of 1.8 Gy/d for 25 days. S-1 was administered orally in a fixed daily dose of 80 mg/m² on Days 1 to 5, 8 to 12, 22 to 26, and 29 to 33. Irinotecan (80 mg/m²) was infused on Days 1, 8, 22, and 29. Four or more weeks after the completion of the treatment, total mesorectal excision with lateral lymph node dissection was performed. The primary endpoint was the rate of completing treatment in terms of feasibility. The secondary endpoints were the response rate and safety. Results: We enrolled 43 men and 24 women in the study. The number of patients who completed treatment was 58 (86.6%). Overall, 46 patients (68.7%) responded to treatment and 24 (34.7%) had a complete histopathologic response. Three patients had Grade 3 leukopenia, and another three patients had Grade 3 neutropenia. Diarrhea was the most common type of nonhematologic toxicity: 3 patients had Grade 3 diarrhea. Conclusions: A preoperative regimen of S-1, irinotecan, and radiotherapy to the rectum was feasible, and it appeared safe and effective in this nonrandomized Phase II setting. It exhibited a low incidence of adverse events, a high rate of completion of treatment, and an extremely high rate of pathologic complete response.
Optical properties (bidirectional reflectance distribution function) of shot fabric.
Lu, R; Koenderink, J J; Kappers, A M
2000-11-01
To study the optical properties of materials, one needs a complete set of the angular distribution functions of surface scattering from the materials. Here we present a convenient method for collecting a large set of bidirectional reflectance distribution function (BRDF) samples in the hemispherical scattering space. Material samples are wrapped around a right-circular cylinder and irradiated by a parallel light source, and the scattered radiance is collected by a digital camera. We tilted the cylinder around its center to collect the BRDF samples outside the plane of incidence. This method can be used with materials that have isotropic and anisotropic scattering properties. We demonstrate this method in a detailed investigation of shot fabrics. The warps and the fillings of shot fabrics are dyed different colors so that the fabric appears to change color at different viewing angles. These color-changing characteristics are found to be related to the physical and geometrical structure of shot fabric. Our study reveals that the color-changing property of shot fabrics is due mainly to an occlusion effect.
Combining point context and dynamic time warping for online gesture recognition
NASA Astrophysics Data System (ADS)
Mao, Xia; Li, Chen
2017-05-01
Previous gesture recognition methods usually focused on recognizing gestures after the entire gesture sequences were obtained. However, in many practical applications, a system has to identify gestures before they end to give instant feedback. We present an online gesture recognition approach that can realize early recognition of unfinished gestures with low latency. First, a curvature buffer-based point context (CBPC) descriptor is proposed to extract the shape feature of a gesture trajectory. The CBPC descriptor is a complete descriptor with a simple computation, and thus has its superiority in online scenarios. Then, we introduce an online windowed dynamic time warping algorithm to realize online matching between the ongoing gesture and the template gestures. In the algorithm, computational complexity is effectively decreased by adding a sliding window to the accumulative distance matrix. Lastly, the experiments are conducted on the Australian sign language data set and the Kinect hand gesture (KHG) data set. Results show that the proposed method outperforms other state-of-the-art methods especially when gesture information is incomplete.
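The matching step builds on dynamic time warping. A minimal sketch of classic DTW between two 1-D sequences follows; the paper's online windowed variant and the CBPC descriptor are not reproduced here, and the sequences are synthetic.

```python
# Classic dynamic time warping (DTW) distance between two 1-D sequences (sketch).
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, 2 * np.pi, 40))
gesture = np.sin(np.linspace(0, 2 * np.pi, 55))   # same shape, different speed
print(dtw(template, gesture))                     # small despite the length mismatch
```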
Real-Time Data Collection Using Text Messaging in a Primary Care Clinic.
Rai, Manisha; Moniz, Michelle H; Blaszczak, Julie; Richardson, Caroline R; Chang, Tammy
2017-12-01
The use of text messaging is nearly ubiquitous and represents a promising method of collecting data from diverse populations. The purpose of this study was to assess the feasibility and acceptability of text message surveys in a clinical setting and to describe key lessons to minimize attrition. We obtained a convenience sample of individuals who entered the waiting room of a low-income, primary care clinic. Participants were asked to answer between 17 and 30 survey questions on a variety of health-related topics, including both open- and closed-ended questions. Descriptive statistics were used to characterize the participants and determine the response rates. Bivariate analyses were used to identify predictors of incomplete surveys. Our convenience sample consisted of 461 individuals. Of those who attempted the survey, 80% (370/461) completed it in full. The mean age of respondents was 35.4 years (standard deviation = 12.4). Respondents were predominantly non-Hispanic black (42%) or non-Hispanic white (41%), female (75%), and with at least some college education (70%). Of those who completed the survey, 84% (312/370) reported willingness to do another text message survey. Those with incomplete surveys answered a median of nine questions before stopping. Smartphone users were less likely to leave the survey incomplete compared with non-smartphone users (p = 0.004). Text-message surveys are a feasible and acceptable method to collect real-time data among low-income, clinic-based populations. Offering participants a setting for immediate survey completion, minimizing survey length, simplifying questions, and allowing "free text" responses for all questions may optimize response rates.
Reveal, A General Reverse Engineering Algorithm for Inference of Genetic Network Architectures
NASA Technical Reports Server (NTRS)
Liang, Shoudan; Fuhrman, Stefanie; Somogyi, Roland
1998-01-01
Given the imminent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10^15) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields.
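A small sketch of the mutual-information test at the heart of this approach, on an invented three-gene synchronous Boolean network: an input set determines a target gene exactly when the mutual information between the input states and the gene's next state equals the output entropy.

```python
# REVEAL-style inference sketch: find which inputs determine a target gene by
# checking when MI(inputs; next state) equals the output entropy.
from itertools import product, combinations
from collections import Counter
import math

def entropy(values):
    counts = Counter(values)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Hypothetical 3-gene network; gene 2's next state is XOR of genes 0 and 1.
def step(state):
    return (state[1], state[2], state[0] ^ state[1])

pairs = [(s, step(s)) for s in product([0, 1], repeat=3)]   # full transition table

target = 2
h_out = entropy([nxt[target] for _, nxt in pairs])
for k in (1, 2):
    for inputs in combinations(range(3), k):
        joint = [tuple(cur[i] for i in inputs) + (nxt[target],) for cur, nxt in pairs]
        h_in = entropy([j[:-1] for j in joint])
        mi = h_in + h_out - entropy(joint)
        if abs(mi - h_out) < 1e-9:
            print(f"gene {target} is determined by inputs {inputs}")   # (0, 1)
```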
Yip, Kevin Y.; Gerstein, Mark
2009-01-01
Motivation: An important problem in systems biology is reconstructing complete networks of interactions between biological objects by extrapolating from a few known interactions as examples. While there are many computational techniques proposed for this network reconstruction task, their accuracy is consistently limited by the small number of high-confidence examples, and the uneven distribution of these examples across the potential interaction space, with some objects having many known interactions and others few. Results: To address this issue, we propose two computational methods based on the concept of training set expansion. They work particularly effectively in conjunction with kernel approaches, which are a popular class of approaches for fusing together many disparate types of features. Both our methods are based on semi-supervised learning and involve augmenting the limited number of gold-standard training instances with carefully chosen and highly confident auxiliary examples. The first method, prediction propagation, propagates highly confident predictions of one local model to another as the auxiliary examples, thus learning from information-rich regions of the training network to help predict the information-poor regions. The second method, kernel initialization, takes the most similar and most dissimilar objects of each object in a global kernel as the auxiliary examples. Using several sets of experimentally verified protein–protein interactions from yeast, we show that training set expansion gives a measurable performance gain over a number of representative, state-of-the-art network reconstruction methods, and it can correctly identify some interactions that are ranked low by other methods due to the lack of training examples of the involved proteins. Contact: mark.gerstein@yale.edu Availability: The datasets and additional materials can be found at http://networks.gersteinlab.org/tse. PMID:19015141
Microarray missing data imputation based on a set theoretic framework and biological knowledge.
Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong
2006-01-01
Gene expressions measured using microarrays usually suffer from the missing value problem. However, in many data analysis methods, a complete data matrix is required. Although existing missing value imputation algorithms have shown good performance in dealing with missing values, they also have their limitations. For example, some algorithms have good performance only when strong local correlation exists in the data, while others provide the best estimate when the data are dominated by global structure. In addition, these algorithms do not take into account any biological constraint in their imputation. In this paper, we propose a set theoretic framework based on projection onto convex sets (POCS) for missing data imputation. POCS allows us to incorporate different types of a priori knowledge about missing values into the estimation process. The main idea of POCS is to formulate every piece of prior knowledge into a corresponding convex set and then use a convergence-guaranteed iterative procedure to obtain a solution in the intersection of all these sets. In this work, we design several convex sets, taking into consideration the biological characteristics of the data: the first set mainly exploits the local correlation structure among genes in microarray data, while the second set captures the global correlation structure among arrays. The third set (actually a series of sets) exploits the biological phenomenon of synchronization loss in microarray experiments. In cyclic systems, synchronization loss is a common phenomenon and we construct a series of sets based on this phenomenon for our POCS imputation algorithm. Experiments show that our algorithm can achieve a significant reduction of error compared to the KNNimpute, SVDimpute and LSimpute methods.
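The POCS iteration itself is an alternating sequence of projections. The sketch below alternates between a data-consistency set and a known subspace on a synthetic vector; it illustrates the mechanism only and uses none of the paper's gene-expression-specific sets.

```python
# Projection onto convex sets (POCS) sketch: alternate projections onto
# (1) the set of vectors matching observed entries and (2) a known subspace,
# converging to a point in their intersection. The data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
B = np.linalg.qr(rng.standard_normal((20, 3)))[0]    # orthonormal subspace basis
x_true = B @ rng.standard_normal(3)                  # true vector lies in the subspace
observed = rng.random(20) < 0.6                      # indices with known values

x = np.zeros(20)
for _ in range(200):
    x[observed] = x_true[observed]                   # project onto data-consistency set
    x = B @ (B.T @ x)                                # project onto the subspace
print("recovery error:", np.linalg.norm(x - x_true))
```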
Costing Alternative Birth Settings for Women at Low Risk of Complications: A Systematic Review
Scarf, Vanessa; Catling, Christine; Viney, Rosalie; Homer, Caroline
2016-01-01
Background There is demand from women for alternatives to giving birth in a standard hospital setting; however, access to these services is limited. This systematic review examines the literature relating to economic evaluations of birth settings for women at low risk of complications. Methods The following electronic databases were searched to identify economic evaluations of different birth settings: MEDLINE, CINAHL, EconLit, Business Source Complete and Maternity and Infant Care. Relevant English-language publications between 1995 and 2015 were chosen using keywords and MeSH terms. Inclusion criteria included studies focussing on the comparison of birth settings. Data were extracted with respect to study design, perspective, PICO principles, and resource use and cost data. Results Eleven studies were included from Australia, Canada, the Netherlands, Norway, the USA, and the UK. Four studies compared costs between homebirth and the hospital setting and the remaining seven focussed on the cost of birth centre care versus the hospital setting. Six studies used a cost-effectiveness analysis and the remaining five studies used cost analysis and cost comparison methods. Eight of the 11 studies found a cost saving in the alternative settings. Two found no difference in the cost of the alternative settings and one found an increase in the cost of birth centre care. Conclusions There are few studies that compare the cost of birth settings. The variation in the results may be attributable to the cost data collection processes, differences in health systems and differences in which costs were included. A better understanding of the cost of birth settings is needed to inform policy makers and service providers. PMID:26891444
Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R
2014-01-01
We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
Incompleteness of Bluetooth protocol conformance test cases
NASA Astrophysics Data System (ADS)
Wu, Peng; Gao, Qiang
2001-10-01
This paper describes a formal method to verify the completeness of conformance testing, in which not only the Implementation Under Test (IUT) but also the conformance tester is formalized in SDL, so that conformance testing can be performed in a simulator provided with a CASE tool. The protocol set considered is Bluetooth, an open wireless communication technology. Our results show that the Bluetooth conformance test specification is not complete, in that it has only limited coverage and many important capabilities defined in the Bluetooth core specification are not tested. We also give a detailed report on the missing test cases against the Bluetooth core specification, and provide a guide for further test case generation in the future.
Lee, Myung Kyung
2018-01-01
Objectives This study examined the effect of flipped learning in comparison to traditional learning in a surgical nursing practicum. Methods The subjects of this study were 102 nursing students in their third year of university who were scheduled to complete a clinical nursing practicum in an operating room or surgical unit. Participants were randomly assigned to either a flipped learning group (n = 51) or a traditional learning group (n = 51) for the 1-week, 45-hour clinical nursing practicum. The flipped-learning group completed independent e-learning lessons on surgical nursing and received a brief orientation prior to the commencement of the practicum, while the traditional-learning group received a face-to-face orientation and on-site instruction. After the completion of the practicum, both groups completed a case study and a conference. The student's self-efficacy, self-leadership, and problem-solving skills in clinical practice were measured both before and after the one-week surgical nursing practicum. Results Participants' independent goal setting and evaluation of beliefs and assumptions for the subscales of self-leadership and problem-solving skills were compared for the flipped learning group and the traditional learning group. The results showed greater improvement on these indicators for the flipped learning group in comparison to the traditional learning group. Conclusions The flipped learning method might offer more effective e-learning opportunities in terms of self-leadership and problem-solving than the traditional learning method in surgical nursing practicums. PMID:29503755
The Returns to Completion or Partial Completion of a Qualification in the Trades. Research Report
ERIC Educational Resources Information Center
Lu, Tham
2015-01-01
Many students do not complete full qualifications in the vocational education and training (VET) system because their intention is to obtain only the particular skills they require. This can be achieved through the acquisition of skill sets; these enable flexibility in training to quickly respond to changes in the labour market. Skill sets may…
Computing smallest intervention strategies for multiple metabolic networks in a boolean model.
Lu, Wei; Tamura, Takeyuki; Song, Jiangning; Akutsu, Tatsuya
2015-02-01
This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies for external perturbations advance in the near future, it may be important to develop methods for computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available; however, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, known respectively as harmful and beneficial bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online.
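The Boolean producibility test that underlies the MKMN condition can be sketched as forward propagation from the source compounds; the toy network, compound names, and knockout below are invented, and the ILP search over knockouts is not shown.

```python
# Boolean producibility test (sketch): forward propagation from source compounds
# determines whether targets are producible after a given reaction knockout.
def producible(reactions, sources, targets, knockout=frozenset()):
    """reactions: {name: (substrate set, product set)} in a Boolean model."""
    available = set(sources)
    changed = True
    while changed:
        changed = False
        for name, (subs, prods) in reactions.items():
            if name not in knockout and subs <= available and not prods <= available:
                available |= prods
                changed = True
    return set(targets) <= available

net = {"r1": ({"A"}, {"B"}), "r2": ({"B"}, {"C"}), "r3": ({"A"}, {"C"})}
print(producible(net, {"A"}, {"C"}))                         # True
print(producible(net, {"A"}, {"C"}, knockout={"r2", "r3"}))  # False: C no longer producible
```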
Completeness of breast cancer operative reports in a community care setting.
Eng, Jordan Lang; Baliski, Christopher Ronald; McGahan, Colleen; Cai, Eric
2017-10-01
The narrative operative report represents the traditional means by which breast cancer surgery has been documented. Previous work has established that omissions occur in narrative operative reports produced in an academic setting. The goal of this study was to determine the completeness of breast cancer narrative operative reports produced in a community care setting and to explore the effect of a surgeon's case volume and years in practice on the completeness of these reports. A standardized retrospective review of operative reports produced over a consecutive 2 year period was performed using a set of procedure-specific elements identified through a review of the relevant literature and work done locally. 772 operative reports were reviewed. 45% of all elements were completely documented. A small positive trend was observed between case volume and completeness while a small negative trend was observed between years in practice and completeness. The dictated narrative report inadequately documents breast cancer surgery irrespective of the recording surgeon's volume or experience. An intervention, such as the implementation of synoptic reporting, should be considered in an effort to maximize the utility of the breast cancer operative report. Copyright © 2017. Published by Elsevier Ltd.
Why does Japan use the probability method to set design flood?
NASA Astrophysics Data System (ADS)
Nakamura, S.; Oki, T.
2015-12-01
A design flood is a hypothetical flood used to make flood prevention plans. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: the Tone River, the biggest river in Japan, is designed for a 1-in-200-year flood, the Shinano River for a 1-in-150-year flood, and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set the design flood varies among countries. The probability method is also used in the Netherlands, but there the base data are water levels or discharges and the probability is 1 in 1250 years (in the freshwater section). On the other hand, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions: why does the method vary among countries, and why does Japan use the probability method? The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were introduced by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, design floods were set based on the historical maximum method. The historical maximum method was used until World War 2, but afterwards it was replaced by the probability method because of its limitations under the specific socio-economic situation: (1) budget limitations due to the war and the GHQ occupation, and (2) historical floods (the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, the Ione typhoon in 1948, and so on) struck Japan, broke the records of historical maximum discharge in the main rivers, and made the flood prevention projects difficult to complete. Japanese hydrologists then imported hydrological probability statistics from the West to take the socio-economic situation into account in the design flood, and applied them to Japanese rivers in 1958. The probability method was adopted in Japan to adapt to the specific socio-economic and natural situation during the confusion after the war.
Krawczel, P D; Klaiber, L M; Thibeau, S S; Dann, H M
2012-08-01
Assessing feeding behavior is important in understanding the effects of nutrition and management on the well-being of dairy cows. Historically, collection of these data from cows fed with a Calan Broadbent Feeding System (American Calan Inc., Northwood, NH) required the labor-intensive practices of direct observation or video review. The objective of this study was to evaluate the agreement between the output of a HOBO change-of-state data logger (Onset Computer Corp., Bourne, MA), mounted to the door shell and latch plate, and video data summarized with continuous sampling. Data (number of feed bin visits per day and feeding time in minutes per day) were recorded with both methods from 26 lactating cows and 10 nonlactating cows for 3 d per cow (n=108). The agreement of the data logger and video methods was evaluated using the REG procedure of SAS to compare the mean response of the methods against the difference between the methods. The maximum allowable difference (MAD) was set at ±3 for bin visits and ±20 min for feeding time. Ranges for feed bin visits (2 to 140 per d) and feeding time (28 to 267 min/d) were established from video data. Using the complete data set, agreement was partially established between the data logger and video methods for feed bin visits, but not established for feeding time. The complete data set generated by the data logger was screened to remove visits of a duration ≤3 s, reflecting a cow unable to enter a feed bin (representing 7% of all data) and ≥5,400 s, reflecting a failure of the data logger to align properly with its corresponding magnetic field (representing <1% of all data). Using the resulting screened data set, agreement was established for feed bin visits and feeding time. For bin visits, 4% of the data was beyond the MAD. For feeding time, 3% of the data was beyond the MAD and 74% of the data was ±1 min. The insignificant P-value, low coefficient of determination, and concentration of the data within the MAD indicate the agreement of the change-of-state data logger and video data. This suggests the usage of a change-of-state data logger to assess the feeding behavior of cows feeding from a Calan Broadbent Feeding System is appropriate. Use of the screening criteria for data analysis is recommended. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
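The screening rule described above amounts to a simple duration filter; a sketch with illustrative visit durations in seconds is shown below.

```python
# Screening rule from the study: drop logged feed-bin visits of duration <= 3 s
# (failed entries) or >= 5,400 s (logger misalignment). Durations are illustrative.
visits = [2, 45, 310, 7200, 1, 600]           # visit durations in seconds
screened = [d for d in visits if 3 < d < 5400]
print(screened)                               # [45, 310, 600]
```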
Gradient augmented level set method for phase change simulations
NASA Astrophysics Data System (ADS)
Anumolu, Lakshman; Trujillo, Mario F.
2018-01-01
A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
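As a bare-bones illustration of interface transport with a standard (non-gradient-augmented) level set, the 1-D sketch below advects the zero crossing of a signed-distance function with a constant velocity using first-order upwinding; the grid, velocity, and time step are arbitrary choices, not values from the paper.

```python
# Minimal 1-D level-set interface transport (standard level set, first-order upwind).
import numpy as np

n, L, u, dt = 400, 1.0, 0.25, 5e-4
x = np.linspace(0.0, L, n)
phi = x - 0.3                        # signed distance; interface initially at x = 0.3

for _ in range(2000):                # advect for t = 1.0
    dphi = np.zeros_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / (x[1] - x[0])   # upwind difference for u > 0
    phi = phi - u * dt * dphi

interface = x[np.argmin(np.abs(phi))]
print(f"interface near x = {interface:.3f} (expected about 0.55)")
```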
Kitamura, Aya; Kawai, Yasuhiko
2015-01-01
Laminated alginate impression for edentulous is simple and time efficient compared to border molding technique. The purpose of this study was to examine clinical applicability of the laminated alginate impression, by measuring the effects of different Water/Powder (W/P) and mixing methods, and different bonding methods in the secondary impression of alginate impression. Three W/P: manufacturer-designated mixing water amount (standard), 1.5-fold (1.5×) and 1.75-fold (1.75×) water amount were mixed by manual and automatic mixing methods. Initial and complete setting time, permanent and elastic deformation, and consistency of the secondary impression were investigated (n=10). Additionally, tensile bond strength between the primary and secondary impression were measured in the following surface treatment; air blow only (A), surface baking (B), and alginate impression material bonding agent (ALGI-BOND: AB) (n=12). Initial setting times significantly shortened with automatic mixing for all W/P (p<0.05). The permanent deformation decreased and elastic deformation increased as high W/P, regardless of the mixing method. Elastic deformation significantly reduced in 1.5× and 1.75× with automatic mixing (p<0.05). All of these properties resulted within JIS standards. For all W/P, AB showed a significantly high bonding strength as compared to A and B (p<0.01). The increase of mixing water, 1.5× and 1.75×, resulted within JIS standards in setting time, suggesting its applicability in clinical setting. The use of automatic mixing device decreased elastic strain and shortening of the curing time. For the secondary impression application of adhesives on the primary impression gives secure adhesion. Copyright © 2014 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
Photometric theory for wide-angle phenomena
NASA Technical Reports Server (NTRS)
Usher, Peter D.
1990-01-01
An examination is made of the problem posed by wide-angle photographic photometry, in order to extract a photometric-morphological history of Comet P/Halley. Photometric solutions are presently achieved over wide angles through a generalization of an assumption-free moment-sum method. Standard stars in the field allow a complete solution to be obtained for extinction, sky brightness, and the characteristic curve. After formulating Newton's method for the solution of the general nonlinear least-squares problem, an implementation is undertaken for a canonical data set. Attention is given to the problem of random and systematic photometric errors.
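The Newton/Gauss-Newton iteration for a general nonlinear least-squares problem can be sketched on a generic model; the exponential-decay model and parameters below are invented, not the photometric solution of the paper.

```python
# Gauss-Newton iteration for a generic nonlinear least-squares fit (sketch).
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 4.0, 50)
a_true, b_true = 2.5, 0.8
y = a_true * np.exp(-b_true * t) + 0.02 * rng.standard_normal(t.size)

a, b = 2.0, 0.5                      # initial guess
for _ in range(20):
    model = a * np.exp(-b * t)
    r = y - model                    # residuals
    J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])  # Jacobian of the model
    delta = np.linalg.lstsq(J, r, rcond=None)[0]                    # Gauss-Newton step
    a, b = a + delta[0], b + delta[1]

print(f"fitted a = {a:.3f}, b = {b:.3f} (true 2.5, 0.8)")
```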
Optimal investments in digital communication systems in primary exchange area
NASA Astrophysics Data System (ADS)
Garcia, R.; Hornung, R.
1980-11-01
Integer linear optimization theory, following Gomory's method, was applied to model planning of telecommunication networks in which all future investments are made in digital systems only. The integer decision variables are the numbers of digital systems that can be installed on cable or radio-relay links. The objective function is the total cost of extending the existing line capacity to meet the demand between primary and local exchanges. Traffic volume constraints and flow conservation in transit nodes complete the model. Results indicating computing time and method efficiency are illustrated by an example.
Chattopadhyay, Sudip; Chaudhuri, Rajat K; Freed, Karl F
2011-04-28
The improved virtual orbital-complete active space configuration interaction (IVO-CASCI) method enables an economical and reasonably accurate treatment of static correlation in systems with significant multireference character, even when using a moderate basis set. This IVO-CASCI method supplants the computationally more demanding complete active space self-consistent field (CASSCF) method by producing comparable accuracy with diminished computational effort because the IVO-CASCI approach does not require additional iterations beyond an initial SCF calculation, nor does it encounter convergence difficulties or multiple solutions that may be found in CASSCF calculations. Our IVO-CASCI analytical gradient approach is applied to compute the equilibrium geometry for the ground and lowest excited state(s) of the theoretically very challenging 2,6-pyridyne, 1,2,3-tridehydrobenzene and 1,3,5-tridehydrobenzene anionic systems for which experiments are lacking, accurate quantum calculations are almost completely absent, and commonly used calculations based on single reference configurations fail to provide reasonable results. Hence, the computational complexity provides an excellent test for the efficacy of multireference methods. The present work clearly illustrates that the IVO-CASCI analytical gradient method provides a good description of the complicated electronic quasi-degeneracies during the geometry optimization process for the radicaloid anions. The IVO-CASCI treatment produces almost identical geometries as the CASSCF calculations (performed for this study) at a fraction of the computational labor. Adiabatic energy gaps to low lying excited states likewise emerge from the IVO-CASCI and CASSCF methods as very similar. We also provide harmonic vibrational frequencies to demonstrate the stability of the computed geometries.
An ontology-based method for secondary use of electronic dental record data.
Schleyer, Titus Kl; Ruttenberg, Alan; Duncan, William; Haendel, Melissa; Torniai, Carlo; Acharya, Amit; Song, Mei; Thyvalikakath, Thankam P; Liu, Kaihong; Hernandez, Pedro
2013-01-01
A key question for healthcare is how to operationalize the vision of the Learning Healthcare System, in which electronic health record data become a continuous information source for quality assurance and research. This project presents an initial, ontology-based, method for secondary use of electronic dental record (EDR) data. We defined a set of dental clinical research questions; constructed the Oral Health and Disease Ontology (OHD); analyzed data from a commercial EDR database; and created a knowledge base, with the OHD used to represent clinical data about 4,500 patients from a single dental practice. Currently, the OHD includes 213 classes and reuses 1,658 classes from other ontologies. We have developed an initial set of SPARQL queries to allow extraction of data about patients, teeth, surfaces, restorations and findings. Further work will establish a complete, open and reproducible workflow for extracting and aggregating data from a variety of EDRs for research and quality assurance.
Asada, Naoya; Fedorov, Dmitri G.; Kitaura, Kazuo; Nakanishi, Isao; Merz, Kenneth M.
2012-01-01
We propose an approach based on the overlapping multicenter ONIOM to evaluate intermolecular interaction energies in large systems and demonstrate its accuracy on several representative systems in the complete basis set limit at the MP2 and CCSD(T) level of theory. In the application to the intermolecular interaction energy between insulin dimer and 4′-hydroxyacetanilide at the MP2/CBS level, we use the fragment molecular orbital method for the calculation of the entire complex assigned to the lowest layer in three-layer ONIOM. The developed method is shown to be efficient and accurate in the evaluation of the protein-ligand interaction energies. PMID:23050059
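As a rough illustration of the subtractive extrapolation on which multilayer ONIOM-type schemes are built, the sketch below combines low-level and high-level energies for model regions; the numbers are placeholders, and the overlap corrections of the full overlapping multicenter scheme are omitted.

```python
# Hedged sketch of a subtractive ONIOM-style energy combination: the low-level
# energy of the full system is corrected by (high-level minus low-level) energies
# of the model regions. Values are placeholders in hartree, not from the paper.

def oniom_energy(e_low_real, corrections):
    """corrections: list of (e_high_model, e_low_model) pairs, one per model centre."""
    return e_low_real + sum(e_high - e_low for e_high, e_low in corrections)

e_total = oniom_energy(
    e_low_real=-1234.567,
    corrections=[(-152.301, -152.288), (-76.412, -76.405)],
)
print(f"extrapolated total energy: {e_total:.3f} Eh")
```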
A level set approach for shock-induced α-γ phase transition of RDX
NASA Astrophysics Data System (ADS)
Josyula, Kartik; Rahul; De, Suvranu
2018-02-01
We present a thermodynamically consistent level set approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization energy based level set approach is efficient, robust, and easy to implement.
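A minimal one-dimensional sketch of the two ingredients described above, interface advection plus a diffusive regularization that keeps the level set function close to a signed distance, is given below; it assumes a simple finite-difference discretization rather than the authors' Galerkin finite element formulation, and the coefficients are arbitrary.

```python
# 1D sketch: advect a signed distance function with a constant interface speed
# (upwind scheme) and add a diffusion-like regularization flux that nudges
# |dphi/dx| back toward 1, which is the role played by the regularization energy.
import numpy as np

nx, dx, dt, speed, mu = 201, 0.01, 0.002, 0.5, 0.1
x = np.linspace(0.0, 2.0, nx)
phi = x - 0.5                                  # signed distance to the interface at x = 0.5

for _ in range(400):
    dphi = np.gradient(phi, dx)
    upwind = (phi - np.roll(phi, 1)) / dx      # backward difference for positive speed
    upwind[0] = upwind[1]
    # flux ~ (1 - 1/|dphi|) * dphi diffuses |dphi| toward 1 (signed distance property)
    reg = np.gradient((1.0 - 1.0 / np.clip(np.abs(dphi), 1e-6, None)) * dphi, dx)
    phi = phi - dt * speed * upwind + dt * mu * reg

interface = x[np.argmin(np.abs(phi))]
print(f"interface position after evolution: {interface:.3f}")  # moved from 0.5 toward ~0.9
```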
Integration of SAR and DEM data: Geometrical considerations
NASA Technical Reports Server (NTRS)
Kropatsch, Walter G.
1991-01-01
General principles for integrating data from different sources are derived from experience in registering SAR images with digital elevation model (DEM) data. The integration consists of establishing geometrical relations between the data sets that allow information from both data sets to be accumulated for any given object point (e.g., elevation, slope, backscatter of ground cover, etc.). Since the geometries of the two data sets are completely different, they cannot be compared on a pixel-by-pixel basis. The presented approach detects instances of higher-level features in both data sets independently and performs the matching at the high level. Besides the efficiency of this general strategy, it further allows the integration of additional knowledge sources: world knowledge and sensor characteristics are also useful sources of information. The SAR-specific features layover and shadow can be detected easily in SAR images. An analytical method for finding such regions in a DEM additionally requires the parameters of the SAR sensor's flight path and the range projection model. The generation of the SAR layover and shadow maps is summarized and new extensions to this method are proposed.
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2017-06-01
With the aim of mitigating the basis set error in density functional theory (DFT) calculations employing local basis sets, we herein develop two empirical corrections for basis set superposition error (BSSE) in the def2-SVPD basis, a basis which—when stripped of BSSE—is capable of providing near-complete-basis DFT results for non-covalent interactions. Specifically, we adapt the existing pairwise geometrical counterpoise (gCP) approach to the def2-SVPD basis, and we develop a beyond-pairwise approach, DFT-C, which we parameterize across a small set of intermolecular interactions. Both gCP and DFT-C are evaluated against the traditional Boys-Bernardi counterpoise correction across a set of 3402 non-covalent binding energies and isomerization energies. We find that the DFT-C method represents a significant improvement over gCP, particularly for non-covalently-interacting molecular clusters. Moreover, DFT-C is transferable among density functionals and can be combined with existing functionals—such as B97M-V—to recover large-basis results at a fraction of the cost.
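For reference, the Boys-Bernardi counterpoise correction against which gCP and DFT-C are benchmarked amounts to a simple energy difference; the sketch below uses placeholder energies.

```python
# Boys-Bernardi counterpoise-corrected vs. uncorrected binding energies.
# All energies are placeholders (hartree) standing in for separate calculations.

def cp_binding_energy(e_dimer_ab, e_mono_a_in_ab, e_mono_b_in_ab):
    """Monomer energies evaluated in the full dimer basis, so BSSE largely cancels."""
    return e_dimer_ab - e_mono_a_in_ab - e_mono_b_in_ab

def raw_binding_energy(e_dimer_ab, e_mono_a, e_mono_b):
    """Monomer energies in their own bases; the result still contains BSSE."""
    return e_dimer_ab - e_mono_a - e_mono_b

e_cp = cp_binding_energy(-152.100, -76.045, -76.048)
e_raw = raw_binding_energy(-152.100, -76.043, -76.046)
print(f"CP-corrected: {e_cp:.4f} Eh, uncorrected: {e_raw:.4f} Eh")
```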
Multiple organ definition in CT using a Bayesian approach for 3D model fitting
NASA Astrophysics Data System (ADS)
Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.
1995-08-01
Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models (specifically for the liver and kidney) that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.
Electroslag and electrogas welding
NASA Technical Reports Server (NTRS)
Campbell, H. C.
1972-01-01
These two new joining methods perform welding in the vertical position, and therein lies the secret of their impressive advantages in material handling, in weld preparation, in welding speed, in freedom from distortion, and in weld soundness. Once the work has been set in the proper vertical position for welding, no further plate handling is required. The molten filler metal is held in place by copper shoes or dams, and the weld is completed in one pass.
Detecting Edges in Images by Use of Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve
2003-01-01
A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords an ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations. It appears that to perceive unfamiliar objects, or to perceive familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.
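A toy sketch of a fuzzy-reasoning flavour of edge detection is shown below; the sigmoidal membership function and thresholds are illustrative choices and are not taken from the NASA method.

```python
# Illustrative only: each pixel receives a membership in the fuzzy set "edge"
# from its local gradient magnitude, and the membership map is then thresholded.
import numpy as np

def fuzzy_edges(image, midpoint=0.2, steepness=25.0, cut=0.5):
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    grad = grad / (grad.max() + 1e-12)                        # normalise to [0, 1]
    membership = 1.0 / (1.0 + np.exp(-steepness * (grad - midpoint)))
    return membership, membership > cut

img = np.ones((32, 32))                                       # bright background
img[8:24, 8:24] = 0.0                                         # dark square
membership, edges = fuzzy_edges(img)
print(edges.sum(), "pixels classified as edge")
```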
Further distinctive investigations of the Sumudu transform
NASA Astrophysics Data System (ADS)
Belgacem, Fethi Bin Muhammad; Silambarasan, Rathinavel
2017-01-01
The Sumudu transform of a time function f(t) is computed by introducing the transform variable u as a scale factor of the argument, so that f(t) becomes f(ut), and then integrating against exp(-t). Because u only scales the argument of the original function, f(ut) preserves units and dimension; this preservation property distinguishes the Sumudu transform from other integral transforms. From this definition, the related complete set of properties is derived for the Sumudu transform. A fragment of a symbolic C++ program is given for computing the Sumudu transform as a series, and a Maple procedure is given for computing it in closed form. The method proposed herein depends neither on homotopy methods such as HPM and HAM nor on decomposition methods such as ADM.
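For reference, the definition implied above and its action on a monomial can be written compactly (both are standard properties of the Sumudu transform rather than results specific to this paper):

```latex
% Sumudu transform: the transform variable u scales the argument of f,
% so f(ut) keeps the units and dimension of f(t).
\[
  S[f](u) \;=\; \int_{0}^{\infty} f(ut)\, e^{-t}\, \mathrm{d}t ,
\qquad
  S[t^{n}](u) \;=\; \int_{0}^{\infty} (ut)^{n} e^{-t}\, \mathrm{d}t \;=\; n!\, u^{n}.
\]
```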
Toward a zero VAP rate: personal and team approaches in the ICU.
Fox, Maria Y
2006-01-01
In a fast-paced setting like the intensive care unit (ICU), nurses must have appropriate tools and resources in order to implement appropriate and timely interventions. Ventilator-associated pneumonia (VAP) is a costly and potentially fatal outcome for ICU patients that requires timely interventions. Even with established guidelines and care protocols, nurses do not always incorporate best practice interventions into their daily plan of care. Despite the plethora of information and guidelines about how to apply interventions in order to save lives, managers of ICUs are challenged to involve the bedside nurse and other ICU team members to apply these bundles of interventions in a proactive, rather than reactive, manner in order to prevent complications of care. The purpose of this article is to illustrate the success of 2 different methods utilized to improve patient care in the ICU. The first method is a personal process improvement model, and the second method is a team approach model. Both methods were utilized in order to implement interventions in a timely and complete manner to prevent VAP and its related problem, hospital-associated pneumonia, in the ICU setting. Success with these 2 methods has spurred an interest in other patient care initiatives.
Left ventricular endocardial surface detection based on real-time 3D echocardiographic data
NASA Technical Reports Server (NTRS)
Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.
2001-01-01
OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged by 2D echocardiography with off-line reconstruction, as well as to RT3DE data. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). Dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion and volume reconstruction can be accomplished. The visualization technique also allows navigation through the reconstructed volume and display of any section of the volume.
A ranking method for the concurrent learning of compounds with various activity profiles.
Dörr, Alexander; Rosenbaum, Lars; Zell, Andreas
2015-01-01
In this study, we present an SVM-based ranking algorithm for the concurrent learning of compounds with different activity profiles and their varying prioritization. To this end, a specific labeling of each compound was elaborated in order to infer virtual screening models against multiple targets. We compared the method with several state-of-the-art SVM classification techniques that are capable of inferring multi-target screening models on three chemical data sets (cytochrome P450s, dehydrogenases, and a trypsin-like protease data set) containing three different biological targets each. The experiments show that ranking-based algorithms achieve increased performance for single- and multi-target virtual screening. Moreover, compared to other multi-target SVM methods, compounds that do not completely fulfill the desired activity profile are still ranked higher than decoys or compounds with an entirely undesired profile. SVM-based ranking methods constitute a valuable approach for virtual screening in multi-target drug design. The utilization of such methods is most helpful when dealing with compounds with various activity profiles and when many ligands with an already perfectly matching activity profile are not expected to be found.
Marcano Belisario, José S; Jamsek, Jan; Huckvale, Kit; O'Donoghue, John; Morrison, Cecily P; Car, Josip
2015-07-27
Self-administered survey questionnaires are an important data collection tool in clinical practice, public health research and epidemiology. They are ideal for achieving a wide geographic coverage of the target population, dealing with sensitive topics and are less resource-intensive than other data collection methods. These survey questionnaires can be delivered electronically, which can maximise the scalability and speed of data collection while reducing cost. In recent years, the use of apps running on consumer smart devices (i.e., smartphones and tablets) for this purpose has received considerable attention. However, variation in the mode of delivering a survey questionnaire could affect the quality of the responses collected. To assess the impact that smartphone and tablet apps as a delivery mode have on the quality of survey questionnaire responses compared to any other alternative delivery mode: paper, laptop computer, tablet computer (manufactured before 2007), short message service (SMS) and plastic objects. We searched MEDLINE, EMBASE, PsycINFO, IEEEXplore, Web of Science, CABI: CAB Abstracts, Current Contents Connect, ACM Digital, ERIC, Sociological Abstracts, Health Management Information Consortium, the Campbell Library and CENTRAL. We also searched registers of current and ongoing clinical trials such as ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform. We also searched the grey literature in OpenGrey, Mobile Active and ProQuest Dissertation & Theses. Lastly, we searched Google Scholar and the reference lists of included studies and relevant systematic reviews. We performed all searches up to 12 and 13 April 2015. We included parallel randomised controlled trials (RCTs), crossover trials and paired repeated measures studies that compared the electronic delivery of self-administered survey questionnaires via a smartphone or tablet app with any other delivery mode. We included data obtained from participants completing health-related self-administered survey questionnaire, both validated and non-validated. We also included data offered by both healthy volunteers and by those with any clinical diagnosis. We included studies that reported any of the following outcomes: data equivalence; data accuracy; data completeness; response rates; differences in the time taken to complete a survey questionnaire; differences in respondent's adherence to the original sampling protocol; and acceptability to respondents of the delivery mode. We included studies that were published in 2007 or after, as devices that became available during this time are compatible with the mobile operating system (OS) framework that focuses on apps. Two review authors independently extracted data from the included studies using a standardised form created for this systematic review in REDCap. They then compared their forms to reach consensus. Through an initial systematic mapping on the included studies, we identified two settings in which survey completion took place: controlled and uncontrolled. These settings differed in terms of (i) the location where surveys were completed, (ii) the frequency and intensity of sampling protocols, and (iii) the level of control over potential confounders (e.g., type of technology, level of help offered to respondents). We conducted a narrative synthesis of the evidence because a meta-analysis was not appropriate due to high levels of clinical and methodological diversity. 
We reported our findings for each outcome according to the setting in which the studies were conducted. We included 14 studies (15 records) with a total of 2275 participants, although we included only 2272 participants in the final analyses, as there were missing data for three participants from one included study. Regarding data equivalence, in both controlled and uncontrolled settings, the included studies found no significant differences in the mean overall scores between apps and other delivery modes, and that all correlation coefficients exceeded the recommended thresholds for data equivalence. Concerning the time taken to complete a survey questionnaire in a controlled setting, one study found that an app was faster than paper, whereas the other study did not find a significant difference between the two delivery modes. In an uncontrolled setting, one study found that an app was faster than SMS. Data completeness and adherence to sampling protocols were only reported in uncontrolled settings. Regarding the former, an app was found to result in more complete records than paper, and in significantly more data entries than an SMS-based survey questionnaire. Regarding adherence to the sampling protocol, apps may be better than paper but no different from SMS. We identified multiple definitions of acceptability to respondents, with inconclusive results: preference; ease of use; willingness to use a delivery mode; satisfaction; effectiveness of the system; informativeness; perceived time taken to complete the survey questionnaire; perceived benefit of a delivery mode; perceived usefulness of a delivery mode; perceived ability to complete a survey questionnaire; maximum length of time that participants would be willing to use a delivery mode; and reactivity to the delivery mode and its successful integration into respondents' daily routine. Finally, regardless of the study setting, none of the included studies reported data accuracy or response rates. Our results, based on a narrative synthesis of the evidence, suggest that apps might not affect data equivalence as long as the intended clinical application of the survey questionnaire, its intended frequency of administration and the setting in which it was validated remain unchanged. There were no data on data accuracy or response rates, and findings on the time taken to complete a self-administered survey questionnaire were contradictory. Furthermore, although apps might improve data completeness, there is not enough evidence to assess their impact on adherence to sampling protocols. None of the included studies assessed how elements of user interaction design, survey questionnaire design and intervention design might influence mode effects. Those conducting research in public health and epidemiology should not assume that mode effects relevant to other delivery modes apply to apps running on consumer smart devices. Those conducting methodological research might wish to explore the issues highlighted by this systematic review.
Modeling and Representation of Human Hearts for Volumetric Measurement
Guan, Qiu; Wang, Wanliang; Wu, Guang
2012-01-01
This paper investigates automatic construction of a three-dimensional heart model from a set of medical images, represents it in a deformable shape, and uses it to perform volumetric measurements. This not only significantly improves its reliability and accuracy but also makes it possible to derive valuable novel information, like various assessment and dynamic volumetric measurements. The method is based on a flexible model trained from hundreds of patient image sets by a genetic algorithm, which takes advantage of complete segmentation of the heart shape to form a geometrical heart model. For an image set of a new patient, an interpretation scheme is used to obtain its shape and evaluate some important parameters. Apart from automatic evaluation of traditional heart functions, some new information of cardiovascular diseases may be recognized from the volumetric analysis. PMID:22162723
Madarame, Haruhiko; Nakada, Satoshi; Ohta, Takahisa; Ishii, Naokata
2018-05-01
To test the applicability of postexercise blood flow restriction (PEBFR) in practical training programmes, we investigated whether PEBFR enhances muscle hypertrophy induced by multiple-set high-load resistance exercise (RE). Seven men completed an eight-week RE programme for knee extensor muscles. Employing a within-subject design, one leg was subjected to RE + PEBFR, whereas contralateral leg to RE only. On each exercise session, participants performed three sets of unilateral knee extension exercise at approximately 70% of their one-repetition maximum for RE leg first, and then performed three sets for RE + PEBFR leg. Immediately after completion of the third set, the proximal portion of the RE + PEBFR leg was compressed with an air-pressure cuff for 5 min at a pressure ranging from 100 to 150 mmHg. If participants could perform 10 repetitions for three sets in two consecutive exercise sessions, the work load was increased by 5% at the next exercise session. Muscle thickness and strength of knee extensor muscles were measured before and after the eight-week training period and after the subsequent eight-week detraining period. There was a main effect of time but no condition × time interaction or main effect of condition for muscle thickness and strength. Both muscle thickness and strength increased after the training period independent of the condition. This result suggests that PEBFR would not be an effective training method at least in an early phase of adaptation to high-load resistance exercise. © 2017 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Mungun, Tuya; Dorj, Narangerel; Volody, Baigal; Chuluundorj, Uranjargal; Munkhbat, Enkhtuya; Danzan, Gerelmaa; Nguyen, Cattram D; La Vincente, Sophie; Russell, Fiona
2017-01-01
Introduction Monitoring of vaccination coverage is vital for the prevention and control of vaccine-preventable diseases. Electronic immunization registers have been increasingly adopted to assist with the monitoring of vaccine coverage; however, there is limited literature about the use of electronic registers in low- and middle-income countries such as Mongolia. We aimed to determine the accuracy and completeness of the newly introduced electronic immunization register for calculating vaccination coverage and determining vaccine effectiveness within two districts in Mongolia in comparison to written health provider records. Methods We conducted a cross-sectional record review among children 2–23 months of age vaccinated at immunization clinics within the two districts. We linked data from written records with the electronic immunization register using the national identification number to determine the completeness and accuracy of the electronic register. Results Both completeness (90.9%; 95% CI: 88.4–93.4) and accuracy (93.3%; 95% CI: 84.1–97.4) of the electronic immunization register were high when compared to written records. The increase in completeness over time indicated a delay in data entry. Conclusion Through this audit, we have demonstrated concordance between a newly introduced electronic register and health provider records in a middle-income country setting. Based on this experience, we recommend that electronic registers be accompanied by routine quality assurance procedures for the monitoring of vaccination programmes in such settings. PMID:29051836
Diaz, Naryttza N; Krause, Lutz; Goesmann, Alexander; Niehaus, Karsten; Nattkemper, Tim W
2009-01-01
Background Metagenomics, or the sequencing and analysis of collective genomes (metagenomes) of microorganisms isolated from an environment, promises direct access to the "unculturable majority". This emerging field offers the potential to lay a solid basis for our understanding of the entire living world. However, taxonomic classification is an essential task in the analysis of metagenomic data sets that is still far from being solved. We present a novel strategy to predict the taxonomic origin of environmental genomic fragments. The proposed classifier combines the idea of the k-nearest neighbor with strategies from kernel-based learning. Results Our novel strategy was extensively evaluated using the leave-one-out cross validation strategy on fragments of variable length (800 bp – 50 Kbp) from 373 completely sequenced genomes. TACOA is able to classify genomic fragments of length 800 bp and 1 Kbp with high accuracy down to the rank of class. For longer fragments ≥ 3 Kbp, accurate predictions are made at even deeper taxonomic ranks (order and genus). Remarkably, TACOA also produces reliable results when the taxonomic origin of a fragment is not represented in the reference set, classifying such fragments to their known broader taxonomic class or simply as "unknown". We compared the classification accuracy of TACOA with the latest intrinsic classifier PhyloPythia using 63 recently published complete genomes. For fragments of length 800 bp and 1 Kbp, the overall accuracy of TACOA is higher than that obtained by PhyloPythia at all taxonomic ranks. For all fragment lengths, both methods achieved comparably high specificity up to the rank of class, and low false negative rates were also obtained. Conclusion An accurate multi-class taxonomic classifier was developed for environmental genomic fragments. TACOA can predict with high reliability the taxonomic origin of genomic fragments as short as 800 bp. The proposed method is transparent, fast, accurate, and the reference set can be easily updated as newly sequenced genomes become available. Moreover, the method was demonstrated to be competitive when compared to the most current classifier PhyloPythia and has the advantage that it can be locally installed and the reference set can be kept up to date. PMID:19210774
Bowles, K. H.; Adelsberger, M. C.; Chittams, J. L.; Liao, C.
2014-01-01
Summary Background Homecare is an important and effective way of managing chronic illnesses using skilled nursing care in the home. Unlike hospitals and ambulatory settings, clinicians visit patients at home at different times, independent of each other. Twenty-nine percent of 10,000 homecare agencies in the United States have adopted point-of-care EHRs. Yet, relatively little is known about the growing use of homecare EHRs. Objective Researchers compared workflow, financial billing, and patient outcomes before and after implementation to evaluate the impact of a homecare point-of-care EHR. Methods The design was a pre/post observational study embedded in a mixed methods study. The setting was a Philadelphia-based homecare agency with 137 clinicians. Data sources included: (1) clinician EHR documentation completion; (2) EHR usage data; (3) Medicare billing data; (4) an EHR Nurse Satisfaction survey; (5) clinician observations; (6) clinician interviews; and (7) patient outcomes. Results Clinicians were satisfied with documentation timeliness and team communication. Following EHR implementation, 90% of notes were completed within the 1-day compliance interval (n = 56,702) compared with 30% of notes completed within the 7-day compliance interval in the pre-implementation period (n = 14,563; OR 19, p <. 001). Productivity in the number of clinical notes documented post-implementation increased almost 10-fold compared to pre-implementation. Days to Medicare claims fell from 100 days pre-implementation to 30 days post-implementation, while the census rose. EHR implementation impact on patient outcomes was limited to some behavioral outcomes. Discussion Findings from this homecare EHR study indicated clinician EHR use enabled a sustained increase in productivity of note completion, as well as timeliness of documentation and billing for reimbursement with limited impact on improving patient outcomes. As EHR adoption increases to better meet the needs of the growing population of older people with chronic health conditions, these results can inform homecare EHR development and implementation. PMID:25024760
A correlated ab initio study of linear carbon-chain radicals CnH (n = 2-7)
NASA Technical Reports Server (NTRS)
Woon, D. E.; Loew, G. H. (Principal Investigator)
1995-01-01
Linear carbon-chain radicals CnH for n = 2-7 have been studied with correlation consistent valence and core-valence basis sets and the coupled cluster method RCCSD(T). Equilibrium structures, rotational constants, and dipole moments are reported and compared with available experimental data. The ground state of the even-n series changes from 2 sigma+ to 2 pi as the chain is extended. For C4H, the 2 sigma+ state was found to lie only 72 cm-1 below the 2 pi state in the estimated complete basis set limit for valence correlation. The C2H- and C3H- anions have also been characterized.
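The estimated complete basis set limit mentioned above is typically obtained by a two-point extrapolation of correlation energies; a generic Helgaker-type X^-3 form is sketched below with placeholder energies, without implying that it is the exact scheme used in this study.

```python
# Generic two-point correlation-energy extrapolation to the complete basis set
# (CBS) limit from correlation-consistent basis sets with cardinal numbers x < y.
# Energies are placeholders in hartree.

def cbs_two_point(e_corr_x, x, e_corr_y, y):
    return (y**3 * e_corr_y - x**3 * e_corr_x) / (y**3 - x**3)

e_tz, e_qz = -0.31542, -0.32108   # hypothetical cc-pVTZ / cc-pVQZ correlation energies
print(f"estimated CBS correlation energy: {cbs_two_point(e_tz, 3, e_qz, 4):.5f} Eh")
```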
Francis, Diane B; Cates, Joan R; Wagner, Kyla P Garrett; Zola, Tracey; Fitter, Jenny E; Coyne-Beasley, Tamera
2017-07-01
This systematic review examines the effectiveness of communication technology interventions on HPV vaccination initiation and completion. A comprehensive search strategy was used to identify existing randomized controlled trials testing the impact of computer-, mobile- or internet-based interventions on receipt of any dose of the HPV vaccine. Twelve relevant studies were identified with a total of 38,945 participants. The interventions were delivered using several different methods, including electronic health record (i.e. recall/reminder) prompts, text messaging, automated phone calls, interactive computer videos, and email. Vaccine initiation and completion was greater for technology-based studies relative to their control conditions. There is evidence that interventions utilizing communication technologies as their sole or primary mode for HPV vaccination intervention delivery may increase vaccination coverage. Communication technologies hold much promise for the future of HPV vaccination efforts, especially initiatives in practice-based settings. Copyright © 2017 Elsevier B.V. All rights reserved.
Hewlett, S; Clarke, B; O'Brien, A; Hammond, A; Ryan, S; Kay, L; Richards, P; Almeida, C
2008-07-01
Rheumatological conditions are common, thus nurses (Ns) occupational therapists (OTs) and physiotherapists (PTs) require at least basic rheumatology knowledge upon qualifying. The aim of this study was to develop a core set of teaching topics and potential ways of delivering them. A modified Delphi technique was used for clinicians to develop preliminary core sets of teaching topics for each profession. Telephone interviews with educationalists explored their views on these, and challenges and solutions for delivering them. Inter-professional workshops enabled clinicians and educationalists to finalize the core set together, and generate methods for delivery. Thirty-nine rheumatology clinicians (12N, 14OT, 13PT) completed the Delphi consensus, proposing three preliminary core sets (N71 items, OT29, PT26). Nineteen educationalists (6N, 7OT, 6PT) participated in telephone interviews, raising concerns about disease-specific vs generic teaching and proposing many methods for delivery. Three inter-professional workshops involved 34 participants (clinicians: N12, OT9, PT5; educationalists: N2, OT3, PT2; Patient 1) who reached consensus on a single core set comprising six teaching units: Anatomy and Physiology; Assessment; Management and Intervention; Psychosocial Issues; Patient Education; and the Multi-disciplinary Team, recommending some topics within the units receive greater depth for some professions. An innovative range of delivery options was generated plus two brief interventions: a Rheumatology Chat Show and a Rheumatology Road Show. Working together, clinicians and educationalists proposed a realistic core set of rheumatology topics for undergraduate health professionals. They proposed innovative delivery methods, with collaboration between educationalists, clinicians and patients strongly recommended. These potential interventions need testing.
Wilson, Annabelle M; Magarey, Anthea M; Dollman, James; Jones, Michelle; Mastersson, Nadia
2010-08-01
To describe the rationale, development and implementation of the quantitative component of evaluation of a multi-setting, multi-strategy, community-based childhood obesity prevention project (the eat well be active (ewba) Community Programs) and the challenges associated with this process and some potential solutions. ewba has a quasi-experimental design with intervention and comparison communities. Baseline data were collected in 2006 and post-intervention measures will be taken from a non-matched cohort in 2009. Schoolchildren aged 10-12 years were chosen as one litmus group for evaluation purposes. Thirty-nine primary schools in two metropolitan and two rural communities in South Australia. A total of 1732 10-12-year-old school students completed a nutrition and/or a physical activity questionnaire and 1637 had anthropometric measures taken; 983 parents, 286 teachers, thirty-six principals, twenty-six canteen and thirteen out-of-school-hours care (OSHC) workers completed Program-specific questionnaires developed for each of these target groups. The overall child response rate for the study was 49 %. Sixty-five per cent, 43 %, 90 %, 90 % and 68 % of parents, teachers, principals, canteen and OSHC workers, respectively, completed and returned questionnaires. A number of practical, logistical and methodological challenges were experienced when undertaking this data collection. Learnings from the process of quantitative baseline data collection for the ewba Community Programs can provide insights for other researchers planning similar studies with similar methods, particularly those evaluating multi-strategy programmes across multiple settings.
Predictions of CD4 lymphocytes’ count in HIV patients from complete blood count
2013-01-01
Background HIV diagnosis, prognosis and treatment require the T CD4 lymphocyte count from flow cytometry, an expensive technique often not available to people in developing countries. The aim of this work is to apply a previously developed methodology that predicts the T CD4 lymphocytes' value based on the total white blood cell (WBC) count and lymphocyte count, applying set theory to information taken from the Complete Blood Count (CBC). Methods Set theory was used to classify into groups named A, B, C and D the number of leucocytes/mm3, lymphocytes/mm3, and the CD4/μL subpopulation per flow cytometry of 800 HIV-diagnosed patients. Unions between sets A and C, and B and D were assessed, and the intersection between both unions was described in order to establish the belonging percentage to these sets. Results were classified into eight ranges of 1000 leucocytes/mm3 each, calculating the belonging percentage of each range with respect to the whole sample. Results The intersection (A ∪ C) ∩ (B ∪ D) showed an effectiveness in the prediction of 81.44% for the range between 4000 and 4999 leukocytes, 91.89% for the range between 3000 and 3999, and 100% for the range below 3000. Conclusions The usefulness and clinical applicability of a methodology based on set theory were confirmed for predicting the T CD4 lymphocytes' value, beginning with the WBC and lymphocyte counts from the CBC. This methodology is new, objective, and has lower costs than flow cytometry, which is currently considered the gold standard. PMID:24034560
Recruitment for a Diabetes Prevention Program translation effort in a worksite setting.
Taradash, J; Kramer, M; Molenaar, D; Arena, V; Vanderwood, K; Kriska, Andrea M
2015-03-01
The success of the Diabetes Prevention Program (DPP) lifestyle intervention has led to community-based translation efforts in a variety of settings. One community setting which holds promise for the delivery of prevention intervention is the worksite; however, information regarding recruitment in this setting is limited. The current effort describes the initial processes surrounding provision of an adapted DPP lifestyle intervention at a corporate worksite. Investigators and key management at the worksite collaborated to develop and implement a recruitment plan for the intervention focusing on 1) in-person onsite activities and 2) implementation of a variety of media recruitment tools and methods. Adult, non-diabetic overweight/obese employees and family members with pre-diabetes and/or the metabolic syndrome were eligible for the study. Telephone pre-screening was completed for 176 individuals resulting in 171 eligible for onsite screening. Of that number, 160 completed onsite screening, 107 met eligibility criteria, and 89 enrolled in the study. Support from worksite leadership, an invested worksite planning team and a solid recruitment plan consisting of multiple strategies were identified as crucial elements of this effective workplace recruitment effort. A worksite team successfully developed and implemented a recruitment plan using existing mechanisms appropriate to that worksite in order to identify and enroll eligible individuals. The results of this effort indicate that employee recruitment in a worksite setting is feasible as the first step in offering onsite behavioral lifestyle intervention programs as part of a widespread dissemination plan to prevent diabetes and lower risk for cardiovascular disease. Copyright © 2015 Elsevier Inc. All rights reserved.
Schiefer, H; von Toggenburg, F; Seelentag, W W; Plasswilm, L; Ries, G; Schmid, H-P; Leippold, T; Krusche, B; Roth, J; Engeler, D
2009-08-21
The dose coverage of low dose rate (LDR)-brachytherapy for localized prostate cancer is monitored 4-6 weeks after intervention by contouring the prostate on computed tomography and/or magnetic resonance imaging sets. Dose parameters for the prostate (V100, D90 and D80) provide information on the treatment quality. Those depend strongly on the delineation of the prostate contours. We therefore systematically investigated the contouring process for 21 patients with five examiners. The prostate structures were compared with one another using topological procedures based on Boolean algebra. The coincidence number C(V) measures the agreement between a set of structures. The mutual coincidence C(i, j) measures the agreement between two structures i and j, and the mean coincidence C(i) compares a selected structure i with the remaining structures in a set. All coincidence parameters have a value of 1 for complete coincidence of contouring and 0 for complete absence. The five patients with the lowest C(V) values were discussed, and rules for contouring the prostate have been formulated. The contouring and assessment were repeated after 3 months for the same five patients. All coincidence parameters have been improved after instruction. This shows objectively that training resulted in more consistent contouring across examiners.
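One plausible realization of the pairwise and mean coincidence measures on Boolean contour volumes is sketched below, using intersection over union of voxel masks; it reproduces the stated range (1 for complete coincidence, 0 for none), but the paper's exact Boolean definitions may differ.

```python
# Hedged sketch: coincidence between binary prostate contours as intersection
# over union of voxel masks, plus the mean coincidence of one examiner's
# contour against the rest. The masks below are random toy data.
import numpy as np

def pairwise_coincidence(vi, vj):
    union = np.logical_or(vi, vj).sum()
    return np.logical_and(vi, vj).sum() / union if union else 1.0

def mean_coincidence(volumes, i):
    others = [j for j in range(len(volumes)) if j != i]
    return float(np.mean([pairwise_coincidence(volumes[i], volumes[j]) for j in others]))

rng = np.random.default_rng(0)
base = rng.random((16, 16, 16)) > 0.6                                # common underlying structure
masks = [base ^ (rng.random(base.shape) > 0.95) for _ in range(3)]   # three "examiners"
print([round(mean_coincidence(masks, i), 3) for i in range(3)])
```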
Li, Y Q; Varandas, A J C
2010-09-16
An accurate single-sheeted double many-body expansion potential energy surface is reported for the title system which is suitable for dynamics and kinetics studies of the reactions of N(2D) + H2(X1Sigmag+) NH(a1Delta) + H(2S) and their isotopomeric variants. It is obtained by fitting ab initio energies calculated at the multireference configuration interaction level with the aug-cc-pVQZ basis set, after slightly correcting semiempirically the dynamical correlation using the double many-body expansion-scaled external correlation method. The function so obtained is compared in detail with a potential energy surface of the same family obtained by extrapolating the calculated raw energies to the complete basis set limit. The topographical features of the novel global potential energy surface are examined in detail and found to be in general good agreement with those calculated directly from the raw ab initio energies, as well as previous calculations available in the literature. The novel function has been built so as to become degenerate at linear geometries with the ground-state potential energy surface of A'' symmetry reported by our group, where both form a Renner-Teller pair.
Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S
2007-01-01
We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC(R)/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak. This was insufficient for a quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak. Quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch. During each LC injection, a single compound was analyzed. The mass spectrometer was detecting at a particular ionization mode during each LC injection. More than 20 data points were obtained for each LC peak. Quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced to 1.5 min (UPLC method) from 3.5 min (HPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.
SEMANTIC3D.NET: a New Large-Scale Point Cloud Classification Benchmark
NASA Astrophysics Data System (ADS)
Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J. D.; Schindler, K.; Pollefeys, M.
2017-05-01
This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a workhorse, which already show remarkable performance improvements over the state of the art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds with a much higher overall number of labelled points compared to those already available to the research community. We further provide baseline method descriptions and comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.
Patient Compliance With Electronic Patient Reported Outcomes Following Shoulder Arthroscopy.
Makhni, Eric C; Higgins, John D; Hamamoto, Jason T; Cole, Brian J; Romeo, Anthony A; Verma, Nikhil N
2017-11-01
To determine the patient compliance in completing electronically administered patient-reported outcome (PRO) scores following shoulder arthroscopy, and to determine if dedicated research assistants improve patient compliance. Patients undergoing arthroscopic shoulder surgery from January 1, 2014, to December 31, 2014, were prospectively enrolled into an electronic data collection system with retrospective review of compliance data. A total of 143 patients were included in this study; 406 patients were excluded (for any or all of the following reasons, such as incomplete follow-up, inaccessibility to the order sets, and inability to complete the order sets). All patients were assigned an order set of PROs through an electronic reporting system, with order sets to be completed prior to surgery, as well as 6 and 12 months postoperatively. Compliance rates of form completion were documented. Patients who underwent arthroscopic anterior and/or posterior stabilization were excluded. The average age of the patients was 53.1 years, ranging from 20 to 83. Compliance of form completion was highest preoperatively (76%), and then dropped subsequently at 6 months postoperatively (57%) and 12 months postoperatively (45%). Use of research assistants improved compliance by approximately 20% at each time point. No differences were found according to patient gender and age group. Of those completing forms, a majority completed forms at home or elsewhere prior to returning to the office for the clinic visit. Electronic administration of PRO may decrease the amount of time required in the office setting for PRO completion by patients. This may be mutually beneficial to providers and patients. It is unclear if an electronic system improves patient compliance in voluntary completion PRO. Compliance rates at final follow-up remain a concern if data are to be used for establishing quality or outcome metrics. Level IV, case series. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Electron-helium S-wave model benchmark calculations. I. Single ionization and single excitation
NASA Astrophysics Data System (ADS)
Bartlett, Philip L.; Stelbovics, Andris T.
2010-02-01
A full four-body implementation of the propagating exterior complex scaling (PECS) method [J. Phys. B 37, L69 (2004)] is developed and applied to the electron-impact of helium in an S-wave model. Time-independent solutions to the Schrödinger equation are found numerically in coordinate space over a wide range of energies and used to evaluate total and differential cross sections for a complete set of three- and four-body processes with benchmark precision. With this model we demonstrate the suitability of the PECS method for the complete solution of the full electron-helium system. Here we detail the theoretical and computational development of the four-body PECS method and present results for three-body channels: single excitation and single ionization. Four-body cross sections are presented in the sequel to this article [Phys. Rev. A 81, 022716 (2010)]. The calculations reveal structure in the total and energy-differential single-ionization cross sections for excited-state targets that is due to interference from autoionization channels and is evident over a wide range of incident electron energies.
Determination of Orbital Parameters for Visual Binary Stars Using a Fourier-Series Approach
NASA Astrophysics Data System (ADS)
Brown, D. E.; Prager, J. R.; DeLeo, G. G.; McCluskey, G. E., Jr.
2001-12-01
We expand on the Fourier transform method of Monet (ApJ 234, 275, 1979) to infer the orbital parameters of visual binary stars, and we present results for several systems, both simulated and real. Although originally developed to address binary systems observed through at least one complete period, we have extended the method to deal explicitly with cases where the orbital data is less complete. This is especially useful in cases where the period is so long that only a fragment of the orbit has been recorded. We utilize Fourier-series fitting methods appropriate to data sets covering less than one period and containing random measurement errors. In so doing, we address issues of over-determination in fitting the data and the reduction of other deleterious Fourier-series artifacts. We developed our algorithm using the MAPLE mathematical software code, and tested it on numerous "synthetic" systems, and several real binaries, including Xi Boo, 24 Aqr, and Bu 738. This work was supported at Lehigh University by the Delaware Valley Space Grant Consortium and by NSF-REU grant PHY-9820301.
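The basic fitting step, a least-squares fit of a truncated Fourier series to positional data that cover only part of one period, can be sketched as follows; the period, data and noise level are hypothetical, and the paper's safeguards against over-determination and series artifacts are omitted.

```python
# Least-squares fit of a truncated Fourier series to one coordinate of a visual
# binary observed over a fraction of its (assumed known) period. Toy data only.
import numpy as np

def fourier_design_matrix(t, period, n_harmonics):
    omega = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * omega * t), np.sin(k * omega * t)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
period = 200.0                                   # years, hypothetical
t = np.sort(rng.uniform(0.0, 60.0, 40))          # observations over ~30% of the orbit
x_true = 1.0 + 0.8 * np.cos(2 * np.pi * t / period) + 0.3 * np.sin(4 * np.pi * t / period)
x_obs = x_true + rng.normal(0.0, 0.01, t.size)

A = fourier_design_matrix(t, period, n_harmonics=2)
coeffs, *_ = np.linalg.lstsq(A, x_obs, rcond=None)
print(np.round(coeffs, 3))
# with such partial phase coverage the fit is ill-conditioned, which is exactly
# why the published method adds safeguards against over-determination
```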
Complexity-reduced implementations of complete and null-space-based linear discriminant analysis.
Lu, Gui-Fu; Zheng, Wenming
2013-10-01
Dimensionality reduction has become an important data preprocessing step in many applications. Linear discriminant analysis (LDA) is one of the most well-known dimensionality reduction methods. However, the classical LDA cannot be used directly in the small sample size (SSS) problem where the within-class scatter matrix is singular. In the past, many generalized LDA methods have been reported to address the SSS problem. Among these methods, complete linear discriminant analysis (CLDA) and null-space-based LDA (NLDA) provide good performance. The existing implementations of CLDA are computationally expensive. In this paper, we propose a new and fast implementation of CLDA. Our proposed implementation of CLDA is theoretically equivalent to the existing implementations but is the most efficient one. Since CLDA is an extension of null-space-based LDA (NLDA), our implementation of CLDA also provides a fast implementation of NLDA. Experiments on some real-world data sets demonstrate the effectiveness of our proposed new CLDA and NLDA algorithms. Copyright © 2013 Elsevier Ltd. All rights reserved.
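For orientation, the textbook construction behind NLDA (project the between-class scatter onto the null space of the within-class scatter and keep the leading directions) is sketched below; it is not the accelerated implementation proposed in the paper.

```python
# Hedged sketch of null-space-based LDA for the small-sample-size case.
import numpy as np

def nlda(X, y, n_components):
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                      # within-class scatter
    Sb = np.zeros((d, d))                      # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)
    U, s, _ = np.linalg.svd(Sw)
    null = U[:, s < 1e-10 * s.max()]           # null space of Sw (singular in the SSS case)
    w, V = np.linalg.eigh(null.T @ Sb @ null)  # maximise Sb inside that null space
    top = V[:, np.argsort(w)[::-1][:n_components]]
    return null @ top                          # columns are discriminant directions

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 20)) + np.repeat(np.eye(3), 2, axis=0) @ rng.normal(scale=5.0, size=(3, 20))
y = np.repeat([0, 1, 2], 2)                    # 6 samples, 20 features: SSS setting
print(nlda(X, y, n_components=2).shape)        # (20, 2)
```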
Ganatra, B; Kalyanwala, S; Elul, B; Coyaji, K; Tewari, S
2010-01-01
We explored women's perspectives on using medical abortion, including their reasons for selecting the method, their experiences with it and their thoughts regarding demedicalisation of part or all of the process. Sixty-three women from two urban clinics in India were interviewed within four weeks of abortion completion using a semi-structured in-depth interview guide. While women appreciated the non-invasiveness of medical abortion, other factors influencing method selection were family support and distance from the facility. The degree of medicalisation that women wanted or felt was necessary also depended on the way expectations were set by their providers. Confirmation of abortion completion was a source of anxiety for many women and led to unnecessary interventions in a few cases. Ultimately, experiences depended more on women's expectations about the method, and on the level of emotional and logistic support they received rather than on inherent characteristics of the method. These findings emphasise the circumstances under which women make reproductive choices and underscore the need to tailor service delivery to meet women's needs. Women-centred counselling and care that takes into consideration individual circumstances are needed.
Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser
2015-01-01
Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network. PMID:26330082
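A small sketch of the reduction described above is given below: a Lanczos iteration started from the initial state builds the invariant subspace of a toy search Hamiltonian on a complete graph, after which the dynamics can be restricted to the resulting low-dimensional block. The Hamiltonian and starting state are illustrative choices.

```python
# Lanczos-style construction of the invariant subspace generated by a start state.
import numpy as np

def lanczos_subspace(H, v0, max_dim, tol=1e-10):
    """Orthonormal basis of the Krylov/invariant subspace generated by v0 under H."""
    basis = [v0 / np.linalg.norm(v0)]
    for _ in range(max_dim - 1):
        w = H @ basis[-1]
        for b in basis:                        # full reorthogonalisation, for clarity
            w -= (b @ w) * b
        nrm = np.linalg.norm(w)
        if nrm < tol:                          # subspace has closed: a symmetry was detected
            break
        basis.append(w / nrm)
    return np.array(basis)

n = 8
A = np.ones((n, n)) - np.eye(n)                # adjacency matrix of the complete graph K_8
marked = np.zeros(n)
marked[0] = 1.0
H = -A - np.outer(marked, marked)              # toy spatial-search Hamiltonian
start = np.ones(n) / np.sqrt(n)                # uniform superposition over vertices

B = lanczos_subspace(H, start, max_dim=n)
print((B @ H @ B.T).shape)                     # (2, 2): the dynamics lives in a 2D subspace
```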
The rotate-plus-shift C-arm trajectory. Part I. Complete data with less than 180° rotation.
Ritschl, Ludwig; Kuntz, Jan; Fleischmann, Christof; Kachelrieß, Marc
2016-05-01
In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically a C-arm CT scan is performed using a circular or elliptical trajectory around a region of interest. Therefore, an angular range of at least 180° plus fan angle must be covered to ensure a completely sampled data set. However, mobile C-arms designed with a focus on classical 2D applications like fluoroscopy may be limited to a mechanical rotation range of less than 180° to improve handling and usability. The method proposed in this paper allows for the acquisition of a fully sampled data set with a system limited to a mechanical rotation range of at least 180° minus fan angle using a new trajectory design. This enables CT like 3D imaging with a wide range of C-arm devices which are mainly designed for 2D imaging. The proposed trajectory extends the mechanical rotation range of the C-arm system with two additional linear shifts. Due to the divergent character of the fan-beam geometry, these two shifts lead to an additional angular range of half of the fan angle. Combining one shift at the beginning of the scan followed by a rotation and a second shift, the resulting rotate-plus-shift trajectory enables the acquisition of a completely sampled data set using only 180° minus fan angle of rotation. The shifts can be performed using, e.g., the two orthogonal positioning axes of a fully motorized C-arm system. The trajectory was evaluated in phantom and cadaver examinations using two prototype C-arm systems. The proposed trajectory leads to reconstructions without limited angle artifacts. Compared to the limited angle reconstructions of 180° minus fan angle, image quality increased dramatically. Details in the rotate-plus-shift reconstructions were clearly depicted, whereas they are dominated by artifacts in the limited angle scan. The method proposed here employs 3D imaging using C-arms with less than 180° rotation range adding full 3D functionality to a C-arm device retaining both handling comfort and the usability of 2D imaging. This method has a clear potential for clinical use especially to meet the increasing demand for an intraoperative 3D imaging.
An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion
Chang, Yu-Bing; Xia, James J.; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo; Wong, Stephen T. C.
2017-01-01
In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to automatically and efficiently reestablish dental occlusion. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum distance mapping with collision constraints. With linearization of the rotation matrix, the alignment is modeled as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that using our method, the dental models can be successfully articulated with small deviations from the occlusion achieved with the gold-standard method. PMID:20529735
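As a rough illustration of the linearization idea mentioned above, the sketch below performs one unconstrained least-squares step of a rigid alignment with a small-angle (linearized) rotation. The paper's actual formulation adds collision constraints and solves a quadratic program; neither is reproduced here, and the point-pair input is assumed for illustration.

    import numpy as np

    def linearized_rigid_step(p, q):
        """One least-squares step of a linearized rigid alignment (illustrative).

        p, q: (N, 3) arrays of matched points (e.g. closest-point pairs between
        upper and lower dental surfaces). The rotation is linearized as
        R ~ I + skew(w), so the residual w x p_i + t - (q_i - p_i) is linear
        in the unknowns (w, t).
        """
        n = p.shape[0]
        A = np.zeros((3 * n, 6))
        b = (q - p).reshape(-1)
        for i, pi in enumerate(p):
            # w x pi = -skew(pi) @ w
            A[3*i:3*i+3, 0:3] = -np.array([[0.0, -pi[2], pi[1]],
                                           [pi[2], 0.0, -pi[0]],
                                           [-pi[1], pi[0], 0.0]])
            A[3*i:3*i+3, 3:6] = np.eye(3)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x[:3], x[3:]  # small rotation vector and translation

In practice such a step would sit inside an iterative loop that recomputes the closest-point pairs between the occlusal surfaces after each update.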
A depth-first search algorithm to compute elementary flux modes by linear programming.
Quek, Lake-Ee; Nielsen, Lars K
2014-07-30
The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
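As a toy illustration of the kind of LP flux feasibility test that drives such a depth-first search, the sketch below checks whether a steady-state flux distribution exists when some reactions are knocked out on the current search branch and one target reaction is forced to carry flux. It treats all reactions as irreversible and uses SciPy's linprog; the elementarity test and the branching logic of the actual algorithm are not reproduced here.

    import numpy as np
    from scipy.optimize import linprog

    def flux_feasible(S, excluded, target, v_max=1000.0):
        """LP feasibility test used as a building block of a depth-first EFM search (sketch).

        S: stoichiometric matrix (metabolites x reactions), all reactions treated
        as irreversible for simplicity. `excluded` is the set of reaction indices
        fixed to zero on the current branch; `target` is a reaction required to
        carry flux. Returns True if a steady-state flux vector exists.
        """
        n = S.shape[1]
        bounds = []
        for j in range(n):
            if j in excluded:
                bounds.append((0.0, 0.0))       # knocked out on this branch
            elif j == target:
                bounds.append((1.0, v_max))     # force the target reaction active
            else:
                bounds.append((0.0, v_max))
        res = linprog(c=np.zeros(n), A_eq=S, b_eq=np.zeros(S.shape[0]),
                      bounds=bounds, method="highs")
        return res.status == 0  # 0 = solved successfully, i.e. feasible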
Maddali, S; Calvo-Almazan, I; Almer, J; Kenesei, P; Park, J-S; Harder, R; Nashed, Y; Hruszkewycz, S O
2018-03-21
Coherent X-ray photons with energies higher than 50 keV offer new possibilities for imaging nanoscale lattice distortions in bulk crystalline materials using Bragg peak phase retrieval methods. However, the compression of reciprocal space at high energies typically results in poorly resolved fringes on an area detector, rendering the diffraction data unsuitable for the three-dimensional reconstruction of compact crystals. To address this problem, we propose a method by which to recover fine fringe detail in the scattered intensity. This recovery is achieved in two steps: multiple undersampled measurements are made by in-plane sub-pixel motion of the area detector, then this data set is passed to a sparsity-based numerical solver that recovers fringe detail suitable for standard Bragg coherent diffraction imaging (BCDI) reconstruction methods of compact single crystals. The key insight of this paper is that sparsity in a BCDI data set can be enforced by recognising that the signal in the detector, though poorly resolved, is band-limited. This requires fewer in-plane detector translations for complete signal recovery, while adhering to information theory limits. We use simulated BCDI data sets to demonstrate the approach, outline our sparse recovery strategy, and comment on future opportunities.
White matter tracts associated with set-shifting in healthy aging.
Perry, Michele E; McDonald, Carrie R; Hagler, Donald J; Gharapetian, Lusineh; Kuperman, Joshua M; Koyama, Alain K; Dale, Anders M; McEvoy, Linda K
2009-11-01
Attentional set-shifting ability, commonly assessed with the Trail Making Test (TMT), decreases with increasing age in adults. Since set-shifting performance relies on activity in widespread brain regions, deterioration of the white matter tracts that connect these regions may underlie the age-related decrease in performance. We used an automated fiber tracking method to investigate the relationship between white matter integrity in several cortical association tracts and TMT performance in a sample of 24 healthy adults, 21-80 years. Diffusion tensor images were used to compute average fractional anisotropy (FA) for five cortical association tracts, the corpus callosum (CC), and the corticospinal tract (CST), which served as a control. Results showed that advancing age was associated with declines in set-shifting performance and with decreased FA in the CC and in association tracts that connect frontal cortex to more posterior brain regions, including the inferior fronto-occipital fasciculus (IFOF), uncinate fasciculus (UF), and superior longitudinal fasciculus (SLF). Declines in average FA in these tracts, and in average FA of the right inferior longitudinal fasciculus (ILF), were associated with increased time to completion on the set-shifting subtask of the TMT but not with the simple sequencing subtask. FA values in these tracts were strong mediators of the effect of age on set-shifting performance. Automated tractography methods can enhance our understanding of the fiber systems involved in performance of specific cognitive tasks and of the functional consequences of age-related changes in those systems.
Systematic Review of Community-Based Childhood Obesity Prevention Studies
Segal, Jodi; Wu, Yang; Wilson, Renee; Wang, Youfa
2013-01-01
OBJECTIVE: This study systematically reviewed community-based childhood obesity prevention programs in the United States and other high-income countries. METHODS: We searched Medline, Embase, PsychInfo, CINAHL, clinicaltrials.gov, and the Cochrane Library for relevant English-language studies. Studies were eligible if the intervention was primarily implemented in the community setting; had at least 1 year of follow-up after baseline; and compared results from an intervention to a comparison group. Two independent reviewers conducted title scans and abstract reviews and reviewed the full articles to assess eligibility. Each article received a double review for data abstraction. The second reviewer confirmed the first reviewer's data abstraction for completeness and accuracy. RESULTS: Nine community-based studies were included: 5 randomized controlled trials and 4 non-randomized controlled trials. One study was conducted only in the community setting, 3 were conducted in the community and school setting, and 5 were conducted in the community setting in combination with at least 1 other setting such as the home. Desirable changes in BMI or BMI z-score were found in 4 of the 9 studies. Two studies reported significant improvements in behavioral outcomes (1 in physical activity and 1 in vegetable intake). CONCLUSIONS: The strength of evidence is moderate that a combined diet and physical activity intervention conducted in the community with a school component is more effective at preventing obesity or overweight. More research and consistent methods are needed to understand the comparative effectiveness of childhood obesity prevention programs in the community setting. PMID:23753099
2011-01-01
Background Implementing a primary care clinical research study in several countries can make it possible to recruit sufficient patients in a short period of time, allowing important clinical questions to be answered. Large multi-country studies in primary care are unusual and are typically associated with challenges requiring innovative solutions. We conducted a multi-country study and, through this paper, we share reflections on the challenges we faced and some of the solutions we developed, with a special focus on the study set-up, structure and development of Primary Care Networks (PCNs). Method GRACE-01 was a multi-European country, investigator-driven prospective observational study implemented by 14 Primary Care Networks (PCNs) within 13 European countries. General Practitioners (GPs) recruited consecutive patients with an acute cough. GPs completed a case report form (CRF) and the patient completed a daily symptom diary. After study completion, the coordinating team discussed the phases of the study and identified challenges and solutions that they considered might be interesting and helpful to researchers setting up a comparable study. Results The main challenges fell within three domains as follows: i) selecting, setting up and maintaining PCNs; ii) designing local context-appropriate data collection tools and efficient data management systems; and iii) gaining commitment and trust from all involved and maintaining enthusiasm. The main solutions for each domain were: i) appointing key individuals (National Network Facilitator and Coordinator) with clearly defined tasks and involving PCNs early in the development of study materials and procedures; ii) rigorous back translations of all study materials and the use of information systems to closely monitor each PCN's progress; and iii) providing strong central leadership with high-level commitment to the value of the study, frequent multi-method communication, establishing a coherent ethos, celebrating achievements, incorporating social events and prizes within meetings, and providing a framework for exploitation of local data. Conclusions Many challenges associated with multi-country primary care research can be overcome by engendering strong, effective communication, commitment and involvement of all local researchers. The practical solutions identified and the lessons learned in implementing the GRACE-01 study may assist in establishing other international primary care clinical research platforms. Trial registration ClinicalTrials.gov Identifier: NCT00353951 PMID:21794112
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
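The sketch below illustrates only the dominance-filtering step implied above: candidate solutions (for example, routes collected from k-best single-objective searches) are reduced to their non-dominated subset. The theoretical conditions on k that make the resulting set the complete exact Pareto front, and the ripple-spreading algorithm itself, are not reproduced here; the objective values in the example are invented for illustration.

    def pareto_filter(candidates):
        """Keep the non-dominated members of a candidate set (minimization).

        candidates: list of (objective_tuple, solution) pairs, e.g. gathered from
        k-best single-objective searches over a route network.
        """
        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        front = []
        for obj, sol in candidates:
            if any(dominates(other, obj) for other, _ in candidates if other != obj):
                continue  # dominated by some other candidate
            front.append((obj, sol))
        return front

    # e.g. routes scored by (travel_time, risk); route "C" is dominated and dropped
    print(pareto_filter([((3, 5), "A"), ((4, 4), "B"), ((5, 5), "C"), ((2, 7), "D")]))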
Attitudes of Malaysian general hospital staff towards patients with mental illness and diabetes
2011-01-01
Background The context of the study is the increased assessment and treatment of persons with mental illness in general hospital settings by general health staff, as the move away from mental hospitals gathers pace in low- and middle-income countries. The purpose of the study was to examine whether general attitudes of hospital staff towards persons with mental illness, and extent of mental health training and clinical experience, are associated with different attitudes and behaviours towards a patient with mental illness than towards a patient with a general health problem, diabetes. Methods General hospital health professionals in Malaysia were randomly allocated one of two vignettes, one describing a patient with mental illness and the other a patient with diabetes, and invited to complete a questionnaire examining attitudes and health care practices in relation to the case. The questionnaires completed by respondents included questions on demographics, training in mental health, exposure in clinical practice to people with mental illness, attitudes and expected health care behaviour towards the patient in the vignette, and a general questionnaire exploring negative attitudes towards people with mental illness. Questionnaires with complete responses were received from 654 study participants. Results Stigmatising attitudes towards persons with mental illness were common. Those responding to the mental illness vignette (N = 356) gave significantly lower ratings on care and support and higher ratings on avoidance and negative stereotype expectations compared with those responding to the diabetes vignette (N = 298). Conclusions Results support the view that, in the Malaysian setting, patients with mental illness may receive differential care from general hospital staff and that general stigmatising attitudes among professionals may influence their care practices. More direct measurement of clinician behaviours than is possible with a survey method is required to support these conclusions. PMID:21569613
Li, Yumei; Xiang, Yang; Xu, Chao; Shen, Hui; Deng, Hongwen
2018-01-15
The development of next-generation sequencing technologies has facilitated the identification of rare variants. Family-based design is commonly used to effectively control for population admixture and substructure, which is more prominent for rare variants. Case-parents studies, as typical strategies in family-based design, are widely used in rare variant-disease association analysis. Current methods in case-parents studies are based on complete case-parents data; however, parental genotypes may be missing in case-parents trios, and removing these data may lead to a loss in statistical power. The present study focuses on testing for rare variant-disease association in case-parents studies while allowing for missing parental genotypes. In this report, we extended the collapsing method for rare variant association analysis in case-parents studies to allow for missing parental genotypes, and investigated the performance of two methods based on the difference of genotypes between affected offspring and their corresponding "complements" in case-parent trios and on the TDT framework. Using simulations, we showed that, compared with methods using only complete case-parents data, the proposed strategy allowing for missing parental genotypes, or even adding unrelated affected individuals, can greatly improve the statistical power while remaining unaffected by population stratification. We conclude that adding case-parents data with missing parental genotypes to the complete case-parents data set can greatly improve the power of our strategy for rare variant-disease association.
Sun, Zongyang; Tee, Boon Ching; Kennedy, Kelly S.; Kennedy, Patrick M.; Kim, Do-Gyoon; Mallery, Susan R.; Fields, Henry W.
2013-01-01
Purpose Bone regeneration through distraction osteogenesis (DO) is promising but remarkably slow. To accelerate it, autologous mesenchymal stem cells have been directly injected to the distraction site in a few recent studies. Compared to direct injection, a scaffold-based method can provide earlier cell delivery with potentially better controlled cell distribution and retention. This pilot project investigated a scaffold-based cell-delivery approach in a porcine mandibular DO model. Materials and Methods Eleven adolescent domestic pigs were used for two major sets of studies. The in-vitro set established methodologies to: aspirate bone marrow from the tibia; isolate, characterize and expand bone marrow-derived mesenchymal stem cells (BM-MSCs); enhance BM-MSC osteogenic differentiation using FGF-2; and confirm cell integration with a gelatin-based Gelfoam scaffold. The in-vivo set transplanted autologous stem cells into the mandibular distraction sites using Gelfoam scaffolds; completed a standard DO-course and assessed bone regeneration by macroscopic, radiographic and histological methods. Repeated-measure ANOVAs and t-tests were used for statistical analyses. Results From aspirated bone marrow, multi-potent, heterogeneous BM-MSCs purified from hematopoietic stem cell contamination were obtained. FGF-2 significantly enhanced pig BM-MSC osteogenic differentiation and proliferation, with 5 ng/ml determined as the optimal dosage. Pig BM-MSCs integrated readily with Gelfoam and maintained viability and proliferative ability. After integration with Gelfoam scaffolds, 2.4–5.8×107 autologous BM-MSCs (undifferentiated or differentiated) were transplanted to each experimental DO site. Among 8 evaluable DO sites included in the final analyses, the experimental DO sites demonstrated less interfragmentary mobility, more advanced gap obliteration, higher mineral content and faster mineral apposition than the control sites, and all transplanted scaffolds were completely degraded. Conclusion It is technically feasible and biologically sound to deliver autologous BM-MSCs to the distraction site immediately after osteotomy using a Gelfoam scaffold to enhance mandibular DO. PMID:24040314
Standardization of Analysis Sets for Reporting Results from ADNI MRI Data
Wyman, Bradley T.; Harvey, Danielle J.; Crawford, Karen; Bernstein, Matt A.; Carmichael, Owen; Cole, Patricia E.; Crane, Paul; DeCarli, Charles; Fox, Nick C.; Gunter, Jeffrey L.; Hill, Derek; Killiany, Ronald J.; Pachai, Chahin; Schwarz, Adam J.; Schuff, Norbert; Senjem, Matthew L.; Suhy, Joyce; Thompson, Paul M.; Weiner, Michael; Jack, Clifford R.
2013-01-01
The ADNI 3D T1-weighted MRI acquisitions provide a rich dataset for developing and testing analysis techniques for extracting structural endpoints. To promote greater rigor in analysis and meaningful comparison of different algorithms, the ADNI MRI Core has created standardized analysis sets of data comprising scans that met minimum quality control requirements. We encourage researchers to test and report their techniques against these data. Standard analysis sets of volumetric scans from ADNI-1 have been created, comprising: screening visits, 1 year completers (subjects who all have screening, 6 and 12 month scans), two year annual completers (screening, 1, and 2 year scans), two year completers (screening, 6 months, 1 year, 18 months (MCI only) and 2 years) and complete visits (screening, 6 months, 1 year, 18 months (MCI only), 2, and 3 year (normal and MCI only) scans). As the ADNI-GO/ADNI-2 data becomes available, updated standard analysis sets will be posted regularly. PMID:23110865
Baillie, Lesley; Thomas, Nicola
2018-01-01
Person-centred care is internationally recognised as best practice for the care of people with dementia. Personal information documents for people with dementia are proposed as a way to support person-centred care in healthcare settings. However, there is little research about how they are used in practice. The aim of this study was to analyse healthcare staff's perceptions and experiences of using personal information documents, mainly Alzheimer's Society's 'This is me', for people with dementia in healthcare settings. The method comprised a secondary thematic analysis of data from a qualitative study of how a dementia awareness initiative affected care for people with dementia in one healthcare organisation. The data were collected through 12 focus groups (n = 58 participants) and 1 individual interview, conducted with a range of healthcare staff, both clinical and non-clinical. Four themes are presented: understanding the rationale for personal information documents; completing personal information documents; location for personal information documents and transfer between settings; impact of personal information documents in practice. The findings illuminated how healthcare staff use personal information documents in practice in ways that support person-centred care. Practical issues about the use of personal information documents were revealed and these may affect the optimal use of the documents in practice. The study indicated the need to complete personal information documents at an early stage following diagnosis of dementia, and the importance of embedding their use across care settings, to support communication and integrated care.
NASA Astrophysics Data System (ADS)
Ben Amor, Nadia; Hoyau, Sophie; Maynau, Daniel; Brenner, Valérie
2018-05-01
A benchmark set of relevant geometries of a model protein, the N-acetylphenylalanylamide, is presented to assess the validity of the approximate second-order coupled cluster (CC2) method in studying low-lying excited states of such bio-relevant systems. The studies comprise investigations of basis-set dependence as well as comparison with two multireference methods, the multistate complete active space 2nd order perturbation theory (MS-CASPT2) and the multireference difference dedicated configuration interaction (DDCI) methods. First of all, the applicability and the accuracy of the quasi-linear multireference difference dedicated configuration interaction method have been demonstrated on bio-relevant systems by comparison with the results obtained by the standard MS-CASPT2. Second, both the nature and excitation energy of the first low-lying excited state obtained at the CC2 level are very close to the Davidson corrected CAS+DDCI ones, the mean absolute deviation on the excitation energy being equal to 0.1 eV with a maximum of less than 0.2 eV. Finally, for the following low-lying excited states, if the nature is always well reproduced at the CC2 level, the differences on excitation energies become more important and can depend on the geometry.
Transient analysis mode participation for modal survey target mode selection using MSC/NASTRAN DMAP
NASA Technical Reports Server (NTRS)
Barnett, Alan R.; Ibrahim, Omar M.; Sullivan, Timothy L.; Goodnight, Thomas W.
1994-01-01
Many methods have been developed to aid analysts in identifying component modes which contribute significantly to component responses. These modes, typically targeted for dynamic model correlation via a modal survey, are known as target modes. Most methods used to identify target modes are based on component global dynamic behavior. It is sometimes unclear if these methods identify all modes contributing to responses important to the analyst. These responses are usually those in areas of hardware design concerns. One method used to check the completeness of target mode sets and identify modes contributing significantly to important component responses is mode participation. With this method, the participation of component modes in dynamic responses is quantified. Those modes which have high participation are likely modal survey target modes. Mode participation is most beneficial when it is used with responses from analyses simulating actual flight events. For spacecraft, these responses are generated via a structural dynamic coupled loads analysis. Using MSC/NASTRAN DMAP, a method has been developed for calculating mode participation based on transient coupled loads analysis results. The algorithm has been implemented to be compatible with an existing coupled loads methodology and has been used successfully to develop a set of modal survey target modes.
1986-09-18
physical and administrative security techniques. These methods are, on the whole, at an early [stage]. As in many other areas... {(o, t) | o member-of O, t member-of T, and o maps-completely-to t} (objects: data; files; pgms; subjects; I/O devices); S := set of all subjects (processes; pgms...)
Algorithms for Zonal Methods and Development of Three Dimensional Mesh Generation Procedures.
1984-02-01
a more complete set of equations is used, but their effect is imposed by means of a right-hand-side forcing function, not by means of a zonal boundary... Modifications of flow-simulation algorithms are discussed. The explicit finite-difference code of Magnus and... Computational tests in two dimensions... used to simplify the task of grid generation and achieve computational efficiency without an adverse effect on flow-field algorithms. More recently, ...
Development of an improved method of consolidating fatigue life data
NASA Technical Reports Server (NTRS)
Leis, B. N.; Sampath, S. G.
1978-01-01
A fatigue data consolidation model that incorporates recent advances in life prediction methodology was developed. A combined analytic and experimental study of fatigue of notched 2024-T3 aluminum alloy under constant amplitude loading was carried out. Because few systematic and complete data sets for 2024-T3 were available, the program generated data for fatigue crack initiation and separation failure for both zero and nonzero mean stresses. Consolidations of these data are presented.
Diagonalizing Tensor Covariants, Light-Cone Commutators, and Sum Rules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo, C. Y.
We derive fixed-mass sum rules for virtual Compton scattering in the forward direction. We use the methods of both Dicus, Jackiw, and Teplitz (for the absorptive parts) and Heimann, Hey, and Mandula (for the real parts). We find a set of tensor covariants such that the corresponding scalar amplitudes are proportional to simple t-channel parity-conserving helicity amplitudes. We give a relatively complete discussion of the convergence of the sum rules in a Regge model.
ERIC Educational Resources Information Center
Robertson, Clare; Ramsay, Craig; Gurung, Tara; Mowatt, Graham; Pickard, Robert; Sharma, Pawana
2014-01-01
We describe our experience of using a modified version of the Cochrane risk of bias (RoB) tool for randomised and non-randomised comparative studies. Objectives: (1) To assess time to complete RoB assessment; (2) To assess inter-rater agreement; and (3) To explore the association between RoB and treatment effect size. Methods: Cochrane risk of…
Current Methods for Evaluation of Physical Security System Effectiveness.
1981-05-01
It also helps the user modify a data set before further processing. (c) Safeguards Engineering and Analysis Data Base (SEAD)--To complete SAFE's... graphic display software in addition to a Fortran compiler, and up to about 35,000 words of storage. For a fairly complex problem, a single run through... operational software.
Mechanics Methodology for Textile Preform Composite Materials
NASA Technical Reports Server (NTRS)
Poe, Clarence C., Jr.
1996-01-01
NASA and its contractors have completed a program to develop a basic mechanics underpinning for textile composites. Three major deliverables were produced by the program: 1. A set of test methods for measuring material properties and design allowables; 2. Mechanics models to predict the effects of the fiber preform architecture and constituent properties on engineering moduli, strength, damage resistance, and fatigue life; and 3. An electronic data base of coupon type test data. This report describes these three deliverables.
Yuen, Po Ki; DeRosa, Michael E
2011-10-07
This article presents a simple, low-cost method of fabrication and the applications of flexible polystyrene microfluidic devices with three-dimensional (3D) interconnected microporous walls based on treatment using a solvent/non-solvent mixture at room temperature. The complete fabrication process from device design concept to working device can be completed in less than an hour in a regular laboratory setting, without the need for expensive equipment. Microfluidic devices were used to demonstrate gas generation and absorption reactions by acidifying water with carbon dioxide (CO2) gas. By selectively treating the microporous structures with oxygen plasma, acidification of water by acetic acid (distilled white vinegar) perfusion was also demonstrated with the same device design.
NASA Astrophysics Data System (ADS)
Schlueter-Kuck, Kristy; Dabiri, John
2017-11-01
In recent years, there has been a proliferation of techniques that aim to characterize fluid flow kinematics on the basis of Lagrangian trajectories of collections of tracer particles. Most of these techniques depend on presence of tracer particles that are initially closely-spaced, in order to compute local gradients of their trajectories. In many applications, the requirement of close tracer spacing cannot be satisfied, especially when the tracers are naturally occurring and their distribution is dictated by the underlying flow. Moreover, current methods often focus on determination of the boundaries of coherent sets, whereas in practice it is often valuable to identify the complete set of trajectories that are coherent with an individual trajectory of interest. We extend the concept of Coherent Structure Coloring to achieve identification of the coherent set associated with individual Lagrangian trajectories. This algorithm is proven successful in identifying coherent structures of varying complexities in canonical unsteady flows. Importantly, although the method is demonstrated here in the context of fluid flow kinematics, the generality of the approach allows for its potential application to other unsupervised clustering problems in dynamical systems. This work was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.
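A minimal sketch in the spirit of the approach described above (not the authors' exact formulation): pairwise kinematic dissimilarities between sparse trajectories are assembled into a graph, and an eigenvector of the resulting Laplacian provides a "coloring" in which trajectories that move coherently with one another take similar values. The dissimilarity measure and eigenvector choice below are illustrative assumptions.

    import numpy as np

    def trajectory_spectral_coloring(X):
        """Spectral "coloring" of Lagrangian trajectories (illustrative sketch).

        X: array of shape (n_traj, n_time, dim). For each trajectory pair the
        temporal variability of their separation, normalised by its mean, is
        used as a dissimilarity; coherent pairs keep a nearly constant
        separation and so receive small entries.
        """
        n = X.shape[0]
        A = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(X[i] - X[j], axis=1)   # separation over time
                A[i, j] = A[j, i] = np.std(d) / np.mean(d)
        D = np.diag(A.sum(axis=1))
        L = D - A
        # generalized eigenproblem L u = lam D u; keep the leading eigenvector
        eigvals, eigvecs = np.linalg.eig(np.linalg.solve(D, L))
        return np.real(eigvecs[:, np.argmax(np.real(eigvals))])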
[The Development of Quality Indicators for Management of Patients with ADHD in Social Paediatrics].
Skrundz, M; Borusiak, P; Hameister, K A; Geraedts, M
2015-12-01
Attention deficit/hyperactivity disorder (ADHD) with an estimated prevalence of 5% and its increased risk for comorbidities is of significant relevance for the health care system and is as well of socio-political significance. There is a lack of established methods for the evaluation of the diagnostic and therapeutic treatment of the patients. In this study, we have developed a set of evidence- and consensus-based meaningful indicators for the treatment of children with ADHD. Following a thorough examination of the literature and published Guidelines, a first set of 90 quality indicators was created after redundancy reduction and addition of newly developed indicators. The further development of the indicator set was based on a modified version of the 2-step RAND/UCLA expert evaluation method. After assessment in 2 rounds of ratings, a set of 39 homogeneously positively rated indicators was established. 28 indicators apply to the quality of the diagnostic and therapeutic process, 4 to structural conditions and 3 rely on outcome. This is the first study covering the aspect of quality measurement in children with developmental disorders, especially ADHD. For the next step a pilot evaluation is necessary to complete the evaluation of the quality indicators. © Georg Thieme Verlag KG Stuttgart · New York.
Psychogios, Nikolaos; Hau, David D.; Peng, Jun; Guo, An Chi; Mandal, Rupasri; Bouatra, Souhaila; Sinelnikov, Igor; Krishnamurthy, Ramanarayan; Eisner, Roman; Gautam, Bijaya; Young, Nelson; Xia, Jianguo; Knox, Craig; Dong, Edison; Huang, Paul; Hollander, Zsuzsanna; Pedersen, Theresa L.; Smith, Steven R.; Bamforth, Fiona; Greiner, Russ; McManus, Bruce; Newman, John W.; Goodfriend, Theodore; Wishart, David S.
2011-01-01
Continuing improvements in analytical technology, along with an increased interest in performing comprehensive, quantitative metabolic profiling, are leading to increasing pressure within the metabolomics community to develop centralized metabolite reference resources for certain clinically important biofluids, such as cerebrospinal fluid, urine and blood. As part of an ongoing effort to systematically characterize the human metabolome through the Human Metabolome Project, we have undertaken the task of characterizing the human serum metabolome. In doing so, we have combined targeted and non-targeted NMR, GC-MS and LC-MS methods with computer-aided literature mining to identify and quantify a comprehensive, if not absolutely complete, set of metabolites commonly detected and quantified (with today's technology) in the human serum metabolome. Our use of multiple metabolomics platforms and technologies allowed us to substantially enhance the level of metabolome coverage while critically assessing the relative strengths and weaknesses of these platforms or technologies. Tables containing the complete set of 4229 confirmed and highly probable human serum compounds, their concentrations, related literature references and links to their known disease associations are freely available at http://www.serummetabolome.ca. PMID:21359215
Bridging meso- and microscopic anisotropic unilateral damage formulations for microcracked solids
NASA Astrophysics Data System (ADS)
Zhu, Qi-Zhi; Yuan, Shuang-Shuang; Shao, Jian-fu
2017-04-01
A mathematically consistent and unified description of induced anisotropy and unilateral effects constitutes one of the central tasks in the continuum damage theories developed so far. This paper aims at bridging constitutive damage formulations on meso- and micro-scales with an emphasis on a complete mesoscopic determination of material effective properties for microcracked solids. The key is to introduce a new set of invariants in terms of the strain tensor and the fabric tensor by making use of Walpole's tensorial base. This invariant set proves to be equivalent to the classical one, while greatly simplifying high-order orientation-dependent tensor manipulations. When limited to the case of parallel microcracks, potential relations between ten combination coefficients are established by applying continuity conditions. It is found that the dilute approximation with penny-shaped microcracks is a particular case of the present one. By introducing an effective strain effect, interactions between microcracks are taken into account, in comparison with the Mori-Tanaka method as well as the Ponte-Castaneda and Willis scheme. For completeness, macroscopic formulations with high-order damage variables are also discussed.
Taylor, Sandra L; Ruhaak, L Renee; Kelly, Karen; Weiss, Robert H; Kim, Kyoungmi
2017-03-01
With expanded access to, and decreased costs of, mass spectrometry, investigators are collecting and analyzing multiple biological matrices from the same subject such as serum, plasma, tissue and urine to enhance biomarker discoveries, understanding of disease processes and identification of therapeutic targets. Commonly, each biological matrix is analyzed separately, but multivariate methods such as MANOVAs that combine information from multiple biological matrices are potentially more powerful. However, mass spectrometric data typically contain large amounts of missing values, and imputation is often used to create complete data sets for analysis. The effects of imputation on multiple biological matrix analyses have not been studied. We investigated the effects of seven imputation methods (half minimum substitution, mean substitution, k-nearest neighbors, local least squares regression, Bayesian principal components analysis, singular value decomposition and random forest), on the within-subject correlation of compounds between biological matrices and its consequences on MANOVA results. Through analysis of three real omics data sets and simulation studies, we found the amount of missing data and imputation method to substantially change the between-matrix correlation structure. The magnitude of the correlations was generally reduced in imputed data sets, and this effect increased with the amount of missing data. Significant results from MANOVA testing also were substantially affected. In particular, the number of false positives increased with the level of missing data for all imputation methods. No one imputation method was universally the best, but the simple substitution methods (Half Minimum and Mean) consistently performed poorly. © The Author 2016. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
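For concreteness, the two simple substitution schemes named above (which performed consistently poorly in the study) look roughly like the following; the samples-by-compounds layout and the NaN convention for non-detects are assumptions for illustration.

    import numpy as np

    def half_minimum_impute(X):
        """Replace missing values with half the per-compound minimum (sketch).

        X: (samples x compounds) matrix with np.nan marking missing values,
        the usual layout for a single biological matrix.
        """
        X = X.copy()
        col_min = np.nanmin(X, axis=0)
        for j in range(X.shape[1]):
            mask = np.isnan(X[:, j])
            X[mask, j] = 0.5 * col_min[j]
        return X

    def mean_impute(X):
        """Replace missing values with the per-compound mean (sketch)."""
        X = X.copy()
        col_mean = np.nanmean(X, axis=0)
        idx = np.where(np.isnan(X))
        X[idx] = np.take(col_mean, idx[1])
        return X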
NASA Astrophysics Data System (ADS)
Goerigk, Lars; Grimme, Stefan
2010-05-01
We present an extension of our previously published benchmark set for low-lying valence transitions of large organic dyes [L. Goerigk et al., Phys. Chem. Chem. Phys. 11, 4611 (2009)]. The new set comprises 12 molecules in total, including two charged species and one with a clear charge-transfer transition. Our previous study on TD-DFT methods is repeated for the new test set with a larger basis set. Additionally, we want to shed light on different spin-scaled variants of the configuration interaction singles with perturbative doubles correction [CIS(D)] and the approximate coupled cluster singles and doubles method (CC2). Particularly for CIS(D), we want to clarify which of the proposed versions can be recommended. Our results indicate that an unpublished SCS-CIS(D) variant, which is implemented in the TURBOMOLE program package, shows worse results than the original CIS(D) method, while other modified versions perform better. An SCS-CIS(D) version with a parameterization that has already been used in a recent application of ours [L. Goerigk and S. Grimme, ChemPhysChem 9, 2467 (2008)] yields the best results. Another SCS-CIS(D) version and the SOS-CIS(D) method [Y. M. Rhee and M. Head-Gordon, J. Phys. Chem. A 111, 5314 (2007)] perform very similarly, though. For the electronic transitions considered herein, there is no improvement observed when going from the original CC2 to the SCS-CC2 method, but further adjustment of the latter seems to be beneficial. Double-hybrid density functionals are among the best methods tested here. Particularly B2GP-PLYP provides uniformly good results for the complete set and is considered to be close to chemical accuracy within an ab initio theory of color. For conventional hybrid functionals, a Fock-exchange mixing parameter of about 0.4 seems to be optimum in TD-DFT treatments of large chromophores. A range-separated functional such as CAM-B3LYP also seems promising.
Smith, A Russell; Cavanaugh, Cathy; Moore, W Allen
2011-06-21
Educators in allied health and medical education programs utilize instructional multimedia to facilitate psychomotor skill acquisition in students. This study examines the effects of instructional multimedia on student and instructor attitudes and student study behavior. Subjects consisted of 45 student physical therapists from two universities. Two skill sets were taught during the course of the study. Skill set one consisted of knee examination techniques and skill set two consisted of ankle/foot examination techniques. For each skill set, subjects were randomly assigned to either a control group or an experimental group. The control group was taught with live demonstration of the examination skills, while the experimental group was taught using multimedia. A cross-over design was utilized so that subjects in the control group for skill set one served as the experimental group for skill set two, and vice versa. During the last week of the study, students and instructors completed written questionnaires to assess attitude toward teaching methods, and students answered questions regarding study behavior. There were no differences between the two instructional groups in attitudes, but students in the experimental group for skill set two reported greater study time alone compared to other groups. Multimedia provides an efficient method to teach psychomotor skills to students entering the health professions. Both students and instructors identified advantages and disadvantages for both instructional techniques. Responses relative to instructional multimedia emphasized efficiency, processing level, autonomy, and detail of instruction compared to live presentation. Students and instructors identified conflicting views of instructional detail and control of the content.
In Pursuit of Change: Youth Response to Intensive Goal Setting Embedded in a Serious Video Game
Thompson, Debbe; Baranowski, Tom; Buday, Richard; Baranowski, Janice; Juliano, Melissa; Frazior, McKee; Wilsdon, Jon; Jago, Russell
2007-01-01
Background Type 2 diabetes has increased in prevalence among youth, paralleling the increase in pediatric obesity. Helping youth achieve energy balance by changing diet and physical activity behaviors should decrease the risk for type 2 diabetes and obesity. Goal setting and goal review are critical components of behavior change. Theory-informed video games that emphasize development and refinement of goal setting and goal review skills provide a method for achieving energy balance in an informative, entertaining format. This article reports alpha-testing results of early versions of theory-informed goal setting and reviews components of two diabetes and obesity prevention video games for preadolescents. Method Two episodes each of two video games were alpha tested with 9- to 11-year-old youth from multiple ethnic groups. Alpha testing included observed game play followed by a scripted interview. The staff was trained in observation and interview techniques prior to data collection. Results Although some difficulties were encountered, alpha testers generally understood goal setting and review components and comprehended they were setting personal goals. Although goal setting and review involved multiple steps, youth were generally able to complete them quickly, with minimal difficulty. Few technical issues arose; however, several usability and comprehension problems were identified. Conclusions Theory-informed video games may be an effective medium for promoting youth diabetes and obesity prevention. Alpha testing helps identify problems likely to have a negative effect on functionality, usability, and comprehension during development, thereby providing an opportunity to correct these issues prior to final production. PMID:19885165
NASA Astrophysics Data System (ADS)
Hill, J. Grant; Peterson, Kirk A.; Knizia, Gerald; Werner, Hans-Joachim
2009-11-01
Accurate extrapolation to the complete basis set (CBS) limit of valence correlation energies calculated with explicitly correlated MP2-F12 and CCSD(T)-F12b methods has been investigated using a Schwenke-style approach for molecules containing both first and second row atoms. Extrapolation coefficients that are optimal for molecular systems containing first row elements differ from those optimized for second row analogs; hence, values optimized for a combined set of first and second row systems are also presented. The new coefficients are shown to produce excellent results in both Schwenke-style and equivalent power-law-based two-point CBS extrapolations, with the MP2-F12/cc-pV(D,T)Z-F12 extrapolations producing an average error of just 0.17 mEh and a maximum error of 0.49 mEh for a collection of 23 small molecules. The use of larger basis sets, i.e., cc-pV(T,Q)Z-F12 and aug-cc-pV(Q,5)Z, in extrapolations of the MP2-F12 correlation energy leads to average errors that are smaller than the degree of confidence in the reference data (∼0.1 mEh). The latter were obtained through use of very large basis sets in MP2-F12 calculations on small molecules containing both first and second row elements. CBS limits obtained from optimized coefficients for conventional MP2 are only comparable to the accuracy of the MP2-F12/cc-pV(D,T)Z-F12 extrapolation when the aug-cc-pV(5+d)Z and aug-cc-pV(6+d)Z basis sets are used. The CCSD(T)-F12b correlation energy is extrapolated as two distinct parts: CCSD-F12b and (T). While the CCSD-F12b extrapolations with smaller basis sets are statistically less accurate than those of the MP2-F12 correlation energies, this is presumably due to the slower basis set convergence of the CCSD-F12b method compared to MP2-F12. The use of larger basis sets in the CCSD-F12b extrapolations produces correlation energies with accuracies exceeding the confidence in the reference data (also obtained in large basis set F12 calculations). It is demonstrated that the use of the 3C(D) Ansatz is preferred for MP2-F12 CBS extrapolations. Optimal values of the geminal Slater exponent are presented for the diagonal, fixed amplitude Ansatz in MP2-F12 calculations, and these are also recommended for CCSD-F12b calculations.
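For orientation, the sketch below shows the arithmetic of the two flavours of two-point extrapolation discussed above: the familiar inverse-power form E(n) = E_CBS + A/n^p and the Schwenke-style linear form E_CBS = (E_large - E_small)·F + E_small. The coefficient F and the correlation energies in the example are placeholders, not the optimized values reported in the paper.

    def cbs_two_point_power(e_small, e_large, n_small, n_large, p=3.0):
        """Power-law two-point extrapolation assuming E(n) = E_CBS + A / n**p."""
        return (e_large * n_large**p - e_small * n_small**p) / (n_large**p - n_small**p)

    def cbs_schwenke(e_small, e_large, F):
        """Schwenke-style extrapolation E_CBS = (E_large - E_small) * F + E_small.

        F is a method/basis-pair specific coefficient; the paper optimizes
        separate values for first-row, second-row and combined sets, which are
        not reproduced here.
        """
        return (e_large - e_small) * F + e_small

    # Correlation energies in hartree for two basis cardinal numbers (made-up numbers):
    print(cbs_two_point_power(-0.3500, -0.3650, 2, 3))
    print(cbs_schwenke(-0.3500, -0.3650, F=1.3))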
Neural net applied to anthropological material: a methodical study on the human nasal skeleton.
Prescher, Andreas; Meyers, Anne; Gerf von Keyserlingk, Diedrich
2005-07-01
A new information processing method, an artificial neural net, was applied to characterise the variability of anthropological features of the human nasal skeleton. The aim was to find different types of nasal skeletons. A neural net with 15*15 nodes was trained by 17 standard anthropological parameters taken from 184 skulls of the Aachen collection. The trained neural net delivers its classification in a two-dimensional map. Different types of noses were locally separated within the map. Rare and frequent types may be distinguished after one passage of the complete collection through the net. Statistical descriptive analysis, hierarchical cluster analysis, and discriminant analysis were applied to the same data set. These parallel applications allowed comparison of the new approach to the more traditional ones. In general the classification by the neural net is in correspondence with cluster analysis and discriminant analysis. However, it goes beyond these classifications because of the possibility of differentiating the types in multi-dimensional dependencies. Furthermore, places in the map are kept blank for intermediate forms, which may be theoretically expected, but were not included in the training set. In conclusion, the application of a neural network is a suitable method for investigating large collections of biological material. The gained classification may be helpful in anatomy and anthropology as well as in forensic medicine. It may be used to characterise the peculiarity of a whole set as well as to find particular cases within the set.
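The abstract describes a 15*15-node net that places each skull on a two-dimensional map, which is the characteristic behaviour of a self-organizing map; a minimal Kohonen-style training loop is sketched below as a generic illustration, since the paper's exact architecture and training schedule are not given here.

    import numpy as np

    def train_som(data, rows=15, cols=15, epochs=200, lr0=0.5, sigma0=None, seed=0):
        """Minimal self-organizing map training loop (illustrative).

        data: (n_samples, n_features), e.g. 17 standardized anthropological
        parameters per skull.
        """
        rng = np.random.default_rng(seed)
        if sigma0 is None:
            sigma0 = max(rows, cols) / 2.0
        weights = rng.normal(size=(rows, cols, data.shape[1]))
        grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)
            sigma = sigma0 * (1.0 - epoch / epochs) + 1e-3
            for x in rng.permutation(data):
                # best matching unit
                dists = np.linalg.norm(weights - x, axis=2)
                bmu = np.unravel_index(np.argmin(dists), dists.shape)
                # Gaussian neighbourhood update around the winning node
                grid_dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=2)
                h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))
                weights += lr * h[..., None] * (x - weights)
        return weights

    def map_position(weights, x):
        """Return the (row, col) map node to which a sample is assigned."""
        dists = np.linalg.norm(weights - x, axis=2)
        return np.unravel_index(np.argmin(dists), dists.shape)

Rare and frequent types would then appear as sparsely and densely populated map nodes, respectively, in the spirit of the classification described above.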
Arrays of probes for positional sequencing by hybridization
Cantor, Charles R [Boston, MA; Prezetakiewiczr, Marek [East Boston, MA; Smith, Cassandra L [Boston, MA; Sano, Takeshi [Waltham, MA
2008-01-15
This invention is directed to methods and reagents useful for sequencing nucleic acid targets utilizing sequencing by hybridization technology comprising probes, arrays of probes and methods whereby sequence information is obtained rapidly and efficiently in discrete packages. That information can be used for the detection, identification, purification and complete or partial sequencing of a particular target nucleic acid. When coupled with a ligation step, these methods can be performed under a single set of hybridization conditions. The invention also relates to the replication of probe arrays and methods for making and replicating arrays of probes which are useful for the large scale manufacture of diagnostic aids used to screen biological samples for specific target sequences. Arrays created using PCR technology may comprise probes with 5'- and/or 3'-overhangs.
Time-reversal symmetric resolution of unity without background integrals in open quantum systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatano, Naomichi, E-mail: hatano@iis.u-tokyo.ac.jp; Ordonez, Gonzalo, E-mail: gordonez@butler.edu
2014-12-15
We present a new complete set of states for a class of open quantum systems, to be used in expansion of the Green's function and the time-evolution operator. A remarkable feature of the complete set is that it observes time-reversal symmetry in the sense that it contains decaying states (resonant states) and growing states (anti-resonant states) in parallel. We can thereby pinpoint the occurrence of the breaking of time-reversal symmetry at the choice of whether we solve the Schrödinger equation as an initial-condition problem or a terminal-condition problem. Another feature of the complete set is that, in the subspace of the central scattering area of the system, it consists of contributions of all states with point spectra but does not contain any background integrals. In computing the time evolution, we can clearly see which point spectrum contributes which time dependence. In the whole infinite state space, the complete set does contain an integral, but it is over unperturbed eigenstates of the environmental area of the system and hence can be calculated analytically. We demonstrate the usefulness of the complete set by computing explicitly the survival probability and the escaping probability as well as the dynamics of wave packets. The origin of each term of the matrix elements is clear in our formulation, in particular the exponential decays due to the resonance poles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, So; Yanai, Takeshi; De Jong, Wibe A.
Coupled-cluster methods including through and up to the connected single, double, triple, and quadruple substitutions (CCSD, CCSDT, and CCSDTQ) have been automatically derived and implemented for sequential and parallel executions for use in conjunction with a one-component third-order Douglas-Kroll (DK3) approximation for relativistic corrections. A combination of the converging electron-correlation methods, the accurate relativistic reference wave functions, and the use of systematic basis sets tailored to the relativistic approximation has been shown to predict the experimental singlet-triplet separations within 0.02 eV (0.5 kcal/mol) for five triatomic hydrides (CH2, NH2+, SiH2, PH2+, and AsH2+), the experimental bond lengths within 0.002 angstroms, rotational constants within 0.02 cm-1, vibration-rotation constants within 0.01 cm-1, centrifugal distortion constants within 2 %, harmonic vibration frequencies within 9 cm-1 (0.4 %), anharmonic vibrational constants within 2 cm-1, and dissociation energies within 0.03 eV (0.8 kcal/mol) for twenty diatomic hydrides (BH, CH, NH, OH, FH, AlH, SiH, PH, SH, ClH, GaH, GeH, AsH, SeH, BrH, InH, SnH, SbH, TeH, and IH) containing main-group elements across the second through fifth periods of the periodic table. In these calculations, spin-orbit effects on dissociation energies, which were assumed to be additive, were estimated from the measured spin-orbit coupling constants of atoms and diatomic molecules, and an electronic energy in the complete-basis-set, complete-electron-correlation limit has been extrapolated by the formula which was in turn based on the exponential-Gaussian extrapolation formula of the basis set dependence.
Computing Smallest Intervention Strategies for Multiple Metabolic Networks in a Boolean Model
Lu, Wei; Song, Jiangning; Akutsu, Tatsuya
2015-01-01
This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbations are advanced in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available. However, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, known respectively as harmful and beneficial bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online. PMID:25684199
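To make the Boolean-model version of the problem statement concrete, the toy sketch below tests producibility by forward closure and then brute-forces the smallest knockout that blocks the target in one network while keeping it producible in the other. This enumeration is only an illustration of what MKMN asks for; the paper's ILP formulations, which scale far better, are not reproduced here, and the shared reaction indexing is an assumption of the toy setup.

    from itertools import combinations

    def producible(reactions, sources, target, knockout=frozenset()):
        """Forward-closure producibility test in a Boolean metabolic model.

        reactions: list of (substrates, products) pairs (sets of compound names).
        A reaction can fire once all of its substrates are available.
        """
        available = set(sources)
        changed = True
        while changed:
            changed = False
            for idx, (subs, prods) in enumerate(reactions):
                if idx in knockout:
                    continue
                if subs <= available and not prods <= available:
                    available |= prods
                    changed = True
        return target in available

    def min_knockout_two_networks(r1, r2, sources, target, shared_ids, max_size=4):
        """Smallest knockout (over reactions indexed identically in both toy
        networks) that blocks the target in network 1 while keeping it
        producible in network 2, found by brute-force enumeration."""
        for k in range(max_size + 1):
            for ko in combinations(shared_ids, k):
                ko = frozenset(ko)
                if not producible(r1, sources, target, ko) and producible(r2, sources, target, ko):
                    return ko
        return None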
NASA Astrophysics Data System (ADS)
Schmid, David; Spekkens, Robert W.; Wolfe, Elie
2018-06-01
Within the framework of generalized noncontextuality, we introduce a general technique for systematically deriving noncontextuality inequalities for any experiment involving finitely many preparations and finitely many measurements, each of which has a finite number of outcomes. Given any fixed sets of operational equivalences among the preparations and among the measurements as input, the algorithm returns a set of noncontextuality inequalities whose satisfaction is necessary and sufficient for a set of operational data to admit of a noncontextual model. Additionally, we show that the space of noncontextual data tables always defines a polytope. Finally, we provide a computationally efficient means for testing whether any set of numerical data admits of a noncontextual model, with respect to any fixed operational equivalences. Together, these techniques provide complete methods for characterizing arbitrary noncontextuality scenarios, both in theory and in practice. Because a quantum prepare-and-measure experiment admits of a noncontextual model if and only if it admits of a positive quasiprobability representation, our techniques also determine the necessary and sufficient conditions for the existence of such a representation.
Midbond basis functions for weakly bound complexes
NASA Astrophysics Data System (ADS)
Shaw, Robert A.; Hill, J. Grant
2018-06-01
Weakly bound systems present a difficult problem for conventional atom-centred basis sets due to large separations, necessitating the use of large, computationally expensive bases. This can be remedied by placing a small number of functions in the region between molecules in the complex. We present compact sets of optimised midbond functions for a range of complexes involving noble gases, alkali metals and small molecules for use in high accuracy coupled-cluster calculations, along with a more robust procedure for their optimisation. It is shown that excellent results are possible with double-zeta quality orbital basis sets when a few midbond functions are added, improving both the interaction energy and the equilibrium bond lengths of a series of noble gas dimers by 47% and 8%, respectively. When used in conjunction with explicitly correlated methods, near complete basis set limit accuracy is readily achievable at a fraction of the cost that using a large basis would entail. General purpose auxiliary sets are developed to allow explicitly correlated midbond function studies to be carried out, making it feasible to perform very high accuracy calculations on weakly bound complexes.
NASA Astrophysics Data System (ADS)
Pavošević, Fabijan; Neese, Frank; Valeev, Edward F.
2014-08-01
We present a production implementation of a reduced-scaling explicitly correlated (F12) coupled-cluster singles and doubles (CCSD) method based on pair natural orbitals (PNOs). A key feature is the reformulation of the explicitly correlated terms using geminal-spanning orbitals that greatly reduce the truncation errors of the F12 contribution. For the standard S66 benchmark of weak intermolecular interactions, the cc-pVDZ-F12 PNO CCSD F12 interaction energies reproduce the complete basis set CCSD limit with a mean absolute error below 0.1 kcal/mol, at a greatly reduced cost compared to conventional CCSD F12.
Creating an effective poster presentation.
Taggart, H M; Arslanian, C
2000-01-01
One way to build knowledge in nursing is to share research findings or clinical program outcomes. The dissemination of these findings is often a difficult final step in a project that has taken months or years to complete. One method of sharing findings in a relaxed and informal setting is a poster presentation, an effective format for presenting findings using an interactive approach. The milieu of a poster presentation enables the presenters to interact and dialogue with colleagues. Guidelines for size and format require that the poster be clear and informative. Application of design principles helps to create visually appealing posters. This article summarizes the elements of designing and conducting a poster presentation.
Texture analysis of pulmonary parenchyma in normal and emphysematous lung
NASA Astrophysics Data System (ADS)
Uppaluri, Renuka; Mitsa, Theophano; Hoffman, Eric A.; McLennan, Geoffrey; Sonka, Milan
1996-04-01
Tissue characterization using texture analysis is gaining increasing importance in medical imaging. We present a completely automated method for discriminating between normal and emphysematous regions in CT images. The method involves extracting seventeen features based on statistical, hybrid and fractal texture models. The best subset of features is derived from the training set using the divergence technique. A minimum distance classifier is used to classify the samples into one of the two classes: normal and emphysema. Sensitivity, specificity and accuracy values achieved were 80% or greater in most cases, demonstrating that texture analysis holds great promise for identifying emphysema.
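A minimal sketch of a minimum-distance (nearest-mean) classifier of the kind described is given below, assuming the texture features have already been extracted; the feature dimension, class labels and random training data are purely illustrative.

```python
import numpy as np

def train_nearest_mean(features, labels):
    """Compute one mean feature vector per class (e.g., 'normal', 'emphysema')."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(sample, class_means):
    """Assign the sample to the class whose mean is closest in Euclidean distance."""
    return min(class_means, key=lambda c: np.linalg.norm(sample - class_means[c]))

# Illustrative use with random stand-in "texture features" (17 features per region)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 17))
y_train = np.array(["normal"] * 20 + ["emphysema"] * 20)
means = train_nearest_mean(X_train, y_train)
print(classify(rng.normal(size=17), means))
```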
Some spectral approximation of one-dimensional fourth-order problems
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Maday, Yvon
1989-01-01
Some spectral-type collocation methods well suited for the approximation of fourth-order systems are proposed. The model problem is the biharmonic equation, in one and two dimensions, when the boundary conditions are periodic in one direction. It is proved that the standard Gauss-Lobatto nodes are not the best choice for the collocation points. A new set of nodes related to some generalized Gauss-type quadrature formulas is then proposed. A complete analysis of these formulas is also provided, including some new results on the asymptotic behavior of the weights, and these results are applied to the analysis of the collocation method.
New high resolution Random Telegraph Noise (RTN) characterization method for resistive RAM
NASA Astrophysics Data System (ADS)
Maestro, M.; Diaz, J.; Crespo-Yepes, A.; Gonzalez, M. B.; Martin-Martinez, J.; Rodriguez, R.; Nafria, M.; Campabadal, F.; Aymerich, X.
2016-01-01
Random Telegraph Noise (RTN) is one of the main reliability problems of resistive switching-based memories. To understand the physics behind RTN, a complete and accurate RTN characterization is required. The standard equipment used to analyse RTN has a typical time resolution of ∼2 ms, which prevents the evaluation of fast phenomena. In this work, a new RTN measurement procedure, which increases the measurement time resolution to 2 μs, is proposed. The experimental set-up, together with the recently proposed Weighted Time Lag (W-LT) method for the analysis of RTN signals, allows more detailed and precise information to be obtained about the RTN phenomenon.
Can we (control) Engineer the degree learning process?
NASA Astrophysics Data System (ADS)
White, A. S.; Censlive, M.; Neilsen, D.
2014-07-01
This paper investigates how control theory could be applied to learning processes in engineering education. The starting point for the analysis is White's Double Loop learning model of human automation control, modified for the education process, in which a set of governing principles is chosen, probably by the course designer. After initial training the student unknowingly settles on a mental map or model. After observing how the real world is behaving, a strategy to achieve the governing variables is chosen and a set of actions selected. This may not be a conscious operation; it may be completely instinctive. These actions will cause some consequences, but only after a certain time delay. The current model is compared with the work of Hollenbeck on goal setting, Nelson's model of self-regulation, and that of Abdulwahed, Nagy and Blanchard at Loughborough, who investigated control methods applied to the learning process.
Chang, Hui-Chin; Wang, Ning-Yen; Ko, Wen-Ru; Yu, You-Tsz; Lin, Long-Yau; Tsai, Hui-Fang
2017-06-01
The most effective method of teaching medico-jurisprudence to medical students is unclear. This study was designed to evaluate the effectiveness of a problem-based learning (PBL) model for teaching medico-jurisprudence in a clinical setting, as measured by General Law Knowledge (GLK), for medical students. Senior medical students attending either a campus-based law curriculum or Obstetrics/Gynecology (Ob/Gyn) clinical-setting morning meetings from February to July 2015 were enrolled. A validated questionnaire comprising 45 questions was completed before and after the law education. The interns attending the clinical-setting, small-group, improvisation-based medico-jurisprudence PBL education had significantly better GLK scores than students attending the campus-based medical law course over the period studied. The PBL teaching model is an ideal alternative pedagogy for medico-jurisprudence in the medical law education curriculum. Copyright © 2017. Published by Elsevier B.V.
Dynamic Modelling Of A SCARA Robot
NASA Astrophysics Data System (ADS)
Turiel, J. Perez; Calleja, R. Grossi; Diez, V. Gutierrez
1987-10-01
This paper describes a method for modelling industrial robots that takes a dynamic approach to manipulation-system motion generation, obtaining the complete dynamic model for the mechanical part of the robot and taking into account the dynamic effect of the actuators acting at the joints. For a four-degree-of-freedom SCARA robot we obtain the dynamic model for the basic (minimal) configuration, that is, the three degrees of freedom that allow the robot end effector to be placed at a desired point, using the Lagrange method to obtain the dynamic equations in matrix form. The manipulator is considered to be a set of rigid bodies interconnected by joints in the form of simple kinematic pairs. Then, the state-space model is obtained for the actuators that move the robot joints, combining the models of the individual actuators, that is, two DC permanent-magnet servomotors and an electrohydraulic actuator. Finally, using a computer simulation program written in FORTRAN, we can compute the matrices of the complete model.
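For reference, the Lagrange formulation mentioned above leads, for any serial manipulator, to equations of motion of the following standard matrix form; the specific SCARA matrices are not reproduced here, so this is a generic sketch rather than the paper's own derivation.

```latex
% Lagrangian of the arm: kinetic minus potential energy, Euler-Lagrange equations per joint
\mathcal{L}(q,\dot{q}) = T(q,\dot{q}) - V(q), \qquad
\frac{d}{dt}\!\left(\frac{\partial \mathcal{L}}{\partial \dot{q}_i}\right)
  - \frac{\partial \mathcal{L}}{\partial q_i} = \tau_i .
% Collecting terms gives the usual joint-space dynamics in matrix form:
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = \tau
```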
The extraction of spot signal in Shack-Hartmann wavefront sensor based on sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Yanyan; Xu, Wentao; Chen, Suting; Ge, Junxiang; Wan, Fayu
2016-07-01
Several techniques have been used with Shack-Hartmann wavefront sensors to determine the local wavefront gradient across each lenslet. However, the centroid error of a Shack-Hartmann wavefront sensor is relatively large because of the skylight background and the detector noise. In this paper, we introduce a new method based on sparse representation to extract the target signal from the background and the noise. First, an overcomplete dictionary of the spot signal is constructed based on a two-dimensional Gaussian model. Then the Shack-Hartmann image is divided into sub-blocks, and the corresponding coefficients of each block are computed in the overcomplete dictionary. Since the coefficients of the noise and the target differ greatly, the target is extracted by setting a threshold on the coefficients. Experimental results show that the target can be well extracted, and the deviation, RMS and PV of the centroid are all smaller than with the threshold-subtraction method.
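A minimal sketch of this kind of block-wise dictionary fit appears below. The atoms follow the two-dimensional Gaussian spot model described in the abstract, but the block size, the set of widths, the threshold rule and the use of an ordinary least-squares fit in place of a dedicated sparse-coding solver are all illustrative assumptions.

```python
import numpy as np

def gaussian_atom(size, cx, cy, sigma):
    """One dictionary atom: a 2-D Gaussian spot model, normalised and flattened."""
    y, x = np.mgrid[0:size, 0:size]
    g = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
    return (g / np.linalg.norm(g)).ravel()

def build_dictionary(size=8, sigmas=(1.0, 1.5, 2.0)):
    """Overcomplete dictionary: Gaussian spots at every pixel centre and several widths."""
    atoms = [gaussian_atom(size, cx, cy, s)
             for s in sigmas for cx in range(size) for cy in range(size)]
    return np.column_stack(atoms)          # shape (size*size, n_atoms), n_atoms > size*size

def extract_spot(block, D, threshold=0.2):
    """Fit the block in the dictionary, keep only the large coefficients, reconstruct."""
    coeffs, *_ = np.linalg.lstsq(D, block.ravel(), rcond=None)
    coeffs[np.abs(coeffs) < threshold * np.abs(coeffs).max()] = 0.0
    return (D @ coeffs).reshape(block.shape)
```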
The HIV care cascade: a systematic review of data sources, methodology and comparability.
Medland, Nicholas A; McMahon, James H; Chow, Eric P F; Elliott, Julian H; Hoy, Jennifer F; Fairley, Christopher K
2015-01-01
The cascade of HIV diagnosis, care and treatment (HIV care cascade) is increasingly used to direct and evaluate interventions to increase population antiretroviral therapy (ART) coverage, a key component of treatment as prevention. The ability to compare cascades over time, sub-population, jurisdiction or country is important. However, differences in data sources and methodology used to construct the HIV care cascade might limit its comparability and ultimately its utility. Our aim was to review systematically the different methods used to estimate and report the HIV care cascade and their comparability. A search of published and unpublished literature through March 2015 was conducted. Cascades that reported the continuum of care from diagnosis to virological suppression in a demographically definable population were included. Data sources and methods of measurement or estimation were extracted. We defined the most comparable cascade elements as those that directly measured diagnosis or care from a population-based data set. Thirteen reports were included after screening 1631 records. The undiagnosed HIV-infected population was reported in seven cascades, each of which used different data sets and methods and could not be considered to be comparable. All 13 used mandatory HIV diagnosis notification systems to measure the diagnosed population. Population-based data sets, derived from clinical data or mandatory reporting of CD4 cell counts and viral load tests from all individuals, were used in 6 of 12 cascades reporting linkage, 6 of 13 reporting retention, 3 of 11 reporting ART and 6 of 13 cascades reporting virological suppression. Cascades with access to population-based data sets were able to directly measure cascade elements and are therefore comparable over time, place and sub-population. Other data sources and methods are less comparable. To ensure comparability, countries wishing to accurately measure the cascade should utilize complete population-based data sets from clinical data from elements of a centralized healthcare setting, where available, or mandatory CD4 cell count and viral load test result reporting. Additionally, virological suppression should be presented both as percentage of diagnosed and percentage of estimated total HIV-infected population, until methods to calculate the latter have been standardized.
Predicting the helix packing of globular proteins by self-correcting distance geometry.
Mumenthaler, C; Braun, W
1995-05-01
A new self-correcting distance geometry method for predicting the three-dimensional structure of small globular proteins was assessed with a test set of 8 helical proteins. With the knowledge of the amino acid sequence and the helical segments, our completely automated method calculated the correct backbone topology of six proteins. The accuracy of the predicted structures ranged from 2.3 Å to 3.1 Å for the helical segments compared to the experimentally determined structures. For two proteins, the predicted constraints were not restrictive enough to yield a conclusive prediction. The method can be applied to all small globular proteins, provided the secondary structure is known from NMR analysis or can be predicted with high reliability.
An improved K-means clustering method for cDNA microarray image segmentation.
Wang, T N; Li, T J; Shao, G F; Wu, S X
2015-07-14
Microarray technology is a powerful tool for human genetic research and other biomedical applications. Numerous improvements to the standard K-means algorithm have been proposed to complete the image segmentation step. However, most previous studies classify the image into two clusters. In this paper, we propose a novel K-means algorithm, which first classifies the image into three clusters; one of the three clusters is then designated as the background region and the other two clusters as the foreground region. The proposed method was evaluated on six different data sets. The analyses of accuracy, efficiency, expression values, special gene spots, and noise images demonstrate the effectiveness of our method in improving the segmentation quality.
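A minimal sketch of the three-cluster idea, assuming scikit-learn is available; treating the darkest cluster as background is an illustrative choice, and the block shape and random seed are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_spot(block, n_clusters=3):
    """Cluster pixel intensities into three groups; the darkest cluster is taken as
    background, and the two brighter clusters together form the spot (foreground)."""
    pixels = block.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    background_label = np.argmin(km.cluster_centers_.ravel())
    return (km.labels_ != background_label).reshape(block.shape)
```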
NASA Technical Reports Server (NTRS)
Martin, J. M. L.; Lee, Timothy J.
1993-01-01
The protonation of N2O and the intramolecular proton transfer in N2OH(+) are studied using various basis sets and a variety of methods, including second-order many-body perturbation theory (MP2), singles and doubles coupled cluster (CCSD), the augmented coupled cluster method CCSD(T), and complete active space self-consistent field (CASSCF) methods. For geometries, MP2 leads to serious errors even for HNNO(+); for the transition state, only CCSD(T) produces a reliable geometry due to serious nondynamical correlation effects. The proton affinity at 298.15 K is estimated at 137.6 kcal/mol, in close agreement with recent experimental determinations of 137.3 +/- 1 kcal/mol.
Completing the Results of the 2013 Boston Marathon
Hammerling, Dorit; Cefalu, Matthew; Cisewski, Jessi; Dominici, Francesca; Parmigiani, Giovanni; Paulson, Charles; Smith, Richard L.
2014-01-01
The 2013 Boston marathon was disrupted by two bombs placed near the finish line. The bombs resulted in three deaths and several hundred injuries. Of lesser concern, in the immediate aftermath, was the fact that nearly 6,000 runners failed to finish the race. We were approached by the marathon's organizers, the Boston Athletic Association (BAA), and asked to recommend a procedure for projecting finish times for the runners who could not complete the race. With assistance from the BAA, we created a dataset consisting of all the runners in the 2013 race who reached the halfway point but failed to finish, as well as all runners from the 2010 and 2011 Boston marathons. The data consist of split times from each of the 5 km sections of the course, as well as the final 2.2 km (from 40 km to the finish). The statistical objective is to predict the missing split times for the runners who failed to finish in 2013. We set this problem in the context of the matrix completion problem, examples of which include imputing missing data in DNA microarray experiments, and the Netflix prize problem. We propose five prediction methods and create a validation dataset to measure their performance by mean squared error and other measures. The best method used local regression based on a K-nearest-neighbors algorithm (KNN method), though several other methods produced results of similar quality. We show how the results were used to create projected times for the 2013 runners and discuss potential for future application of the same methodology. We present the whole project as an example of reproducible research, in that we are able to make the full data and all the algorithms we have used publicly available, which may facilitate future research extending the methods or proposing completely different approaches. PMID:24727904
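The study's best-performing predictor used local regression on a K-nearest-neighbours selection; the sketch below, which simply averages the later splits of the most similar complete runners, is a simplified stand-in for that idea, and the value of k, the array shapes and the synthetic data are illustrative.

```python
import numpy as np

def knn_predict_splits(observed, complete, k=200):
    """Predict a runner's missing late-race splits from the k complete runners whose
    early-race splits (the observed portion) are most similar."""
    n_obs = observed.shape[0]                         # number of splits the runner completed
    d = np.linalg.norm(complete[:, :n_obs] - observed, axis=1)
    neighbours = np.argsort(d)[:k]
    return complete[neighbours, n_obs:].mean(axis=0)  # simple average over neighbours

# Illustrative use with synthetic 5 km split times (minutes): 8 x 5 km segments + final 2.2 km
rng = np.random.default_rng(1)
complete = 25 + rng.normal(0, 2, size=(1000, 9))
runner = complete[0, :5]                              # stopped after the halfway splits
print(knn_predict_splits(runner, complete[1:]))
```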
Olson Order of Quantum Observables
NASA Astrophysics Data System (ADS)
Dvurečenskij, Anatolij
2016-11-01
M.P. Olson [Proc. Am. Math. Soc. 28, 537-544 (1971)] showed that the system of effect operators on a Hilbert space can be ordered by the so-called spectral order in such a way that it forms a complete lattice. Using his ideas, we introduce a partial order, called the Olson order, on the set of bounded observables of a complete lattice effect algebra. We show that the set of bounded observables is a Dedekind complete lattice.
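For orientation, the spectral order on bounded self-adjoint operators is usually stated through their spectral resolutions; a standard formulation (recalled here, not taken from the abstract) is:

```latex
A \preceq_{s} B
\quad\Longleftrightarrow\quad
E^{B}_{\lambda} \le E^{A}_{\lambda} \ \text{ for all } \lambda \in \mathbb{R},
```

where E^A_lambda and E^B_lambda denote the spectral projections of A and B associated with the interval (-infinity, lambda].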
Water adsorption on a copper formate paddlewheel model of CuBTC: A comparative MP2 and DFT study
NASA Astrophysics Data System (ADS)
Toda, Jordi; Fischer, Michael; Jorge, Miguel; Gomes, José R. B.
2013-11-01
Simultaneous adsorption of two water molecules on open metal sites of the HKUST-1 metal-organic framework (MOF), modeled with a Cu2(HCOO)4 cluster, was studied by means of density functional theory (DFT) and second-order Møller-Plesset (MP2) approaches together with correlation consistent basis sets. Experimental geometries and MP2 energetic data extrapolated to the complete basis set limit were used as benchmarks for testing the accuracy of several different exchange-correlation functionals in the correct description of the water-MOF interaction. M06-L and some LC-DFT methods arise as the most appropriate in terms of the quality of geometrical data, energetic data and computational resources needed.
A Correlated Ab Initio Study of Linear Carbon-Chain Radicals CnH (n = 2-7)
NASA Technical Reports Server (NTRS)
Woon, David E.
1995-01-01
Linear carbon-chain radicals CnH for n = 2-7 have been studied with correlation consistent valence and core-valence basis sets and the coupled cluster method RCCSD(T). Equilibrium structures, rotational constants, and dipole moments are reported and compared with available experimental data. The ground state of the even-n series changes from 2Σ+ to 2Π as the chain is extended. For C4H, the 2Σ+ state was found to lie only 72 cm^-1 below the 2Π state at the estimated complete basis set limit for valence correlation. The C2H- and C3H- anions have also been characterized.
Delimiting Coalescence Genes (C-Genes) in Phylogenomic Data Sets.
Springer, Mark S; Gatesy, John
2018-02-26
Summary coalescence methods have emerged as a popular alternative for inferring species trees with large genomic datasets, because these methods explicitly account for incomplete lineage sorting. However, statistical consistency of summary coalescence methods is not guaranteed unless several model assumptions are true, including the critical assumption that recombination occurs freely among but not within coalescence genes (c-genes), which are the fundamental units of analysis for these methods. Each c-gene has a single branching history, and large sets of these independent gene histories should be the input for genome-scale coalescence estimates of phylogeny. By contrast, numerous studies have reported the results of coalescence analyses in which complete protein-coding sequences are treated as c-genes even though exons for these loci can span more than a megabase of DNA. Empirical estimates of recombination breakpoints suggest that c-genes may be much shorter, especially when large clades with many species are the focus of analysis. Although this idea has been challenged recently in the literature, the inverse relationship between c-gene size and increased taxon sampling in a dataset (the 'recombination ratchet') is a fundamental property of c-genes. For taxonomic groups characterized by genes with long intron sequences, complete protein-coding sequences are likely not valid c-genes and are inappropriate units of analysis for summary coalescence methods unless they occur in recombination deserts that are devoid of incomplete lineage sorting (ILS). Finally, it has been argued that coalescence methods are robust when the no-recombination-within-loci assumption is violated, but recombination must matter at some scale because ILS, a by-product of recombination, is the raison d'être for coalescence methods. That is, extensive recombination is required to yield the large number of independently segregating c-genes used to infer a species tree. If coalescent methods are powerful enough to infer the correct species tree for difficult phylogenetic problems in the anomaly zone, where concatenation is expected to fail because of ILS, then there should be a decreasing probability of inferring the correct species tree using longer loci with many intralocus recombination breakpoints (i.e., increased levels of concatenation).
The treatment of medial tibial stress syndrome in athletes; a randomized clinical trial
2012-01-01
Background: The only three randomized trials on the treatment of MTSS were all performed in military populations, and the treatment options investigated in this study had not previously been examined in athletes. This study investigated whether the functional outcomes of three common treatment options for medial tibial stress syndrome (MTSS) in athletes in a non-military setting were the same. Methods: The study design was randomized and multi-centered. Physical therapists and sports physicians referred athletes with MTSS to the hospital for inclusion. 81 athletes were assessed for eligibility, of which 74 were included and randomized to three treatment groups. Group one performed a graded running program, group two performed a graded running program with additional stretching and strengthening exercises for the calves, while group three performed a graded running program with an additional sports compression stocking. The primary outcome measure was time to complete a running program (able to run 18 minutes at high intensity) and the secondary outcome was general satisfaction with treatment. Results: 74 athletes were randomized and included, of whom 14 did not complete the study due to a lack of progress (18.9%). The data were analyzed on an intention-to-treat basis. Time to complete a running program and general satisfaction with the treatment were not significantly different between the three treatment groups. Conclusion: This was the first randomized trial on the treatment of MTSS in athletes in a non-military setting. No differences were found between the groups in the time to complete a running program. Trial registration: CCMO; NL23471.098.08 PMID:22464032
Priority setting: what constitutes success? A conceptual framework for successful priority setting.
Sibbald, Shannon L; Singer, Peter A; Upshur, Ross; Martin, Douglas K
2009-03-05
The sustainability of healthcare systems worldwide is threatened by a growing demand for services and expensive innovative technologies. Decision makers struggle in this environment to set priorities appropriately, particularly because they lack consensus about which values should guide their decisions. One way to approach this problem is to determine what all relevant stakeholders understand successful priority setting to mean. The goal of this research was to develop a conceptual framework for successful priority setting. Three separate empirical studies were completed using qualitative data collection methods (one-on-one interviews with healthcare decision makers from across Canada; focus groups with representation of patients, caregivers and policy makers; and a Delphi study including scholars and decision makers from five countries). This paper synthesizes the findings from the three studies into a framework of ten separate but interconnected elements germane to successful priority setting: stakeholder understanding, shifted priorities/reallocation of resources, decision-making quality, stakeholder acceptance and satisfaction, positive externalities, stakeholder engagement, use of explicit process, information management, consideration of values and context, and a revision or appeals mechanism. The ten elements specify both quantitative and qualitative dimensions of priority setting and relate to both process and outcome components. To our knowledge, this is the first framework that describes successful priority setting. The ten elements identified in this research provide guidance for decision makers and a common language to discuss priority setting success and work toward improving priority setting efforts.
Setting Goals for Achievement in Physical Education Settings
ERIC Educational Resources Information Center
Baghurst, Timothy; Tapps, Tyler; Kensinger, Weston
2015-01-01
Goal setting has been shown to improve student performance, motivation, and task completion in academic settings. Although goal setting is utilized by many education professionals to help students set realistic and proper goals, physical educators may not be using goal setting effectively. Without incorporating all three types of goals and…
Raykar, Nakul P; Yorlets, Rachel R; Liu, Charles; Goldman, Roberta; Greenberg, Sarah L M; Kotagal, Meera; Farmer, Paul E; Meara, John G; Roy, Nobhojit; Gillies, Rowan D
2016-01-01
Introduction: Five billion people around the world do not have access to safe, affordable, timely surgical care. This series of qualitative interviews was launched by The Lancet Commission on Global Surgery (LCoGS) with the aim of understanding the contextual challenges (the specific circumstances) faced by surgical care providers in low-resource settings who care for impoverished patients, and how those providers overcome these challenges. Methods: From January 2014 to February 2015, 20 LCoGS collaborators conducted semistructured interviews with 148 surgical providers in low-resource settings in 21 countries. Stratified purposive sampling was used to include both rural and urban providers, and reputational case selection identified individuals. Interviewers were trained with an implementation manual. Following immersion into de-identified texts from completed interviews, topical coding and further analysis of coded texts was completed by an independent analyst with periodic validation from a second analyst. Results: Providers described substantial financial, geographic and cultural barriers to patient access. Rural surgical teams reported a lack of a trained workforce and insufficient infrastructure, equipment, supplies and banked blood. Urban providers face overcrowding, exacerbated by minimal clinical and administrative support, and limited interhospital care coordination. Many providers across contexts identified national health policies that do not reflect the realities of resource-poor settings. Some findings were region-specific, such as weak patient-provider relationships and unreliable supply chains. In all settings, surgical teams have created workarounds to deliver care despite the challenges. Discussion: While some differences exist between countries, the barriers to safe surgery and anaesthesia are overall consistent and resource-dependent. Efforts to advance and expand global surgery must address these commonalities, while local policymakers can tailor responses to key contextual differences. PMID:28588976
Gonzalez, Miriam L.; Melgar, Mario; Homsi, Maysam; Shuler, Ana; Antillon-Klussmann, Federico; Matheu, Laura; Ramirez, Marylin; Grant, Michael M.; Lowther, Deborah L.; Relyea, George; Caniza, Miguela A.
2017-01-01
E-learning has been widely used in the infection control field and has been recommended for use in hand hygiene (HH) programs by the World Health Organization. Such strategies are effective and efficient for infection control, but factors such as learner readiness for this method should be determined to assure feasibility and suitability in low- to middle-income countries. We developed a tailored, e-learning, Spanish-language HH course based on the WHO guidelines for HH in healthcare settings for the pediatric cancer center in Guatemala City. We aimed to identify e-readiness factors that influenced HH course completion and to evaluate healthcare workers' (HCWs') satisfaction. Pearson's chi-square test of independence was used to retrospectively compare e-readiness factors and course-completion status (completed, non-completed, and never-started). We surveyed 194 HCWs for e-readiness; 116 HCWs self-enrolled in the HH course, and 55 responded to the satisfaction survey. Most e-readiness factors were statistically significant between course-completion groups. Moreover, students were significantly more likely to complete the course if they had a computer with an Internet connection (p=0.001), self-reported comfort with using a computer several times a week (p=0.001), and comfort communicating through online technologies (p=0.001). Previous online course experience was not a significant factor (p=0.819). E-readiness score averages varied among HCWs, and mean scores for all e-readiness factors were significantly higher among medical doctors than among nurses. Nearly all respondents to the satisfaction survey agreed that e-learning was as effective as the traditional teaching method. Evaluating HCWs' e-readiness is essential when integrating technologies into educational programs in low- to middle-income countries. PMID:29147140
Velavan, K; Kannan, V Sadesh; Ahamed, A Saneem; Abia, V Roshmi; Elavarasi, E
2015-08-01
Vestibuloplasty is a procedure performed for a shallow vestibule prior to prosthesis fabrication. Usually, vestibuloplasty is carried out in patients with completely edentulous arches. Multiple techniques of vestibuloplasty are described in the literature; however, the isolated shallow vestibule has not been emphasized. This article describes our experience with isolated or localized vestibuloplasty in a partially edentulous individual with a shallow vestibule pertaining to a single missing tooth.
A proof of the conjecture on the twin primes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuo-ling, Zhou
2016-06-08
In this short note, we prove the conjecture on twin primes using some ideas from set theory. First, using the original sieve method and a new notation (concept) introduced by the author, the conjecture on twin primes is reduced to an elementary successive limit; we then form a subsequence of positive integers and, using it, prove that the successive limits are commutative, completing the proof of the conjecture on twin primes. We also give a more straightforward proof of the conjecture.
1989-03-31
Roessler, R., & Crosley, A.P. (1959). Ego strength and length of recovery from infectious mononucleosis. Journal of Nervous and Mental Disease, 128... operational settings. The development of methods of reducing the effects of infectious disease will progress more rapidly if high risk individuals can be... Navy recruits (n = 130 and n = 253) who volunteered to participate in a study of risk factors for infectious disease completed personality measures at
Vibrational multiconfiguration self-consistent field theory: implementation and test calculations.
Heislbetz, Sandra; Rauhut, Guntram
2010-03-28
A state-specific vibrational multiconfiguration self-consistent field (VMCSCF) approach based on a multimode expansion of the potential energy surface is presented for the accurate calculation of anharmonic vibrational spectra. As a special case of this general approach vibrational complete active space self-consistent field calculations will be discussed. The latter method shows better convergence than the general VMCSCF approach and must be considered the preferred choice within the multiconfigurational framework. Benchmark calculations are provided for a small set of test molecules.
Fast and anisotropic flexibility-rigidity index for protein flexibility and fluctuation analysis
NASA Astrophysics Data System (ADS)
Opron, Kristopher; Xia, Kelin; Wei, Guo-Wei
2014-06-01
Protein structural fluctuation, typically measured by Debye-Waller factors, or B-factors, is a manifestation of protein flexibility, which strongly correlates to protein function. The flexibility-rigidity index (FRI) is a newly proposed method for the construction of atomic rigidity functions required in the theory of continuum elasticity with atomic rigidity, which is a new multiscale formalism for describing excessively large biomolecular systems. The FRI method analyzes protein rigidity and flexibility and is capable of predicting protein B-factors without resorting to matrix diagonalization. A fundamental assumption used in the FRI is that protein structures are uniquely determined by various internal and external interactions, while the protein functions, such as stability and flexibility, are solely determined by the structure. As such, one can predict protein flexibility without resorting to the protein interaction Hamiltonian. Consequently, bypassing the matrix diagonalization, the original FRI has a computational complexity of O(N^2). This work introduces a fast FRI (fFRI) algorithm for the flexibility analysis of large macromolecules. The proposed fFRI further reduces the computational complexity to O(N). Additionally, we propose anisotropic FRI (aFRI) algorithms for the analysis of protein collective dynamics. The aFRI algorithms permit adaptive Hessian matrices, from a completely global 3N × 3N matrix to completely local 3 × 3 matrices. These 3 × 3 matrices, despite being calculated locally, also contain non-local correlation information. Eigenvectors obtained from the proposed aFRI algorithms are able to demonstrate collective motions. Moreover, we investigate the performance of FRI by employing four families of radial basis correlation functions. Both parameter optimized and parameter-free FRI methods are explored. Furthermore, we compare the accuracy and efficiency of FRI with some established approaches to flexibility analysis, namely, normal mode analysis and Gaussian network model (GNM). The accuracy of the FRI method is tested using four sets of proteins, three sets of relatively small-, medium-, and large-sized structures and an extended set of 365 proteins. A fifth set of proteins is used to compare the efficiency of the FRI, fFRI, aFRI, and GNM methods. Intensive validation and comparison indicate that the FRI, particularly the fFRI, is orders of magnitude more efficient and about 10% more accurate overall than some of the most popular methods in the field. The proposed fFRI is able to predict B-factors for α-carbons of the HIV virus capsid (313 236 residues) in less than 30 seconds on a single processor using only one core. Finally, we demonstrate the application of FRI and aFRI to protein domain analysis.
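A minimal sketch of the FRI construction with one of the radial basis correlation functions mentioned above (a generalized exponential kernel); the parameter values and the linear fit to experimental B-factors are illustrative, and this direct pairwise form corresponds to the original O(N^2) FRI rather than the fast fFRI variant.

```python
import numpy as np

def fri_flexibility(coords, eta=3.0, kappa=2.0):
    """Rigidity index mu_i = sum_j exp(-(r_ij/eta)^kappa); flexibility f_i = 1/mu_i.
    coords is an (N, 3) array of atomic positions (e.g., C-alpha coordinates)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    mu = np.exp(-(d / eta) ** kappa).sum(axis=1) - 1.0   # drop the self term (d = 0)
    return 1.0 / mu

def fit_to_bfactors(flexibility, b_exp):
    """Least-squares fit B_i ~ a*f_i + b against experimental B-factors."""
    A = np.column_stack([flexibility, np.ones_like(flexibility)])
    (a, b), *_ = np.linalg.lstsq(A, b_exp, rcond=None)
    return a * flexibility + b
```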
Toward the automated generation of genome-scale metabolic networks in the SEED.
DeJongh, Matthew; Formsma, Kevin; Boillot, Paul; Gould, John; Rycenga, Matthew; Best, Aaron
2007-04-26
Current methods for the automated generation of genome-scale metabolic networks focus on genome annotation and preliminary biochemical reaction network assembly, but do not adequately address the process of identifying and filling gaps in the reaction network, and verifying that the network is suitable for systems level analysis. Thus, current methods are only sufficient for generating draft-quality networks, and refinement of the reaction network is still largely a manual, labor-intensive process. We have developed a method for generating genome-scale metabolic networks that produces substantially complete reaction networks, suitable for systems level analysis. Our method partitions the reaction space of central and intermediary metabolism into discrete, interconnected components that can be assembled and verified in isolation from each other, and then integrated and verified at the level of their interconnectivity. We have developed a database of components that are common across organisms, and have created tools for automatically assembling appropriate components for a particular organism based on the metabolic pathways encoded in the organism's genome. This focuses manual efforts on that portion of an organism's metabolism that is not yet represented in the database. We have demonstrated the efficacy of our method by reverse-engineering and automatically regenerating the reaction network from a published genome-scale metabolic model for Staphylococcus aureus. Additionally, we have verified that our method capitalizes on the database of common reaction network components created for S. aureus, by using these components to generate substantially complete reconstructions of the reaction networks from three other published metabolic models (Escherichia coli, Helicobacter pylori, and Lactococcus lactis). We have implemented our tools and database within the SEED, an open-source software environment for comparative genome annotation and analysis. Our method sets the stage for the automated generation of substantially complete metabolic networks for over 400 complete genome sequences currently in the SEED. With each genome that is processed using our tools, the database of common components grows to cover more of the diversity of metabolic pathways. This increases the likelihood that components of reaction networks for subsequently processed genomes can be retrieved from the database, rather than assembled and verified manually.
Larsen, Lawrence C; Shah, Mena
2016-01-01
Although networks of environmental monitors are constantly improving through advances in technology and management, instances of missing data still occur. Many methods of imputing values for missing data are available, but they are often difficult to use or produce unsatisfactory results. I-Bot (short for "Imputation Robot") is a context-intensive approach to the imputation of missing data in data sets from networks of environmental monitors. I-Bot is easy to use and routinely produces imputed values that are highly reliable. I-Bot is described and demonstrated using more than 10 years of California data for daily maximum 8-hr ozone, 24-hr PM2.5 (particulate matter with an aerodynamic diameter <2.5 μm), mid-day average surface temperature, and mid-day average wind speed. I-Bot performance is evaluated by imputing values for observed data as if they were missing, and then comparing the imputed values with the observed values. In many cases, I-Bot is able to impute values for long periods with missing data, such as a week, a month, a year, or even longer. Qualitative visual methods and standard quantitative metrics demonstrate the effectiveness of the I-Bot methodology. Many resources are expended every year to analyze and interpret data sets from networks of environmental monitors. A large fraction of those resources is used to cope with difficulties due to the presence of missing data. The I-Bot method of imputing values for such missing data may help convert incomplete data sets into virtually complete data sets that facilitate the analysis and reliable interpretation of vital environmental data.
Monte Carlo explicitly correlated second-order many-body perturbation theory
NASA Astrophysics Data System (ADS)
Johnson, Cole M.; Doran, Alexander E.; Zhang, Jinmei; Valeev, Edward F.; Hirata, So
2016-10-01
A stochastic algorithm is proposed and implemented that computes a basis-set-incompleteness (F12) correction to an ab initio second-order many-body perturbation energy as a short sum of 6- to 15-dimensional integrals of Gaussian-type orbitals, an explicit function of the electron-electron distance (geminal), and its associated excitation amplitudes held fixed at the values suggested by Ten-no. The integrals are directly evaluated (without a resolution-of-the-identity approximation or an auxiliary basis set) by the Metropolis Monte Carlo method. Applications of this method to 17 molecular correlation energies and 12 gas-phase reaction energies reveal that both the nonvariational and variational formulas for the correction give reliable correlation energies (98% or higher) and reaction energies (within 2 kJ mol^-1 with a smaller statistical uncertainty) near the complete-basis-set limits by using just the aug-cc-pVDZ basis set. The nonvariational formula is found to be 2-10 times less expensive to evaluate than the variational one, though the latter yields energies that are bounded from below and is, therefore, slightly but systematically more accurate for energy differences. Being capable of using virtually any geminal form, the method confirms the best overall performance of the Slater-type geminal among 6 forms satisfying the same cusp conditions. Not having to precompute lower-dimensional integrals analytically, to store them on disk, or to transform them in a nonscalable dense-matrix-multiplication algorithm, the method scales favorably with both system size and computer size; the cost increases only as O(n^4) with the number of orbitals (n), and its parallel efficiency reaches 99.9% of the ideal case on going from 16 to 4096 computer processors.
Lavrentyev, A I; Rokhlin, S I
2001-04-01
An ultrasonic method proposed by us for determination of the complete set of acoustical and geometrical properties of a thin isotropic layer between semispaces (J. Acoust. Soc. Am. 102 (1997) 3467) is extended to determination of the properties of a coating on a thin plate. The method allows simultaneous determination of the coating thickness, density, elastic moduli and attenuation (longitudinal and shear) from normal and oblique incidence reflection (transmission) frequency spectra. Reflection (transmission) from the coated plate is represented as a function of six nondimensional parameters of the coating which are determined from two experimentally measured spectra: one at normal and one at oblique incidence. The introduction of the set of nondimensional parameters allows one to transform the reconstruction process from one search in a six-dimensional space to two searches in three-dimensional spaces (one search for normal incidence and one for oblique). Thickness, density, and longitudinal and shear elastic moduli of the coating are calculated from the nondimensional parameters determined. The sensitivity of the method to individual properties and its stability against experimental noise are studied and the inversion algorithm is accordingly optimized. An example of the method and experimental measurement for comparison is given for a polypropylene coating on a steel foil.
Kumar Deb, Debojit; Sarkar, Biplab
2017-01-18
The torsional potential of OH and SH rotations in 2-hydroxy thiophenol is systematically studied using the MP2 ab initio method. The outcome of state-of-the-art calculations is used in the investigation of the structures and conformational preferences of 2-hydroxy thiophenol and aims at further interaction studies with a gas phase water molecule. SCS-MP2 and CCSD(T) complete basis set (CBS) limit interaction energies for these complexes are presented. The SCS-MP2/CBS limit is achieved using various two-point extrapolation methods with aug-cc-pVDZ and aug-cc-pVTZ basis sets. The CCSD(T) correction term is determined as the difference between CCSD(T) and SCS-MP2 interaction energies calculated using a smaller basis set. The effect of counterpoise correction on the extrapolation to the CBS limit is discussed. The performance of DFT based wB97XD, M06-2X and B3LYP-D3 functionals is tested against the benchmark energy from ab initio calculations. Hydrogen bond interactions are characterized by carrying out QTAIM, NCIPLOT, NBO and SAPT analyses.
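For orientation, one widely used two-point extrapolation of the correlation energy, together with a CCSD(T) correction term of the kind described above, can be written as follows; identifying the cardinal numbers X = 2 and Y = 3 with aug-cc-pVDZ and aug-cc-pVTZ is an assumption consistent with the abstract, not a statement of the authors' exact working equations.

```latex
E_{\mathrm{corr}}^{\mathrm{CBS}} \approx
  \frac{X^{3}E_{\mathrm{corr}}^{(X)} - Y^{3}E_{\mathrm{corr}}^{(Y)}}{X^{3}-Y^{3}},
\qquad
E_{\mathrm{CCSD(T)/CBS}}^{\mathrm{est.}} \approx
  E_{\mathrm{SCS\text{-}MP2}}^{\mathrm{CBS}}
  + \bigl(E_{\mathrm{CCSD(T)}}^{\mathrm{small}} - E_{\mathrm{SCS\text{-}MP2}}^{\mathrm{small}}\bigr)
```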
Prior-based artifact correction (PBAC) in computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heußer, Thorsten, E-mail: thorsten.heusser@dkfz-heidelberg.de; Brehm, Marcus; Ritschl, Ludwig
2014-02-15
Purpose: Image quality in computed tomography (CT) often suffers from artifacts which may reduce the diagnostic value of the image. In many cases, these artifacts result from missing or corrupt regions in the projection data, e.g., in the case of metal, truncation, and limited angle artifacts. The authors propose a generalized correction method for different kinds of artifacts resulting from missing or corrupt data by making use of available prior knowledge to perform data completion. Methods: The proposed prior-based artifact correction (PBAC) method requires prior knowledge in the form of a planning CT of the same patient or in the form of a CT scan of a different patient showing the same body region. In both cases, the prior image is registered to the patient image using a deformable transformation. The registered prior is forward projected and data completion of the patient projections is performed using smooth sinogram inpainting. The obtained projection data are used to reconstruct the corrected image. Results: The authors investigate metal and truncation artifacts in patient data sets acquired with a clinical CT and limited angle artifacts in an anthropomorphic head phantom data set acquired with a gantry-based flat detector CT device. In all cases, the corrected images obtained by PBAC are nearly artifact-free. Compared to conventional correction methods, PBAC achieves better artifact suppression while preserving the patient-specific anatomy at the same time. Further, the authors show that prominent anatomical details in the prior image seem to have only minor impact on the correction result. Conclusions: The results show that PBAC has the potential to effectively correct for metal, truncation, and limited angle artifacts if adequate prior data are available. Since the proposed method makes use of a generalized algorithm, PBAC may also be applicable to other artifacts resulting from missing or corrupt data.
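A minimal sketch of the PBAC workflow as described above; register, forward_project and reconstruct are hypothetical placeholders for whatever registration and CT reconstruction toolchain is available, and the hard replacement with np.where simplifies the smooth sinogram inpainting used in the paper.

```python
import numpy as np

def pbac_correct(patient_proj, patient_image, prior_image, corrupt_mask,
                 register, forward_project, reconstruct):
    """Prior-based artifact correction sketch: deformably register the prior to the
    (artifact-affected) patient image, forward-project the registered prior, fill the
    missing/corrupt projection regions with the prior's projections, and reconstruct."""
    prior_reg = register(prior_image, patient_image)   # moving = prior, fixed = patient
    prior_proj = forward_project(prior_reg)
    # Hard fill of the corrupt sinogram regions; the paper uses smooth inpainting instead.
    completed = np.where(corrupt_mask, prior_proj, patient_proj)
    return reconstruct(completed)
```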
Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.
Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim
2017-12-12
In this work, we present a new pair natural orbital (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10^-7 to 10^-8 for reactions and 10^-8 for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in the case of reactions or interactions, while somewhat larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases of the wall times (i.e., factors >10^2) due to the parallelization of the increment calculations, as well as of the total times due to the application of PNOs (i.e., compared to the normal incremental scheme), but also smaller total times with respect to the standard PNO method. In this way, our new method combines excellent accuracy with very high efficiency and gives access to larger systems through the separation of the full computation into several small increments.
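The MP2 focal-point composite referred to above is, in its generic form, the following additivity approximation; the labels "small" and "CBS" are generic, and this is a sketch of the approach rather than the paper's exact working equations.

```latex
E_{\mathrm{CCSD(T)/CBS}} \;\approx\;
  E_{\mathrm{MP2}}^{\mathrm{CBS}}
  + \Bigl( E_{\mathrm{CCSD(T)}}^{\mathrm{small\ basis}}
         - E_{\mathrm{MP2}}^{\mathrm{small\ basis}} \Bigr)
```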
The Motivation to Volunteer: A Systemic Quality of Life Theory
ERIC Educational Resources Information Center
Shye, Samuel
2010-01-01
A new approach to volunteer motivation research is developed. Instead of asking what motivates the volunteer (accepting "any" conceptual category), we ask to what extent volunteering rewards the individual with each benefit taken from a complete set of possible benefits. As a "complete set of benefits" we use the 16 human functioning modes…
Slot, Tegan; Charpentier, Karine; Dumas, Geneviève; Delisle, Alain; Leger, Andy; Plamondon, André
2009-01-01
The aim of the study was to evaluate the effect of forearm support provided by the Workplace Board on perceived tension, comfort and productivity among pregnant and non-pregnant female computer workers. Ten pregnant and 18 non-pregnant women participated in the study. Participants completed three sets of tension/discomfort questionnaires at two week intervals. The first set was completed prior to any workstation intervention; the second set was completed after two weeks working with an ergonomically adjusted workstation; the third set was completed after two weeks working with the Workplace Board integrated into the office workstation. With the Workplace Board, decreased perceived tension was reported in the left shoulder, wrist and low back in non-pregnant women only. The Board was generally liked by all participants, and increased comfort and productivity in all areas, with the exception of a negative effect on productivity of general office tasks. The board is suitable for integration in most office workstations and for most users, but has no special benefits for pregnant women.
NASA Astrophysics Data System (ADS)
Petersson, George A.; Malick, David K.; Frisch, Michael J.; Braunstein, Matthew
2006-07-01
Examination of the convergence of full valence complete active space self-consistent-field configuration interaction including all single and double excitations (CASSCF-CISD) energies with expansion of the one-electron basis set reveals a pattern very similar to the convergence of single determinant energies. Calculations on the lowest four singlet states and the lowest four triplet states of N2 with the sequence of n-tuple-ζ augmented polarized (nZaP) basis sets (n = 2, 3, 4, 5, and 6) are used to establish the complete basis set limits. Full configuration-interaction (CI) and core electron contributions must be included for very accurate potential energy surfaces. However, a simple extrapolation scheme that has no adjustable parameters and requires nothing more demanding than CAS(10e⁻, 8orb)-CISD/3ZaP calculations gives the R_e, ω_e, ω_e x_e, T_e, and D_e for these eight states with rms errors of 0.0006 Å, 4.43 cm⁻¹, 0.35 cm⁻¹, 0.063 eV, and 0.018 eV, respectively.
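The abstract does not reproduce the parameter-free extrapolation formula itself; for orientation, a commonly used two-point form for correlation-type energies with consecutive cardinal numbers n and n+1 is shown below. This is a generic textbook expression given as an assumption, not necessarily the scheme used by the authors.

```latex
E^{(n)} \approx E^{\mathrm{CBS}} + A\,n^{-3}
\quad\Longrightarrow\quad
E^{\mathrm{CBS}} \approx \frac{(n+1)^{3}\,E^{(n+1)} - n^{3}\,E^{(n)}}{(n+1)^{3} - n^{3}}
```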
NASA Astrophysics Data System (ADS)
Castillo, María V.; Iramain, Maximiliano A.; Davies, Lilian; Manzur, María E.; Brandán, Silvia Antonia
2018-02-01
Dieldrin was characterized by using Fourier transform infrared (FT-IR), Raman (FT-Raman) and ultraviolet-visible (UV-visible) spectroscopies. The structural and vibrational properties of dieldrin in the gas phase and in aqueous solution were computed by combining those experimental spectra with hybrid B3LYP and WB97XD calculations using the 6-31G* and 6-311++G** basis sets. The available experimental hydrogen and carbon nuclear magnetic resonance (¹H and ¹³C NMR) spectra for dieldrin were also compared with those predicted by the calculations. The B3LYP/6-311++G** method generates the most stable structures, while the results demonstrate a certain dependence of the volume and dipole moment values on the method, the size of the basis set and the studied medium. The lowest solvation energy for dieldrin (-32.94 kJ/mol) is observed together with the largest volume contraction (-2.4 Å³) using the B3LYP/6-31G* method. The NBO studies suggest a high stability of dieldrin in the gas phase with the WB97XD/6-31G* method due to the n→π* and n*→π* interactions, while the AIM analyses support this high stability through the C18⋯H26 and C14⋯O7 contacts. The different topological properties observed in the R5 ring suggest that this ring probably plays a very important role in the toxic properties of dieldrin. The frontier orbitals show that, when dieldrin is compared with other toxic substances, the reactivity increases in the following order: CO < STX < dieldrin < C6Cl6.
An ontology-based method for secondary use of electronic dental record data
Schleyer, Titus KL; Ruttenberg, Alan; Duncan, William; Haendel, Melissa; Torniai, Carlo; Acharya, Amit; Song, Mei; Thyvalikakath, Thankam P.; Liu, Kaihong; Hernandez, Pedro
A key question for healthcare is how to operationalize the vision of the Learning Healthcare System, in which electronic health record data become a continuous information source for quality assurance and research. This project presents an initial, ontology-based method for secondary use of electronic dental record (EDR) data. We defined a set of dental clinical research questions; constructed the Oral Health and Disease Ontology (OHD); analyzed data from a commercial EDR database; and created a knowledge base, with the OHD used to represent clinical data about 4,500 patients from a single dental practice. Currently, the OHD includes 213 classes and reuses 1,658 classes from other ontologies. We have developed an initial set of SPARQL queries to allow extraction of data about patients, teeth, surfaces, restorations and findings. Further work will establish a complete, open and reproducible workflow for extracting and aggregating data from a variety of EDRs for research and quality assurance. PMID:24303273
High-throughput gene mapping in Caenorhabditis elegans.
Swan, Kathryn A; Curtis, Damian E; McKusick, Kathleen B; Voinov, Alexander V; Mapa, Felipa A; Cancilla, Michael R
2002-07-01
Positional cloning of mutations in model genetic systems is a powerful method for the identification of targets of medical and agricultural importance. To facilitate the high-throughput mapping of mutations in Caenorhabditis elegans, we have identified a further 9602 putative new single nucleotide polymorphisms (SNPs) between two C. elegans strains, Bristol N2 and the Hawaiian mapping strain CB4856, by sequencing inserts from a CB4856 genomic DNA library and using an informatics pipeline to compare sequences with the canonical N2 genomic sequence. When combined with data from other laboratories, our marker set of 17,189 SNPs provides even coverage of the complete worm genome. To date, we have confirmed >1099 evenly spaced SNPs (one every 91 +/- 56 kb) across the six chromosomes and validated the utility of our SNP marker set and new fluorescence polarization-based genotyping methods for systematic and high-throughput identification of genes in C. elegans by cloning several proprietary genes. We illustrate our approach by recombination mapping and confirmation of the mutation in the cloned gene, dpy-18.
O'Sullivan, Julie Lorraine; Gellert, Paul; Hesse, Britta; Jordan, Laura-Maria; Möller, Sebastian; Voigt-Antons, Jan-Niklas; Nordheim, Johanna
2018-02-01
Information and Communication Technologies (ICTs) could be useful for delivering non-pharmacological therapies (NPTs) for dementia in nursing home settings. The aim was to identify technology-related expectations and inhibitions of healthcare professionals associated with the intention to use ICT-based NPTs, using a cross-sectional multi-method survey. N = 205 healthcare professionals completed a quantitative survey on usage of and attitudes towards ICTs; additionally, N = 11 semi-structured interviews were conducted. Participants were classified as intending to use ICTs (53%), not intending to use them (14%) or ambivalent (32%). A MANCOVA revealed higher perceived usefulness for intenders compared to non-intenders and ambivalent healthcare professionals (V = .28, F(12, 292) = 3.94, p < .001). Qualitative interviews revealed generally high acceptance of ICTs in the workplace. Furthermore, benefits for residents emerged as a key requirement. Staff trainings should stress specific benefits for residents and healthcare professionals to facilitate successful implementation and acceptance of ICTs in nursing home settings.
Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2012-01-01
A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
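The LSQI problem referenced above has the generic form: minimize ||Ax - b||^2 subject to a quadratic inequality constraint x^T Q x <= alpha^2. The sketch below solves such a problem with a general-purpose solver; the matrices, the bound, and the use of SciPy's SLSQP are illustrative assumptions rather than the flight-dynamics formulation itself.

```python
import numpy as np
from scipy.optimize import minimize

def lsqi(A, b, Q, alpha):
    """Minimize ||A x - b||^2 subject to x^T Q x <= alpha^2 (generic LSQI)."""
    x0 = np.linalg.lstsq(A, b, rcond=None)[0]            # unconstrained start
    objective = lambda x: float(np.sum((A @ x - b) ** 2))
    constraint = {"type": "ineq", "fun": lambda x: alpha**2 - x @ Q @ x}
    result = minimize(objective, x0, constraints=[constraint], method="SLSQP")
    return result.x

# Toy usage with made-up data: blend coefficients under an energy-like bound.
rng = np.random.default_rng(0)
A, b, Q = rng.standard_normal((20, 5)), rng.standard_normal(20), np.eye(5)
x = lsqi(A, b, Q, alpha=1.0)
```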
Tang, Chuanning; Lew, Scott
2016-01-01
In vitro protein stability studies are commonly conducted via thermal or chemical denaturation/renaturation of the protein. Conventional data analyses of protein unfolding/(re)folding require well-defined pre- and post-transition baselines to evaluate the Gibbs free-energy change associated with the unfolding/(re)folding. This evaluation becomes problematic when there are insufficient data for determining the pre- or post-transition baselines. In this study, fitting of such partial data obtained in protein chemical denaturation is established by introducing second-order differential (SOD) analysis to overcome the limitations of the conventional fitting method. By reducing the number of baseline-related fitting parameters, the SOD analysis can successfully fit incomplete chemical denaturation data sets with high agreement to the conventional evaluation of the equivalent complete data, where the conventional fitting fails. This SOD fitting of abbreviated isothermal chemical denaturation data thus extends the analysis methods available for the insufficient data sets encountered in the two prevalent types of protein stability studies. PMID:26757366
Efficient Wide Baseline Structure from Motion
NASA Astrophysics Data System (ADS)
Michelini, Mario; Mayer, Helmut
2016-06-01
This paper presents a Structure from Motion approach for complex unorganized image sets. To achieve high accuracy and robustness, image triplets are employed and (an approximate) camera calibration is assumed to be known. The focus lies on a complete linking of images even in the case of large image distortions, e.g., caused by wide baselines, as well as weak baselines. A method for embedding image descriptors into Hamming space is proposed for fast image similarity ranking. The latter is employed to limit the number of pairs to be matched by a wide baseline method. An iterative graph-based approach is proposed formulating image linking as the search for a terminal Steiner minimum tree in a line graph. Finally, additional links are determined and employed to improve the accuracy of the pose estimation. By this means, loops in long image sequences are implicitly closed. The potential of the proposed approach is demonstrated by results for several complex image sets, also in comparison with VisualSFM.
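The Hamming-space embedding used for similarity ranking can be illustrated with the common sign-of-random-projection scheme: real-valued image descriptors are projected onto random directions and only the signs are kept, so candidates can be ranked with cheap bitwise comparisons. This generic locality-sensitive-hashing sketch is an assumption for illustration, not necessarily the embedding proposed in the paper.

```python
import numpy as np

def hamming_embed(descriptors, n_bits=64, seed=0):
    """Embed real-valued descriptors into {0,1}^n_bits via random hyperplanes."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((descriptors.shape[1], n_bits))
    return (descriptors @ planes > 0).astype(np.uint8)

def rank_by_hamming(query_code, codes):
    """Indices of 'codes' sorted by Hamming distance to 'query_code'."""
    distances = np.count_nonzero(codes != query_code, axis=1)
    return np.argsort(distances)

# Toy usage: rank 1000 images by a 128-D global descriptor (made-up data).
features = np.random.default_rng(1).standard_normal((1000, 128))
codes = hamming_embed(features)
order = rank_by_hamming(codes[0], codes)   # most similar candidates first
```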
Rapid determination of minoxidil in human plasma using ion-pair HPLC.
Zarghi, A; Shafaati, A; Foroutan, S M; Khoddam, A
2004-10-29
A rapid, simple and sensitive ion-pair high-performance liquid chromatography (HPLC) method has been developed for quantification of minoxidil in plasma. The assay enables the measurement of minoxidil for therapeutic drug monitoring with a minimum detectable limit of 0.5 ng ml⁻¹. The method involves a simple, one-step extraction procedure, and analytical recovery was complete. The separation was performed on an analytical 150 × 4.6 mm i.d. μBondapak C18 column. The wavelength was set at 281 nm. The mobile phase was a mixture of 0.01 M sodium dihydrogen phosphate buffer and acetonitrile (60:40, v/v) containing 2.5 mM sodium dodecyl sulphate adjusted to pH 3.5, at a flow rate of 1 ml/min. The column temperature was set at 50 °C. The calibration curve was linear over the concentration range 2-100 ng ml⁻¹. The coefficients of variation for inter-day and intra-day assays were found to be less than 8%.
Wilcox, Sara; Parra-Medina, Deborah; Felton, Gwen M.; Poston, Mary Elizabeth; McClain, Amanda
2011-01-01
Background Primary care providers are expected to provide lifestyle counseling, yet many barriers exist. Few studies report on adoption and implementation in routine practice. This study reports training, adoption, and implementation of an intervention to promote physical activity (PA) and dietary counseling in community health centers. Methods Providers (n = 30) and nurses (n = 28) from 9 clinics were invited to participate. Adopters completed CD-ROM training in stage-matched, patient-centered counseling and goal setting. Encounters were audio recorded. A subsample was coded for fidelity. Results Fifty-seven percent of providers and nurses adopted the program. Provider counseling was observed in 66% and nurse goal setting in 58% of participant (N = 266) encounters, although rates based on audio recordings were lower. Duration of provider counseling and nurse goal setting was 4.9 ± 4.5 and 7.3 ± 3.8 minutes, respectively. Most PA (80%) and diet (94%) goals were stage-appropriate. Although most providers discussed at least 1 behavioral topic, some topics (eg, self-efficacy, social support) were rarely covered. Conclusions A sizeable percentage of providers and nurses completed training, rated it favorably, and delivered lifestyle counseling, although with variable fidelity. With low implementation cost and limited office time required, this model has the potential to be disseminated to improve counseling rates in primary care. PMID:20864755
A Global Health Research Checklist for clinicians.
Sawaya, Rasha D; Breslin, Kristen A; Abdulrahman, Eiman; Chapman, Jennifer I; Good, Dafina M; Moran, Lili; Mullan, Paul C; Badaki-Makun, Oluwakemi
2018-04-19
Global health research has become a priority in most international medical projects. However, it is a difficult endeavor, especially for a busy clinician. Navigating the ethics, methods, and local partnerships is essential yet daunting. To date, there are no guidelines published to help clinicians initiate and complete successful global health research projects. This Global Health Research Checklist was developed to be used by clinicians or other health professionals for developing, implementing, and completing a successful research project in an international and often low-resource setting. It consists of five sections: Objective, Methodology, Institutional Review Board and Ethics, Culture and partnerships, and Logistics. We used individual experiences and published literature to develop and emphasize the key concepts. The checklist was trialed in two workshops and adjusted based on participants' feedback.
Measuring the patient experience in primary care
Slater, Morgan; Kiran, Tara
2016-01-01
Abstract Objective To compare the characteristics and responses of patients completing a patient experience survey accessed online after e-mail notification or delivered in the waiting room using tablet computers. Design Cross-sectional comparison of 2 methods of delivering a patient experience survey. Setting A large family health team in Toronto, Ont. Participants Family practice patients aged 18 or older who completed an e-mail survey between January and June 2014 (N = 587) or who completed the survey in the waiting room in July and August 2014 (N = 592). Main outcome measures Comparison of respondent demographic characteristics and responses to questions related to access and patient-centredness. Results Patients responding to the e-mail survey were more likely to live in higher-income neighbourhoods (P = .0002), be between the ages of 35 and 64 (P = .0147), and be female (P = .0434) compared with those responding to the waiting room survey; there were no significant differences related to self-rated health. The differences in neighbourhood income were noted despite minimal differences between patients with and without e-mail addresses included in their medical records. There were few differences in responses to the survey questions between the 2 survey methods and any differences were explained by the underlying differences in patient demographic characteristics. Conclusion Our findings suggest that respondent demographic characteristics might differ depending on the method of survey delivery, and these differences might affect survey responses. Methods of delivering patient experience surveys that require electronic literacy might underrepresent patients living in low-income neighbourhoods. Practices should consider evaluating for nonresponse bias and adjusting for patient demographic characteristics when interpreting survey results. Further research is needed to understand how primary care practices can optimize electronic survey delivery methods to survey a representative sample of patients. PMID:27965350
Evaluation method of the performance of kinetic inhibitor for clathrate hydrate
NASA Astrophysics Data System (ADS)
Muraoka, M.; Susuki, N.; Yamamoto, Y.
2016-12-01
As part of a Japanese national hydrate research program (MH21, funded by METI), we study the formation of tetrahydrofuran (THF) clathrate hydrate from polyvinylpyrrolidone (PVP) aqueous solution as a function of growth rate V and adsorbed PVP concentration c using the unidirectional growth technique. This study aims to propose a simple method for evaluating the performance of kinetic hydrate inhibitors (KHIs) for the clathrate hydrate-aqueous solution system. The degree of supercooling ΔT calculated from the growth-induced interface shift under steady-state conditions was used for evaluating KHI performance. Using this method, a single experimental run can be completed within 3.5 h of the compulsory nucleation by setting V = 5 μm s⁻¹. We believe this method is useful for screening various KHIs and clarifying the inhibition mechanism of KHIs.
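One simple way to read the ΔT measurement is through the imposed temperature gradient: if the steady-state interface shifts by Δx along a known linear gradient G, the supercooling follows as ΔT = G·Δx. This relation, the function name, and the numbers below are illustrative assumptions, not values or formulas taken from the study.

```python
def supercooling_from_shift(interface_shift_m, temperature_gradient_K_per_m):
    """Supercooling inferred from the steady-state interface shift, assuming a
    linear temperature gradient along the growth direction (dT = G * dx)."""
    return temperature_gradient_K_per_m * interface_shift_m

# Hypothetical example: a 150 um shift in a 2.6 K/mm gradient -> about 0.39 K.
delta_T = supercooling_from_shift(150e-6, 2.6e3)
```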
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes implementations of Spectral Element, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed, such as improved pressure gradient calculation, numerical stability through vertical/horizontal splitting, and arbitrary order of accuracy, among others. The local numerical discretization allows for high-performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
NASA Technical Reports Server (NTRS)
Huff, Vearl N; Gordon, Sanford; Morrell, Virginia E
1951-01-01
A rapidly convergent successive approximation process is described that simultaneously determines both composition and temperature resulting from a chemical reaction. This method is suitable for use with any set of reactants over the complete range of mixture ratios as long as the products of reaction are ideal gases. An approximate treatment of limited amounts of liquids and solids is also included. This method is particularly suited to problems having a large number of products of reaction and to problems that require determination of such properties as specific heat or velocity of sound of a dissociating mixture. The method presented is applicable to a wide variety of problems that include (1) combustion at constant pressure or volume; and (2) isentropic expansion to an assigned pressure, temperature, or Mach number. Tables of thermodynamic functions needed with this method are included for 42 substances for convenience in numerical computations.
Computationally Efficient 2D DOA Estimation with Uniform Rectangular Array in Low-Grazing Angle.
Shi, Junpeng; Hu, Guoping; Zhang, Xiaofei; Sun, Fenggang; Xiao, Yu
2017-02-26
In this paper, we propose a computationally efficient spatial differencing matrix set (SDMS) method for two-dimensional direction of arrival (2D DOA) estimation with uniform rectangular arrays (URAs) in a low-grazing-angle (LGA) condition. By rearranging the auto-correlation and cross-correlation matrices in turn among different subarrays, the SDMS method can estimate the two parameters independently with one-dimensional (1D) subspace-based estimation techniques, where differencing is performed only on the auto-correlation matrices while the cross-correlation matrices are kept intact. Then, the pair-matching of the two parameters is achieved by extracting the diagonal elements of the URA. Thus, the proposed method decreases the computational complexity, suppresses the effect of additive noise, and incurs little information loss. Simulation results show that, in LGA conditions, the proposed method achieves improved performance compared to other methods under both white and colored noise.
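The differencing applied to the auto-correlation matrices can be sketched with the standard spatial-differencing operation D = R - J R* J, where J is the exchange (anti-identity) matrix; whether the SDMS method uses exactly this form is an assumption, and the snippet below only illustrates the type of operation involved.

```python
import numpy as np

def spatial_difference(R):
    """Spatial differencing of an auto-correlation matrix: D = R - J conj(R) J.
    Illustrative of the per-subarray differencing step (assumed form)."""
    J = np.fliplr(np.eye(R.shape[0]))
    return R - J @ np.conj(R) @ J

# Toy usage: sample auto-correlation of one 8-element subarray over 200 snapshots.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))
R = X @ X.conj().T / X.shape[1]
D = spatial_difference(R)
```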
Adaptive variational mode decomposition method for signal processing based on mode characteristic
NASA Astrophysics Data System (ADS)
Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng
2018-07-01
Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset mode number causes modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) was proposed to automatically determine the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons were also conducted to evaluate the performance against VMD, EMD and EWT. The results indicate that the proposed method has strong adaptability and is robust to noise, and that it can determine the mode number appropriately without modulation, even when the signal frequencies are relatively close.
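The adaptive choice of the mode number can be pictured as an outer loop around a fixed-K VMD call: K is increased until a characteristic of the extracted intrinsic mode functions signals over-decomposition. The decompose_vmd callable and the simple correlation-based stopping rule below are placeholders chosen for illustration, not the mode characteristic defined in the paper.

```python
import numpy as np

def adaptive_vmd(signal, decompose_vmd, k_max=12, overlap_tol=0.8):
    """Increase the preset mode number K until two extracted modes become
    strongly correlated (a simple proxy for mode splitting/mixing).

    decompose_vmd(signal, K) -> array of shape (K, len(signal)) is assumed to
    be supplied by a VMD implementation of your choice.
    """
    previous_modes = decompose_vmd(signal, 1)
    for k in range(2, k_max + 1):
        modes = decompose_vmd(signal, k)
        corr = np.corrcoef(modes)                      # K x K mode correlations
        off_diagonal = corr[~np.eye(k, dtype=bool)]
        if np.max(np.abs(off_diagonal)) > overlap_tol:
            return previous_modes                      # last K before over-splitting
        previous_modes = modes
    return previous_modes
```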
A depth-first search algorithm to compute elementary flux modes by linear programming
2014-01-01
Background The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Results Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. Conclusions The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints. PMID:25074068
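The feasibility test at each node of such a depth-first search can be posed as a small linear program: given the reactions forced to zero so far, check whether a non-trivial steady-state flux is still possible. The sketch below uses scipy.optimize.linprog for that check; the normalization constraint and the branching rule are simplified assumptions rather than the published algorithm, which additionally tests elementarity.

```python
import numpy as np
from scipy.optimize import linprog

def flux_feasible(S, zero_set):
    """Is there a non-trivial v >= 0 with S v = 0 and v[i] = 0 for i in zero_set?
    The constraint sum(v) = 1 excludes the trivial all-zero solution."""
    n_rxns = S.shape[1]
    A_eq = np.vstack([S, np.ones((1, n_rxns))])
    b_eq = np.append(np.zeros(S.shape[0]), 1.0)
    bounds = [(0.0, 0.0) if i in zero_set else (0.0, None) for i in range(n_rxns)]
    res = linprog(c=np.zeros(n_rxns), A_eq=A_eq, b_eq=b_eq, bounds=bounds,
                  method="highs")
    return res.status == 0

def dfs_supports(S, zero_set=frozenset(), start=0):
    """Depth-first enumeration of feasible flux supports, pruning branches that
    lose feasibility; candidate supports would still need an elementarity test."""
    if not flux_feasible(S, zero_set):
        return
    yield zero_set
    for i in range(start, S.shape[1]):
        if i not in zero_set:
            yield from dfs_supports(S, zero_set | {i}, i + 1)
```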
Genome wide predictions of miRNA regulation by transcription factors.
Ruffalo, Matthew; Bar-Joseph, Ziv
2016-09-01
Reconstructing regulatory networks from expression and interaction data is a major goal of systems biology. While much work has focused on trying to experimentally and computationally determine the set of transcription factors (TFs) and microRNAs (miRNAs) that regulate genes in these networks, relatively little work has focused on inferring the regulation of miRNAs by TFs. Such regulation can play an important role in several biological processes, including development and disease. The main challenge in predicting such interactions is the very small positive training set currently available. Another challenge is the fact that a large fraction of miRNAs are encoded within genes, making it hard to determine the specific way in which they are regulated. To enable genome-wide predictions of TF-miRNA interactions, we extended semi-supervised machine-learning approaches to integrate a large set of different types of data including sequence, expression, ChIP-seq and epigenetic data. As we show, the methods we develop achieve good performance both on a labeled test set and when analyzing general co-expression networks. We next analyze mRNA and miRNA cancer expression data, demonstrating the advantage of using the predicted set of interactions for identifying more coherent and relevant modules, genes, and miRNAs. The complete set of predictions is available on the supporting website and can be used by any method that combines miRNAs, genes, and TFs. Code and the full set of predictions are available from the supporting website: http://cs.cmu.edu/~mruffalo/tf-mirna/. Contact: zivbj@cs.cmu.edu. Supplementary data are available at Bioinformatics online.
From Curves to Trees: A Tree-like Shapes Distance Using the Elastic Shape Analysis Framework.
Mottini, A; Descombes, X; Besse, F
2015-04-01
Trees are a special type of graph that can be found in various disciplines. In the field of biomedical imaging, trees have been widely studied as they can be used to describe structures such as neurons, blood vessels and lung airways. It has been shown that the morphological characteristics of these structures can provide information on their function, aiding the characterization of pathological states. Therefore, it is important to develop methods that analyze their shape and quantify differences between their structures. In this paper, we present a method for the comparison of tree-like shapes that takes into account both topological and geometrical information. This method, which is based on the Elastic Shape Analysis Framework, also computes the mean shape of a population of trees. As a first application, we have considered the comparison of axon morphology. The performance of our method has been evaluated on two sets of images. For the first set, we considered four different populations of neurons from different animals and brain sections from the NeuroMorpho.org open database. The second set was composed of a database of 3D confocal microscopy images of three populations of axonal trees (normal and two types of mutations) of the same type of neurons. We calculated the inter- and intra-class distances between the populations and embedded the distance in a classification scheme. We compared the performance of our method against three other state-of-the-art algorithms, and the results showed that the proposed method better distinguishes between the populations. Furthermore, we present the mean shape of each population. These mean shapes give a more complete picture of the morphological characteristics of each population than the average value of certain predefined features.
Schoeffl, Harald; Lazzeri, Davide; Schnelzer, Richard; Froschauer, Stefan M.
2013-01-01
Background Microsurgical techniques are considered standard procedures in reconstructive surgery. Although microsurgery by itself is defined as surgery aided by optical magnification, there are no guidelines for determining in which clinical situations a microscope or loupe should be used. Therefore, we conducted standardized experiments to objectively assess the impact of optical magnification in microsurgery. Methods Sixteen participants of microsurgical training courses had to complete 2 sets of experiments. Each set had to be performed with the unaided eye, surgical loupes, and a regular operating microscope. The first set of experiments included coaptation of a chicken femoral nerve, and the second set consisted of anastomosing porcine coronary arteries. Evaluation of the sutured nerves and vessels was performed by 2 experienced microsurgeons using an operating microscope. Results The 16 participants of the study completed all of the experiments. The nerve coaptation and vascular anastomosis exercises showed that error frequency increased as optical magnification decreased, meaning that the highest number of microsurgical errors occurred with the unaided eye. For nerve coaptation, there was a strong relationship (P<0.05) between the number of mistakes and magnification, and this relationship was very strong (P<0.01) for vascular anastomoses. Conclusions We were able to show that microsurgical success is directly related to optical magnification. The human eye's ability to discriminate potentially important anatomical structures is limited, which might be detrimental to clinical results. Although not legally mandatory, surgeries such as reparative surgery after hand trauma should be conducted with magnifying devices to achieve optimal patient outcomes. PMID:23532716